CN112750110A - Evaluation system for evaluating lung lesion based on neural network and related products

Info

Publication number
CN112750110A
CN112750110A (application number CN202110046474.XA)
Authority
CN
China
Prior art keywords
neural network
data
lung
image
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110046474.XA
Other languages
Chinese (zh)
Inventor
雷娜
王振常
侯代伦
李维
金连宝
任玉雪
吕晗
魏璇
张茗昱
陈伟
吴伯阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Beijing Friendship Hospital
Original Assignee
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Beijing Friendship Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhituo Vision Technology Co ltd, Dalian University of Technology and Beijing Friendship Hospital
Priority to CN202110046474.XA
Publication of CN112750110A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to an assessment system for assessing lung lesions based on a neural network, and to related products. The assessment system comprises a processing subsystem and a neural network subsystem: the processing subsystem comprises one or more processors, and the neural network subsystem comprises a first neural network unit and a second neural network unit. The first neural network unit receives and processes tensor data relating to geometric features of an image of a focal region of a lung to obtain target vector data. The second neural network unit receives and processes the target vector data to output an assessment result for assessing the lung focal zone. With the scheme of the invention, high-order geometric features of the lung focal zone can be extracted, and lung diseases, including novel coronavirus pneumonia (COVID-19), can be effectively evaluated and predicted.

Description

Evaluation system for evaluating lung lesion based on neural network and related products
Technical Field
The present invention generally relates to the field of image processing. More particularly, the present invention relates to an assessment system, computing device and computer-readable storage medium for assessing a focal zone of a lung based on a neural network.
Background
Images of lung lesion regions are well known to contain rich information that is helpful for the clinical diagnosis of lung diseases; it is therefore important to effectively extract and analyze the image features of lung lesion regions. The traditional approach is to extract imaging features of the lesion region and use them for subsequent analysis and research in order to evaluate the lesion region. However, how to effectively extract the features of the lesion region, and how to effectively evaluate and predict the lesion region based on those features, remains an urgent problem to be solved, especially when the lung lesion region includes a region infected by the novel coronavirus.
Disclosure of Invention
To solve at least the above technical problems, the present invention provides an apparatus for evaluating lung lesions based on a neural network model. In particular, the present invention uses neural network-based techniques to receive and process image data and to output assessment results for assessing lung lesion areas. Using the assessment results, the present approach may predict the development of the focal zone of the lung over time. To this end, the invention provides corresponding solutions in the following aspects.
In a first aspect, the invention provides an assessment system for assessing a focal zone of a lung based on a neural network model, comprising: one or more processors; a first neural network unit; a second neural network unit; and one or more computer-readable storage media storing program instructions implementing the first and second neural network units which, when executed by the one or more processors, cause: the first neural network unit to receive and process image data related to a lung focal zone image to obtain target vector data, wherein the image data comprises raw data related to the lung focal zone image and/or tensor data related to geometric features of the lung focal zone image; and the second neural network unit to receive and process the target vector data to output an assessment result for assessing the lung focal zone.
In one embodiment, the lung focal zone image is an image of a lung region infected with the novel coronavirus, and the first neural network unit comprises a plurality of encoders and a feature extractor, wherein: each encoder of the plurality of encoders comprises a plurality of convolutional layers configured to perform multi-layer convolution on the image data to obtain a plurality of feature vectors for different geometric features from the image data; and the feature extractor is configured to perform a feature fusion operation on the plurality of feature vectors to obtain the target vector data.
In one embodiment, the plurality of convolutional layers are connected in series, and the output of the last convolutional layer of the series connection is connected to the input of the feature extractor.
In one embodiment, the feature fusion operation comprises performing a data stitching operation on the plurality of feature vectors to output the target vector data.
In one embodiment, the second neural network unit comprises a long short-term memory neural network configured to receive and process the target vector data to output an assessment result for assessing the lung focal zone.
In one embodiment, the image data associated with the lung lesion image includes a plurality of sets of sub-image data associated with the lung lesion acquired at a plurality of different times.
In one embodiment, the tensor data comprises three-dimensional tensor data, and the one or more computer-readable storage media further store program instructions to obtain the three-dimensional tensor data, which when executed by the one or more processors, cause: generating a tetrahedral mesh based on the raw data; and determining geometric features using the tetrahedral mesh and representing the geometric features as three-dimensional tensor data.
In one embodiment, the geometric features include Ricci curvature, gradient, or mean curvature, and the assessment result includes lesion mass information of the lung lesion, the lesion mass information being used at least to predict or determine the severity of the condition and/or the trend of the condition of a patient infected with the novel coronavirus.
In a second aspect, the invention provides a computing device comprising an evaluation system as described above.
In a third aspect, the invention provides a computer-readable storage medium comprising a computer program for assessing a focal zone of a lung based on a neural network model, which, when executed by one or more processors of an apparatus, causes the apparatus to perform the operations of the assessment system as described above.
From the above description of the inventive arrangements in several aspects, those skilled in the art will appreciate that the inventive arrangements can efficiently utilize neural network techniques to analyze and evaluate image data so as to make reasonable assessments and predictions of the development of the lung lesions contained in the images. In one application scenario, when the lung lesion comprises a lesion infected with the novel coronavirus, the severity and likely progression of the novel coronavirus pneumonia can be predicted by evaluating it with the apparatus of the present invention, thereby enabling effective medical intervention for the patient. Further, the tensor data of the present invention comprise data extracted from the geometric features of the focal zone of the lung, making the resulting assessment more interpretable with respect to the patient's condition, and thus more accurate and more useful as a reference. In addition, the neural network unit of the invention fuses the data using a feature fusion operation, thereby effectively extracting and processing the features in the image data and improving the accuracy of prediction and evaluation.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. In the accompanying drawings, several embodiments of the present invention are illustrated by way of example and not by way of limitation, and like reference numerals designate like or corresponding parts throughout the several views, in which:
FIG. 1 is a diagram illustrating a system architecture for neural network-based assessment of a focal zone of a lung in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for processing images of a focal zone of a lung according to an embodiment of the present invention;
FIG. 3 is an exemplary flow chart illustrating a method for generating a two-dimensional mesh in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a tetrahedral mesh according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating a method of replacing voxel values with geometric feature values according to an embodiment of the invention;
FIG. 6 is an exemplary diagram illustrating a partial mesh vertex and its neighboring edges in accordance with an embodiment of the present invention;
FIG. 7 is a block diagram illustrating operation of a first neural network element, according to an embodiment of the present invention;
FIG. 8 illustrates a block diagram of the operation of an encoder according to an embodiment of the present invention;
FIG. 9 is a block diagram illustrating operation of a first neural network element and a second neural network element, according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating operation of a second neural network element, according to an embodiment of the present invention; and
FIG. 11 is a block diagram illustrating a computing device for assessing a focal zone of a lung in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It should be understood that the embodiments described herein are only some, and not all, of the embodiments of the invention, and are provided to facilitate a clear understanding of the inventive concepts and to satisfy legal requirements. All other embodiments that can be derived by a person skilled in the art from the embodiments disclosed in this specification without inventive effort fall within the scope of the present invention.
FIG. 1 is a diagram illustrating a system architecture for neural network-based assessment of a focal zone of a lung in accordance with an embodiment of the present invention.
As shown therein, the system includes a computed tomography (or "CT") machine 102 for scanning slices of a diseased or suspectedly diseased portion of a patient to obtain three-dimensional volumetric image data. In the context of the present invention, the diseased site may be the lung, particularly a region of the lung that may be, or already is, infected with the novel coronavirus. Thus, by scanning with the CT machine, the three-dimensional image data 104 of the present invention can be obtained, shown in the figure for a region of the lung infected with the novel coronavirus.
After obtaining the three-dimensional image data described above, the evaluation system 106 of the present invention (which is disposed on the computer shown in the figure) saves the three-dimensional image data in a memory. Although not shown in the figures, in some scenarios some pre-processing may also be performed on the three-dimensional image data before saving, including, for example, triangulating the three-dimensional image data to obtain, for example, a two-dimensional mesh.
As further shown in FIG. 1, the evaluation system 106 of the present invention may include a processing subsystem 112 and a neural network subsystem 114. In one embodiment, the processing subsystem includes one or more processors, which may include general-purpose processors ("CPUs") or dedicated graphics processors ("GPUs"). Further, the neural network subsystem 114 of the present invention may include a first neural network unit 112 and a second neural network unit 114.
As an example, the above-described first and second neural network units of the present invention may be implemented as program instructions stored on a computer-readable storage medium (not shown in the figures). Depending on the application scenario, there may be one or more computer-readable storage media, and each may be any type of storage medium capable of storing program instructions. During execution of the evaluation task of the present invention, the processor may execute the program instructions stored on the computer-readable storage medium, such that execution of the program instructions gives rise to the operations performed by the first and second neural network units of the present invention.
In particular, when the processor executes one or more of the program instructions described above, the first neural network unit of the present invention may be configured to receive and process image data associated with an image of a focal region of the lung to obtain target vector data. In one embodiment, the image data includes tensor data related to geometric features of a lung focal zone image (e.g., the lung focal zone image 104 shown on the left in FIG. 1). Accordingly, the second neural network unit of the present invention may be configured to receive and process the target vector data to output an assessment result (e.g., expressed in the form of a mass or a volume ratio) for assessing the lung lesion.
In one application scenario, the image data may also include raw data associated with the lung lesion region and/or two-dimensional data associated with geometric features of the lung lesion region. In one embodiment, the raw data for the image region associated with a lesion region in the lung of a patient may be obtained from CT image data acquired, for example, by a computed tomography technique or device. In one implementation scenario, the image data associated with the lung lesion image includes sets of sub-image data (shown as 104 in FIG. 1) associated with the lung lesion acquired at a plurality of different times.
Based on the foregoing description, the above computer-readable storage medium further stores program instructions for obtaining the tensor data which, when executed by the one or more processors, cause a tetrahedral mesh to be generated based on the raw data, and geometric features to be determined using the tetrahedral mesh and represented as three-dimensional tensor data.
In one embodiment, the first neural network unit in the inventive apparatus may comprise a plurality of encoders and a feature extractor (e.g., as shown in FIG. 7). In one implementation scenario, each of the aforementioned plurality of encoders may include a plurality of convolutional layers configured to perform multi-layer convolution on the image data to obtain a plurality of feature vectors for different geometric features from the image data. In one embodiment, the plurality of convolutional layers in each encoder may be connected in series, and the output of the last convolutional layer in the series may be connected to the input of the feature extractor. In one implementation scenario, the feature extractor may be configured to perform a feature fusion operation on the aforementioned plurality of feature vectors to obtain the target vector data. As an example, the feature fusion operation may include performing a data stitching operation on the plurality of feature vectors to output the target vector data.
In one embodiment, the second neural network unit of the inventive apparatus comprises a long short-term memory neural network ("LSTM") configured to receive and process the above-mentioned target vector data to output an evaluation result for evaluating the lung lesion. In one application scenario, the assessment result may include lesion mass information of the lung lesion, which is used at least to predict or determine the severity of the condition and/or the trend of the condition of a patient infected with the novel coronavirus.
The evaluation system of the present invention and the apparatus thereof are described above in connection with fig. 1. How the present invention performs the aforementioned extraction of geometric features will be described in detail below with reference to fig. 2-6.
FIG. 2 is a flow diagram illustrating a method 200 of processing images of a focal zone of a lung in accordance with multiple embodiments of the present invention. It is noted that the method 200 of the present invention may be implemented by various types of computing devices including, for example, a computer, and that the three-dimensional image data of the lung lesion region involved therein may be three-dimensional image data obtained by, for example, a computed tomography ("CT") technique or device. Further, the three-dimensional image data of the lung lesion region of the present invention is composed of cubic structures known as volume elements ("voxels").
As known to those skilled in the art, voxels are used mainly in the fields of three-dimensional imaging, scientific data, and medical imaging; a voxel is the smallest unit of digital data that can be segmented and identified in three-dimensional space. Further, the values of the voxels (simply "voxel values") may represent different characteristics. For example, in a CT image the voxel value is expressed in Hounsfield units ("HU").
As shown in FIG. 2, at step 202 the method 200 may acquire three-dimensional image data of a lesion region of the lung. In one embodiment, the three-dimensional image data may be acquired by a device supporting the CT technique, and the voxel values of the present invention are obtained by calculation. In this case, the voxel value is the gray value of the image (i.e., the gray value to which the discussion in the embodiments below refers). Additionally, the foregoing gray values may be converted to obtain CT values in Hounsfield units as described above.
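As an illustrative sketch of the gray-value-to-HU conversion mentioned above (the patent does not give the conversion itself; the linear rescale form and the DICOM-style slope/intercept parameters below are assumptions based on common CT practice):

```python
import numpy as np

def gray_to_hu(gray: np.ndarray, rescale_slope: float = 1.0,
               rescale_intercept: float = -1024.0) -> np.ndarray:
    """Convert raw scanner gray values to Hounsfield units (HU).

    The linear rescale convention (HU = gray * slope + intercept) and the
    default parameter values follow common CT/DICOM practice; they are
    assumptions, as the patent does not specify the conversion.
    """
    return gray.astype(np.float32) * rescale_slope + rescale_intercept

# Example: a 512 x 512 slice of raw gray values.
slice_gray = np.random.randint(0, 4096, size=(512, 512))
slice_hu = gray_to_hu(slice_gray)
```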
After obtaining the three-dimensional image data based on, for example, the CT technique discussed above, at step 204 the method 200 may generate a tetrahedral mesh connected by a plurality of vertices from the three-dimensional image data of the lung lesion area. In one embodiment, the operation of generating the tetrahedral mesh may include first generating the boundary of the tetrahedral mesh and then generating the internal vertices of the tetrahedral mesh. In this case, the boundary of the tetrahedral mesh may be a two-dimensional mesh generated from the boundary of the three-dimensional image data, and the internal vertices of the tetrahedral mesh may be the vertices of the voxels. Using the generated two-dimensional mesh as the boundary of the tetrahedral mesh provides prior information about the external surface of the tetrahedral mesh, which accelerates the generation of the tetrahedral mesh. Further, the embodiment of the present invention can accurately describe the shape of the lung lesion region by constructing (or reconstructing) the tetrahedral mesh. Further, the present invention can locate higher-order geometric parameters (e.g., the gradient) on the tetrahedral mesh to provide more accurate data for subsequent analysis of the lung lesion area. In one embodiment, the operation of generating the tetrahedral mesh may also be carried out automatically by a software package; for example, a software package providing a three-dimensional constrained Delaunay triangulation ("CDT") function may be used to generate the tetrahedral mesh directly.
After the tetrahedral mesh is generated at step 204, flow proceeds to step 206. At step 206, the method 200 determines the geometric feature value at each vertex using the voxel value at that vertex. As mentioned before, the voxel value at a vertex can be obtained directly by a device or apparatus supporting, for example, the CT technique, and the obtained voxel value is usually a gray value of the CT image (i.e., the lung lesion region image in the embodiments of the present invention), which may be any value between 0 and 255. According to one or more embodiments of the invention, the aforementioned geometric features may include, but are not limited to, Ricci curvature, gradient, or mean curvature. Next, at step 208, the method 200 replaces the voxel values with the geometric feature values, thereby achieving geometric feature extraction for the lung lesion region.
According to different implementation scenarios, the geometric feature of the present invention may be one of the above-mentioned Ricci curvature, gradient, or mean curvature, so that the Ricci curvature value, gradient value, or mean curvature value at each vertex is calculated accordingly. On this basis, the Ricci curvature value, gradient value, or mean curvature value at a vertex of the tetrahedral mesh obtained at step 206 is used as the gray value at that vertex, replacing the voxel value. In one embodiment, the Ricci curvature value, gradient value, or mean curvature value obtained here may be tensor data having multiple dimensions, such as three-dimensional tensor data, for use in operations such as feature extraction by deep convolutional networks.
The geometric feature extraction for the focal zone of the lung of the present invention is described above in connection with FIG. 2. Based on the above description, those skilled in the art will appreciate that the present invention computes geometric feature values, such as the Ricci curvature value, gradient value, or mean curvature value, by reconstructing a tetrahedral mesh for the lesion area and operating on the voxel values at the vertices of the tetrahedral mesh. Further, the voxel values at the vertices are replaced with the obtained geometric feature values, thereby extracting the geometric features. In one embodiment, the geometric features may be represented as three-dimensional tensor data for subsequent study and analysis, including, for example, training and predictive evaluation of neural network models.
With the above arrangement, the present invention extracts high-order geometric features from images of lung lesion regions (including novel coronavirus pneumonia lesions), so that the obtained geometric feature data contain richer feature information and can reflect the intrinsic geometric attributes of the lung lesion image region. Meanwhile, compared with traditional feature extraction, the high-order geometric features extracted from the lung lesion region image are more interpretable for evaluating lung diseases from multiple aspects and angles. Further, by using the high-order geometric features obtained by the method, represented as three-dimensional tensors, as training data for a machine learning algorithm such as a deep neural network, a prediction model for the development trend of the lesion can be trained, so that the development of the lesion area can be accurately predicted and effective human intervention can be performed.
FIG. 3 is an exemplary flow diagram illustrating a method 300 for generating a two-dimensional mesh in accordance with multiple embodiments of the present invention. In conjunction with the description of FIG. 2 above, those skilled in the art will appreciate that the boundary of the tetrahedral mesh may be a two-dimensional mesh generated from the boundary of the three-dimensional image data. Thus, the present invention proposes to obtain the aforementioned two-dimensional mesh using the method 300 illustrated in FIG. 3. It should be noted that the method 300 is a specific implementation of some of the steps of the method 200 shown in FIG. 2, and therefore the corresponding description of the method 200 also applies to the following discussion of the method 300.
As shown in FIG. 3, at step 302 the method 300 marks the three-dimensional image region, e.g., a region including novel coronavirus pneumonia, with a Boolean variable. In one implementation scenario, one skilled in the art will appreciate that generating the two-dimensional mesh is essentially mesh generation for the outer surface of Boolean-variable (bool) data. Specifically, the three-dimensional image region of the lesion region may be marked and expressed as $f(\Omega)$, where f is a smooth function and $\Omega$ is the region in which the three-dimensional image exists. Next, at step 304, the method 300 generates the two-dimensional mesh from the marked three-dimensional image region. For example, based on the aforementioned marked three-dimensional image region $f(\Omega)$, $f^{-1}(1)$ represents the internal voxels, $f^{-1}(0)$ represents the external voxels, and $f^{-1}(a)$ (where $0 < a < 1$) represents the boundary of the three-dimensional image data; the a-iso-surface mesh of the function f is computed by smooth interpolation. In one implementation scenario, the foregoing interpolation may be performed using, for example, the Computational Geometry Algorithms Library ("CGAL") to generate the two-dimensional mesh.
After the two-dimensional mesh is generated, it may be subjected to the above-mentioned CDT processing so as to ensure consistency between the boundaries of the two-dimensional mesh and of the tetrahedral mesh; that is, the generated two-dimensional mesh becomes exactly the boundary of the tetrahedral mesh. Further, the skilled person may also place a stronger constraint on the tetrahedral mesh, namely using the voxel vertices of the three-dimensional data as the internal vertices of the tetrahedral mesh. Thus, a tetrahedral mesh formed by connecting a plurality of vertices is finally generated based on the obtained two-dimensional mesh and the voxel vertices. For ease of understanding, FIG. 4 shows an exemplary schematic diagram of a portion of a tetrahedral mesh generated according to an embodiment of the present invention. From the obtained tetrahedral mesh, the gray values at the mesh vertices may be used to determine the geometric feature values at those vertices. In one embodiment, the geometric feature value may be a Ricci curvature value, a gradient value, or a mean curvature value. Further, the voxel values at the vertices of the tetrahedral mesh are replaced with the aforementioned geometric feature values.
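As a minimal sketch of the boundary-mesh step, assuming scikit-image's marching-cubes routine as a stand-in for the CGAL iso-surface computation named above (the function choice and parameters are assumptions, not the patent's actual pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def boundary_surface_mesh(mask: np.ndarray, a: float = 0.5, sigma: float = 1.0):
    """Extract the a-iso-surface (0 < a < 1) of a Boolean lesion mask.

    `mask` plays the role of f: 1 inside the lesion region, 0 outside.
    Gaussian smoothing approximates the smooth interpolation described
    above; marching cubes stands in for the CGAL iso-surface step.
    """
    f = gaussian_filter(mask.astype(np.float32), sigma=sigma)
    verts, faces, normals, values = measure.marching_cubes(f, level=a)
    return verts, faces

mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:44, 20:44, 20:44] = 1  # toy "lesion" region
verts, faces = boundary_surface_mesh(mask)
```

The resulting two-dimensional surface mesh can then serve as the boundary constraint of the tetrahedral mesh, with the voxel vertices as its internal vertices, as described above.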
Fig. 5 is a flow diagram illustrating a method 500 for replacing voxel values with geometric feature values in accordance with multiple embodiments of the present invention. It should be understood that the method 500 is a specific implementation of some of the steps of the method 200 shown in fig. 2, and therefore the corresponding description of the method 200 also applies to the method 500.
As described above, after generating a tetrahedral mesh formed by connecting a plurality of vertices, at step 502 the method 500 may calculate a Ricci curvature value, a gradient value, or a mean curvature value at the vertices of the generated tetrahedral mesh from the voxel values at those vertices. In one implementation scenario, the Ricci curvature value may be calculated by the mathematical operations described below. First, the weight f(e) of an edge e joining two adjacent vertices of the tetrahedral mesh can be defined and expressed as:

$$f(e) = \omega_e \left( \frac{\omega_{v_1}}{\omega_e} + \frac{\omega_{v_2}}{\omega_e} - \sum_{e_{v_1} \sim e} \frac{\omega_{v_1}}{\sqrt{\omega_e\, \omega_{e_{v_1}}}} - \sum_{e_{v_2} \sim e} \frac{\omega_{v_2}}{\sqrt{\omega_e\, \omega_{e_{v_2}}}} \right) \tag{1}$$

where $\omega_e$ denotes the weight of the edge e, $\omega_{v_1}$ and $\omega_{v_2}$ denote the weights of the vertices $v_1$ and $v_2$ respectively, $e_{v_1} \sim e$ ranges over all edges adjacent to the vertex $v_1$ (excluding the edge e), and $e_{v_2} \sim e$ ranges over all edges adjacent to the vertex $v_2$ (excluding the edge e). To facilitate understanding, FIG. 6 illustrates an exemplary diagram of a portion of mesh vertices and their neighboring edges in accordance with various embodiments of the invention.
As shown in FIG. 6, $v_1$ and $v_2$ represent two vertices of a common edge in the generated tetrahedral mesh, and e is the edge connecting the vertex $v_1$ and the vertex $v_2$. Further, the vertex $v_1$ is also adjoined by the edges $e_{v_1}^{1}$, $e_{v_1}^{2}$ and $e_{v_1}^{3}$; similarly, the vertex $v_2$ is also adjoined by the edges $e_{v_2}^{1}$ and $e_{v_2}^{2}$. In one embodiment, the weight of the vertex $v_1$ is defined as $\omega_{v_1}$ and the weight of the vertex $v_2$ is defined as $\omega_{v_2}$, where the weights $\omega_{v_1}$ and $\omega_{v_2}$ may be the voxel values (i.e., gray values) at the vertex $v_1$ and the vertex $v_2$. Thus, based on the weights $\omega_{v_1}$ and $\omega_{v_2}$ of the vertex $v_1$ and the vertex $v_2$, the weight $\omega_e$ of the common edge e can be obtained as a function of $\omega_{v_1}$ and $\omega_{v_2}$ (equation (2)).
The weight f(e) of each edge joining adjacent vertices of the tetrahedral mesh can thus be obtained by combining the above equation (1) and equation (2). Based on the weights f(e) obtained in the foregoing, the Ricci curvature Ric at each vertex can further be obtained according to the following formula:

$$\mathrm{Ric}(v) = \frac{1}{\deg(v)} \sum_{e_v \sim v} f(e_v) \tag{3}$$

In the above formula (3), $e_v$ denotes an edge adjacent to the vertex v, $e_v \sim v$ ranges over all edges adjacent to the vertex v, and $\deg(v)$ denotes the number of edges adjacent to the vertex v. In this case, the computed Ricci curvature value is a single numerical value.

In another embodiment, the weights of the edges adjoining the vertices of the tetrahedral mesh may be calculated based only on equations (1) and (2) above. For example, the weights along the three mutually orthogonal axes (i.e., the x-axis, y-axis and z-axis) at each vertex may be calculated separately, and the weights of the three axes may be taken as the Ricci curvature value. The aforementioned three axial weights constitute tensor data of a three-dimensional tensor; thus, the Ricci curvature values may be represented as a three-dimensional tensor.
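For illustration only, the sketch below evaluates equation (1) for every edge and equation (3) for every vertex of a small edge graph, using voxel gray values as the vertex weights. Since equation (2) is not reproduced above, the edge weight $\omega_e$ is taken here as the mean of the two vertex weights purely as a placeholder assumption:

```python
import math
from collections import defaultdict

def forman_ricci(vertices_w, edges):
    """Edge weights f(e) per equation (1), vertex curvature per equation (3).

    vertices_w: dict vertex -> weight (voxel gray value).
    edges: list of (v1, v2) pairs.
    ASSUMPTION: w_e = (w_v1 + w_v2) / 2 stands in for equation (2),
    which is not reproduced in this text.
    """
    adj = defaultdict(set)
    for v1, v2 in edges:
        adj[v1].add((v1, v2))
        adj[v2].add((v1, v2))
    w_edge = {e: 0.5 * (vertices_w[e[0]] + vertices_w[e[1]]) for e in edges}

    def f(e):
        v1, v2 = e
        we = w_edge[e]
        s1 = sum(vertices_w[v1] / math.sqrt(we * w_edge[e1])
                 for e1 in adj[v1] if e1 != e)   # edges at v1, excluding e
        s2 = sum(vertices_w[v2] / math.sqrt(we * w_edge[e2])
                 for e2 in adj[v2] if e2 != e)   # edges at v2, excluding e
        return we * (vertices_w[v1] / we + vertices_w[v2] / we - s1 - s2)

    # Equation (3): average f over the deg(v) edges adjacent to each vertex.
    return {v: sum(f(e) for e in adj[v]) / len(adj[v]) for v in adj}

w = {"v1": 120.0, "v2": 90.0, "v3": 60.0, "v4": 30.0}
e = [("v1", "v2"), ("v1", "v3"), ("v2", "v4"), ("v3", "v4")]
ric = forman_ricci(w, e)  # one scalar Ricci curvature value per vertex
```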
The foregoing describes exemplarily how the Ricci curvature value is computed. Regarding the gradient values among the above-mentioned geometric feature values, in one implementation scenario the tetrahedral mesh may first be convolved with a Gaussian function, the gradient may be calculated based on the convolved tetrahedral mesh, and the modulus length of the obtained gradient may then be computed. Mathematically, the gradient value of the tetrahedral mesh can be expressed as

$$\left\lVert \nabla \left( G * f \right) \right\rVert$$

where G denotes a Gaussian distribution with variance $\sigma$, $*$ denotes convolution, and f denotes the voxel values (i.e., gray values) on the tetrahedral mesh. For the Gaussian convolution operation, one skilled in the art can perform the calculation by directly calling a Gaussian filter function in image processing software (e.g., MATLAB). It is to be understood that in this case the gradient value obtained at each vertex is a real number. One skilled in the art can also calculate the partial derivatives along the three axes (i.e., the x-axis, y-axis and z-axis) at each vertex and take them as tensor data in the three dimensions of a three-dimensional tensor. Thus, the gradient values can also be expressed as a three-dimensional tensor.
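A minimal sketch of the gradient feature, assuming a regular voxel grid in place of the tetrahedral mesh and SciPy's Gaussian filter in place of the MATLAB call mentioned above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_feature(volume: np.ndarray, sigma: float = 1.0):
    """Compute ||grad(G * f)|| and the per-axis partial derivatives."""
    smoothed = gaussian_filter(volume.astype(np.float32), sigma=sigma)
    dx, dy, dz = np.gradient(smoothed)           # partial derivatives on 3 axes
    magnitude = np.sqrt(dx**2 + dy**2 + dz**2)   # modulus length: one real value
    return magnitude, (dx, dy, dz)

vol = np.random.rand(64, 64, 64).astype(np.float32)
grad_mag, grad_xyz = gradient_feature(vol)
```

Here `grad_mag` corresponds to the real-valued gradient at each vertex, while the `(dx, dy, dz)` components correspond to the three-dimensional tensor representation described above.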
Further, regarding the mean curvature involved in the above-mentioned geometric feature values, in yet another implementation scenario the lesion area image is taken as a function F, and the normal vector of the iso-surface on which a vertex x lies is

$$\vec{N} = \frac{\nabla F}{\left\lVert \nabla F \right\rVert}$$

The mean curvature K at the vertex x can then be defined as:

$$K = \operatorname{div}\left( \frac{\nabla F}{\left\lVert \nabla F \right\rVert} \right) \tag{4}$$

An iso-surface is understood to be a surface consisting of the collection of points having the same gray value; three-dimensional data can accordingly be viewed as a collection of a plurality of such iso-surfaces. It is to be understood that the mean curvature obtained from the foregoing definition is a real number. Thus, the mean curvature can be directly expressed by the real number calculated based on equation (4).
Returning to FIG. 5, after the Ricci curvature value, gradient value, or mean curvature value is obtained as described above, the method 500 proceeds to step 504. At step 504, the method 500 replaces the voxel value at each vertex with the Ricci curvature value, gradient value, or mean curvature value. Specifically, the voxel value at each vertex may be replaced by the Ricci curvature value, gradient value, or mean curvature value obtained above, where the Ricci curvature value and gradient value may be expressed as three-dimensional tensors. The invention thus realizes the extraction of the geometric features of the lung lesion area, in particular of the novel coronavirus pneumonia lesion area, by determining the Ricci curvature value, gradient value, or mean curvature value.
In conjunction with the above description, the embodiment of the present invention extracts higher-order geometric feature values, such as the Ricci curvature value, gradient value, or mean curvature value, for the lung lesion region, and represents the Ricci curvature value and gradient value as three-dimensional tensors. Further, the voxel values of the lung lesion area are replaced by the geometric feature values, thereby facilitating subsequent analysis and research. In one implementation scenario, a person skilled in the art may apply the obtained geometric feature values of images including novel coronavirus pneumonia as a data source to an artificial intelligence architecture such as a neural network; after training or deep learning, a prediction model for the development trend of the lung lesion region may be obtained. Therefore, the high-order geometric features of the invention can be used to accurately predict the development of the lung lesion region, so that medical personnel can treat it in a timely and effective manner.
After acquiring image data including the geometric features (e.g., three-dimensional tensor data) based on the extraction method described in conjunction with FIGS. 2 to 6, or after acquiring raw data by the CT technique, the acquired image data usually needs to be preprocessed, because the image data are usually represented by gray values in the range of 0 to 255. In one embodiment, the present invention proposes normalizing the gray values of the image data to floating-point numbers between 0 and 1 using the max-min criterion. Next, the first neural network unit of the present invention receives the preprocessed image data and processes it to obtain target vector data. In an implementation scenario in which the geometric feature extraction scheme of the present invention is applied, the aforementioned image data may also be one-dimensional data and/or three-dimensional data related to geometric features of the target image region.
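The max-min normalization mentioned above reduces, as a sketch, to the following:

```python
import numpy as np

def max_min_normalize(data: np.ndarray) -> np.ndarray:
    """Rescale gray values (e.g., 0-255) to floating-point numbers in [0, 1]."""
    lo, hi = float(data.min()), float(data.max())
    if hi == lo:  # degenerate case: constant image
        return np.zeros_like(data, dtype=np.float32)
    return ((data - lo) / (hi - lo)).astype(np.float32)
```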
Fig. 7 is a block diagram illustrating the operation of the first neural network unit 112, according to an embodiment of the present invention. It should be understood that the first neural network element shown in fig. 7 is a specific embodiment of the first neural network element in the evaluation system 106 shown in fig. 1. Thus, the relevant details and features of the evaluation system 106 described with respect to FIG. 1 also apply to the description of FIG. 7.
As shown in the figure, different types of image data are indicated in the left dashed box: from top to bottom, raw data 701, three-dimensional data 702, two-dimensional data 703, and one-dimensional data 704. In one embodiment, the one-dimensional data may be stored in TXT format and may be of size 1 x 400 (i.e., one row of 400 values); the two-dimensional data may be stored in a picture (e.g., png) format and may have a pixel size of, for example, 256 x 256; and the three-dimensional data may be stored in nii format and may be of size 512 x 512. As described above, the present invention proposes normalizing the image data using the max-min criterion, which leaves the format and size of the processed data unchanged.
After receiving the image data (e.g., normalized preprocessed image data), the first neural network unit 112 of the present invention first passes the image data through different encoders to extract feature vectors corresponding to different types of image data.
Specifically, the encoder 1 processes the raw data 701 to output the feature vector 701-1. Similarly, the three-dimensional data 702, the two-dimensional data 703, and the one-dimensional data 704 can be processed by the encoder 2, the encoder 3, and the encoder 4 to obtain the corresponding feature vector 702-1, feature vector 703-1, and feature vector 704-1. It is to be understood that the dimensionalities of the image data and the number of encoders shown in FIG. 7 are merely exemplary and not limiting, and that other image data formats or types may be selected by one skilled in the art as desired. For example, in some application scenarios, the raw data or any one of the one-dimensional to three-dimensional data may be employed for evaluation. In other application scenarios, any two or more of the foregoing raw data and one-dimensional to three-dimensional data may be combined for evaluation. Therefore, the present invention is not limited in terms of data format and data usage. Similarly, the present invention does not impose any limitation on the number and types of encoders corresponding to the aforementioned data formats.
In one embodiment, the encoder of the present invention may be implemented by convolutional layers (or convolutional operators) in a neural network. In one implementation scenario, the encoding operation on the data may be implemented by a layer structure including two convolutional layers and one adaptive convolutional layer as illustrated in fig. 8 to obtain the feature vector data as described above, which is described in detail below.
FIG. 8 illustrates an operational block diagram of an encoder 800 according to an embodiment of the present invention. It is to be understood that the encoder 800 may be any one of the encoders 1-4 of FIG. 7. As shown, the encoder 800 may include a convolutional layer 801, a convolutional layer 802, and an adaptive convolutional layer 803. Assuming that the data on the left of the figure are the two-dimensional data 703 of the above figure (e.g., a picture represented by extracted Gaussian curvature, mean curvature, or conformal factor), the encoder 800 corresponds to the encoder 3 in FIG. 7. The two-dimensional data 703 thus undergo a first convolution in the convolutional layer 801 of the encoder 800. Next, a second convolution is performed in the convolutional layer 802, and optionally a third convolution is performed in the adaptive convolutional layer 803, to obtain the feature vector 703-1. Similarly, the feature vector 701-1, the feature vector 702-1, and the feature vector 704-1 of FIG. 7 may be obtained by the encoder processing described above.
According to the practical application scenario, the first two convolutional layers in the above-described encoder of the present invention may perform convolution operations using 128 and 64 convolution kernels, respectively. In this case, their inputs may be feature maps of size 256 x 256 and 128 x 128, respectively, and their outputs may be feature maps of size 128 x 128 and 64 x 64, respectively. The third convolutional layer may be an adaptive convolutional layer using 32 convolution kernels, and its output is a feature map of size 32 x 32. Here, the purpose of adding the adaptive convolutional layer is only to fix the output size of the encoder; that is, the encoder of the present disclosure always outputs a feature map of fixed size, such as the aforementioned 32 x 32 feature map. On this basis, those skilled in the art can appreciate that the adaptive convolutional layer of the present invention is optional; in some other application scenarios it may be omitted or replaced with another convolutional layer. Further, the convolution kernels in the neural network of the present invention may each be 3 x 3 arrays in size and may be initialized with a uniform distribution.
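A hedged PyTorch sketch of one such encoder, following the layer counts and kernel numbers in the paragraph above; realizing the size-fixing "adaptive" stage with adaptive average pooling is an assumption, since the patent only states that the third layer fixes the output size:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Two convolutional layers (128 and 64 kernels) plus a size-fixing stage."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 128, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(128, 64, kernel_size=3, stride=2, padding=1)
        # Third stage: 32 kernels; output fixed to 32 x 32 (pooling is an
        # assumed realization of the "adaptive" behavior described above).
        self.conv3 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
        self.fix = nn.AdaptiveAvgPool2d((32, 32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv1(x))   # 256x256 -> 128x128
        x = torch.relu(self.conv2(x))   # 128x128 -> 64x64
        return self.fix(self.conv3(x))  # -> 32-channel 32x32 feature map

enc = Encoder()
feat = enc(torch.randn(1, 1, 256, 256))  # torch.Size([1, 32, 32, 32])
```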
In conjunction with the descriptions of FIGS. 7 and 8, those skilled in the art will understand that the convolutional layers in each encoder of the first neural network unit of the present invention are connected in series, and that the output of the last convolutional layer in the series is connected to the input of the feature extractor of the first neural network unit (i.e., the feature extractor 705 in FIG. 7). With the convolutional layer structure shown in FIG. 8, the output of the adaptive convolutional layer 803 in the encoder is connected to the input of the feature extractor, so that the feature extractor can perform a data stitching operation on the plurality of vector data to obtain the target vector data.
Referring to FIG. 7, the feature extractor 705 is shown performing a feature fusion operation (i.e., the operation in the middle dashed box in the figure) on, for example, the four feature vectors described above. Specifically, the feature vector 701-1, the feature vector 702-1, the feature vector 703-1, and the feature vector 704-1 are each convolved once to obtain respective convolution results, and the convolution results are then fused (for example, concatenated) for each feature vector. For example, the feature vector 701-2 is obtained by concatenating the convolution results of the feature vector 702-1, the feature vector 703-1, and the feature vector 704-1 with the convolution result of the feature vector 701-1. Similarly, the convolution results of the feature vector 701-1, the feature vector 703-1, and the feature vector 704-1 are concatenated with the convolution result of the feature vector 702-1 to obtain the feature vector 702-2; the feature vector 703-2 and the feature vector 704-2 are obtained in the same way. Then, the feature vector 701-2, the feature vector 702-2, the feature vector 703-2, and the feature vector 704-2 are convolved and fused in this manner a plurality of further times (e.g., twice) to obtain the feature vector 701-10, the feature vector 702-10, the feature vector 703-10, and the feature vector 704-10, which are concatenated to form the target vector data 706.
In one implementation scenario, the convolution kernel used in the above convolutions may have a size of 3 x 3, or 1 x 1 when a feature vector is convolved with itself, and the number of convolution rounds may be three; the present invention is not limited in this respect. In addition, the dimension of the target vector data can be set according to requirements, and the invention is likewise not limited here. For example, the target vector data obtained in the present invention may be 1024-dimensional, relating to the image data obtained from one CT scan of the patient. In a scenario applied to focal zone image analysis, a plurality of CT images of a patient at different periods may be acquired, and based on the operations described above for the first neural network unit, a plurality of target vector data for the different periods may be obtained as the input of the second neural network unit, for example as shown in FIG. 9.
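A simplified sketch of the cross-branch fusion described above (one fusion round shown; the number of rounds, the pooling used to keep the example small, and the final linear projection to 1024 dimensions are assumptions):

```python
import torch
import torch.nn as nn

class FusionRound(nn.Module):
    """One fusion round: convolve each branch, then concatenate each branch's
    result with the convolution results of the other branches."""

    def __init__(self, channels: int, n_branches: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(n_branches))

    def forward(self, feats):
        convolved = [torch.relu(conv(f)) for conv, f in zip(self.convs, feats)]
        # Branch i keeps its own result first, followed by the other branches'.
        return [torch.cat([convolved[i]] +
                          [convolved[j] for j in range(len(feats)) if j != i],
                          dim=1)
                for i in range(len(feats))]

# Four encoder outputs (raw, 3-D, 2-D, 1-D branches), each a 32-channel map.
feats = [torch.randn(1, 32, 32, 32) for _ in range(4)]
fused = FusionRound(32)(feats)               # each branch now has 4 * 32 channels
pool = nn.AdaptiveAvgPool2d((4, 4))          # assumption: shrink before flattening
flat = torch.cat([pool(f).flatten(1) for f in fused], dim=1)
target_vector = nn.Linear(flat.shape[1], 1024)(flat)  # 1024-d target vector
```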
FIG. 9 is a block diagram illustrating the operation of the first neural network unit and the second neural network unit according to an embodiment of the present invention. The leftmost side of the figure shows image data of a patient acquired at n times (for example, CT1 image data, CT2 image data, ..., CTn image data); the first neural network unit 112 receives and processes these image data to obtain the target vector data 901 at time T1, the target vector data 902 at time T2, ..., and the target vector data 910 at time Tn. Next, the second neural network unit 114 receives the plurality of target vector data, processes them, and finally obtains the evaluation result for the lesion area. In one embodiment, the second neural network unit may be a long short-term memory neural network ("LSTM"), such as that shown in FIG. 10.
FIG. 10 is a schematic diagram illustrating the operation of the second neural network unit 114 according to an embodiment of the present invention. As described above, the second neural network unit of the present invention may, in one implementation scenario, be implemented as an LSTM neural network and may include an input layer, one or more hidden layers, and an output layer.
As shown in FIG. 10, the image data of a patient at the CT scan times t-1, t, and t+1 are feature-fused by the first neural network unit to obtain the target vector data $X_{t-1}$, $X_t$, and $X_{t+1}$, respectively. The LSTM neural network takes the target vector data $X_{t-1}$, $X_t$, and $X_{t+1}$ as input, and uses the target vector data $X_t$ together with the memory $S_{t-1}$ of the patient's (t-1)-th CT at the previous time step to calculate the memory $S_t$ of the patient's t-th CT at the current time step. Similarly, the memory $S_{t+1}$ of the patient's (t+1)-th CT may be calculated based on the memory $S_t$ of the patient's t-th CT. In an implementation scenario, the foregoing operations (which involve, for example, adjusting the weight U from the input layer to the hidden layer, the weight V from the hidden layer to the output layer, and the weight W from the hidden layer at the previous time step to the hidden layer at the current time step) may be repeated multiple times, computing the memory at all time steps, so as to finally obtain the output Ot, where Ot is the evaluation result. Generally, the aforementioned evaluation result may be expressed as a mass (e.g., the mass of the lesion region) and a volume ratio (the ratio of the volume of the lesion region to that of the entire image). By analyzing the evaluation results, the development of the target feature in the image data over time can be determined. For example, when the image data contain a lesion region of novel coronavirus pneumonia, the current state and development of the disease can be evaluated and predicted.
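A minimal PyTorch sketch of the second neural network unit: an LSTM consumes the sequence of per-CT target vectors and a linear head emits the two evaluation quantities; the two-output head for (mass, volume ratio) is an assumption based on the form of the evaluation result described above.

```python
import torch
import torch.nn as nn

class LesionLSTM(nn.Module):
    """LSTM over per-CT 1024-d target vectors; outputs (mass, volume ratio)."""

    def __init__(self, input_dim: int = 1024, hidden_dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # assumed: [lesion mass, volume ratio]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_timepoints, 1024); the memory S_t is carried internally.
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # evaluation result Ot from the last step

seq = torch.randn(1, 3, 1024)  # target vectors for CT_{t-1}, CT_t, CT_{t+1}
ot = LesionLSTM()(seq)         # shape (1, 2)
```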
For example, with the aforementioned high-order geometric features of the present invention extracted from a focal zone containing a novel coronavirus pneumonia region, a user of the device of the present invention (e.g., a healthcare professional) can determine the severity of the patient's current condition by analyzing the mass or volume ratio of the lung infection region reflected in the assessment result. Further, since the neural network unit of the present invention processes data along the time dimension, the user can also evaluate the progression of the patient's illness based on the evaluation results. For example, when the mass or volume ratio tends to decrease, it can be judged that the patient is expected to recover within a certain period of time. In contrast, when the mass or volume ratio tends to increase, it can be judged that the condition of the patient may deteriorate further to some extent. In this case, the medical staff can give the patient the necessary treatment in time so as to control the progress of the disease and prevent further deterioration.
Although the training process of the neural network units of the present invention is not discussed above, it can be understood by those skilled in the art, based on the disclosure of the present invention, that the neural network units can be trained with training data, thereby obtaining neural network units with high accuracy. For example, in the forward propagation pass of neural network training, the present invention can train its neural network units using the image data including geometric features (e.g., the three-dimensional tensor data) obtained as described in connection with FIGS. 2 to 6, and compare the training result with the expected result (or true value) to obtain a corresponding loss function. Further, in the back propagation pass of neural network training, the present invention uses the obtained loss function to update the weights (e.g., the weights U, V, and W in FIG. 10) based on, for example, a gradient descent algorithm, so as to reduce the error between the output Ot and the true value.
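As a sketch of the training pass just described (reusing the LesionLSTM sketch above; the MSE loss and the SGD optimizer are assumptions, since the patent only states that the output is compared with the expected value and the weights are updated by gradient descent):

```python
import torch
import torch.nn as nn

model = LesionLSTM()                     # defined in the sketch above
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                 # assumed loss against the true values

seq = torch.randn(8, 3, 1024)            # batch of geometric-feature sequences
truth = torch.rand(8, 2)                 # true (mass, volume ratio) labels

pred = model(seq)                        # forward propagation
loss = criterion(pred, truth)            # compare with the expected result
optimizer.zero_grad()
loss.backward()                          # back propagation
optimizer.step()                         # gradient-descent update of U, V, W
```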
With the image evaluation system according to the embodiments of the present invention, the first neural network unit performs feature fusion on the image data to obtain target vector data, and the second neural network unit processes the target vector data to obtain the evaluation result for the image. For example, a CT image of a patient may be input into the image evaluation system of the present invention, thereby directly obtaining the evaluation result (e.g., mass and volume ratio) for the patient's lesion region. The mass or volume ratio can then be related to the patient's condition and used to predict the development trend of the lesion area so that manual intervention can be performed.
FIG. 11 is a block diagram illustrating an apparatus 1100 for assessing a lung lesion based on a neural network model in accordance with an embodiment of the present invention. As shown in FIG. 11, the device 1100 may include a central processing unit ("CPU") 1111, which may be a general-purpose CPU, a special-purpose CPU, or another execution unit on which information is processed and programs run. Further, the device 1100 may also include a mass storage device 1112 and a read-only memory ("ROM") 1113, where the mass storage device 1112 may be configured to store various types of data, including, for example, various image data associated with the lesion area, algorithm data, intermediate results, and the various programs needed to operate the device 1100. The read-only memory ("ROM") 1113 may be configured to store the power-on self-test for the device 1100, the initialization of the various functional blocks in the system, drivers for the basic input/output of the system, and the data required to boot the operating system.
Optionally, device 1100 may also include other hardware platforms or components, such as the illustrated tensor processing unit ("TPU") 1114, graphics processing unit ("GPU") 1115, field programmable gate array ("FPGA") 1116, and machine learning unit ("MLU") 1117. It is to be understood that although various hardware platforms or components are illustrated in the apparatus 1100 of the present invention, such is for illustration and not limitation, and one skilled in the art can add or remove corresponding hardware as may be required. For example, the apparatus 1100 may include only a CPU to implement the lung lesion assessment operation of the present invention.
In some embodiments, to facilitate the transfer and interaction of data with external networks, the device 1100 of the present invention further includes a communication interface 1118 through which it may be connected to a local area network/wireless local area network ("LAN/WLAN") 1105, and in turn to a local server 1106 or to the Internet 1107 through the LAN/WLAN. Alternatively or additionally, the device 1100 of the present invention may also be directly connected to the Internet or a cellular network via the communication interface 1118 based on a wireless communication technology, such as a 3rd-generation ("3G"), 4th-generation ("4G"), or 5th-generation ("5G") wireless communication technology. In some application scenarios, the device 1100 of the present invention may also access the server 1108 and, possibly, the database 1109 of the external network as needed to obtain various known image models, data, and modules, and may store various data remotely, such as the various types of data used to present or evaluate images of lesion areas.
The peripheral devices of the device 1100 of the present invention may include a display device 1102, an input device 1103, and a data transmission interface 1104. In one embodiment, the display device 1102 may include, for example, one or more speakers and/or one or more visual displays configured to provide voice prompts and/or visual presentation of the operational procedures or final results of the present invention, including the display of images of lesion areas. The input device 1103 may include, for example, a keyboard, a mouse, a microphone, a gesture-capture camera, or other input buttons or controls configured to receive lesion area image data and/or user instructions. The data transmission interface 1104 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express, or a high-definition multimedia interface ("HDMI"), each configured for data transfer and interaction with other devices or systems. In accordance with aspects of the present invention, the data transmission interface 1104 may receive a lesion area image or lesion area image data from a CT device (e.g., the CT device 102 shown in fig. 1) and transmit the image data including the lesion area, or various other types of data and results, to the device 1100.
The above-described CPU 1111, mass storage device 1112, read only memory ROM 1113, TPU 1114, GPU 1115, FPGA 1116, MLU 1117, and communication interface 1118 of the device 1100 may be interconnected by a bus 1119, through which data interaction with the peripheral devices is also achieved. In one embodiment, the CPU 1111 may control the other hardware components of the device 1100 and their peripherals through the bus 1119.
An apparatus that may be used to implement the neural network-based assessment of a lung focal zone of the present invention has been described above in connection with fig. 11. It is to be understood that the device structures and architectures described herein are merely exemplary; the implementations and implementing entities of the present invention are not limited thereto and may be varied without departing from the spirit of the invention.
It should also be appreciated that any module, unit, component, server, computer, terminal, or device executing instructions of the examples of the invention may include or otherwise have access to a computer-readable medium, such as a storage medium, a computer storage medium, or a data storage device (removable and/or non-removable) such as a magnetic disk, an optical disk, or a tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
It should be understood that the terms "first", "second", "third", "fourth", etc. in the claims, description, and drawings of the present invention are used to distinguish different objects and are not used to describe a particular order. The terms "comprises" and "comprising", when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the specification and claims of this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this application refers to any and all possible combinations of one or more of the associated listed items.
As used in this specification and the claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]". Although embodiments of the present invention are described above, these descriptions are only examples to facilitate understanding of the present invention and are not intended to limit its scope or application scenarios. It will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An assessment system for assessing a focal zone of a lung based on a neural network, comprising:
a processing subsystem comprising one or more processors;
a neural network subsystem comprising a first neural network element and a second neural network element; and
one or more computer-readable storage media storing program instructions implementing the neural network subsystem, which when executed by the one or more processors, cause:
a first neural network unit receives and processes image data related to a lung focal zone image to obtain target vector data, wherein the image data comprises tensor data related to geometric features of the lung focal zone image; and
a second neural network unit receives and processes the target vector data to output an assessment result for assessing the lung focal zone.
2. The evaluation system of claim 1, wherein the lung focal zone image is an image of a lung region infected with a novel coronavirus, and the first neural network unit comprises a plurality of encoders and a feature extractor, wherein:
each encoder of the plurality of encoders comprises a plurality of convolution layers configured to perform a multi-layer convolution process on the image data to obtain a plurality of feature vectors for different geometric features from the image data; and
the feature extractor is configured to perform a feature fusion operation on the plurality of feature vectors to obtain the target vector data.
3. The evaluation system of claim 2, wherein the plurality of convolutional layers are connected in series, and an output of a last convolutional layer of the series connection is connected to an input of the feature extractor.
4. The evaluation system of claim 2, wherein the feature fusion operation comprises performing a data stitching operation on the plurality of feature vectors to output the target vector data.
5. The assessment system of claim 4, wherein the second neural network unit comprises a long short-term memory ("LSTM") neural network configured to receive and process the target vector data to output the assessment result for assessing the lung focal zone.
6. The evaluation system of claim 1, wherein the image data related to the lung focal zone image comprises a plurality of sets of sub-image data related to the lung focal zone acquired at a plurality of different times.
7. The evaluation system of claim 6, wherein the tensor data comprises three-dimensional tensor data, and the one or more computer-readable storage media further stores program instructions to obtain the three-dimensional tensor data, which when executed by the one or more processors, causes:
generating a tetrahedral mesh based on the raw data; and
determining geometric features using the tetrahedral mesh and representing the geometric features as the three-dimensional tensor data.
8. The assessment system of claim 7, wherein the geometric features comprise a Ricci curvature, a gradient, or a mean curvature, and the assessment result comprises lesion mass information of the lung focal zone, which is used at least to predict or determine the severity and/or development trend of the condition of a patient infected with the novel coronavirus.
9. A computing device comprising an evaluation system according to any of claims 1-8.
10. A computer-readable storage medium comprising a computer program for assessing a pulmonary lesion based on a neural network, which, when executed by one or more processors of an apparatus, causes the apparatus to perform operations of an assessment system according to any one of claims 1-8.
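To make claims 7 and 8 concrete, the sketch below computes only the simplest geometric feature named there, the gradient, directly on a voxelized CT volume and represents it as three-dimensional tensor data. The patent's actual pipeline first generates a tetrahedral mesh and also covers Ricci and mean curvature; neither is reproduced here, and the random volume is a stand-in for real lung focal zone data.

```python
import numpy as np

# Hypothetical voxelized CT volume of a lung focal zone (random stand-in).
volume = np.random.rand(64, 64, 64).astype(np.float32)

# Per-axis finite-difference gradients of the intensity field.
gx, gy, gz = np.gradient(volume)

# Gradient magnitude packed into one three-dimensional tensor of
# geometric features, usable as an input channel for the first unit.
gradient_feature = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
print(gradient_feature.shape)  # (64, 64, 64)
```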
CN202110046474.XA 2021-01-13 2021-01-13 Evaluation system for evaluating lung lesion based on neural network and related products Pending CN112750110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110046474.XA CN112750110A (en) 2021-01-13 2021-01-13 Evaluation system for evaluating lung lesion based on neural network and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110046474.XA CN112750110A (en) 2021-01-13 2021-01-13 Evaluation system for evaluating lung lesion based on neural network and related products

Publications (1)

Publication Number Publication Date
CN112750110A true CN112750110A (en) 2021-05-04

Family

ID=75651923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110046474.XA Pending CN112750110A (en) 2021-01-13 2021-01-13 Evaluation system for evaluating lung lesion based on neural network and related products

Country Status (1)

Country Link
CN (1) CN112750110A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708973A (en) * 2022-06-06 2022-07-05 首都医科大学附属北京友谊医院 Method for evaluating human health and related product
CN114708973B (en) * 2022-06-06 2022-09-13 首都医科大学附属北京友谊医院 Device and storage medium for evaluating human health
CN116778027A (en) * 2023-08-22 2023-09-19 中国空气动力研究与发展中心计算空气动力研究所 Curved surface parameterization method and device based on neural network
CN116778027B (en) * 2023-08-22 2023-11-07 中国空气动力研究与发展中心计算空气动力研究所 Curved surface parameterization method and device based on neural network

Similar Documents

Publication Publication Date Title
CN107492099B (en) Medical image analysis method, medical image analysis system, and storage medium
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
CN110599528B (en) Unsupervised three-dimensional medical image registration method and system based on neural network
US20220012890A1 (en) Model-Based Deep Learning for Globally Optimal Surface Segmentation
Li et al. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images
CN111667459B (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN112767340A (en) Apparatus and related products for assessing focal zone based on neural network model
CN113744183A (en) Pulmonary nodule detection method and system
Liu et al. The measurement of Cobb angle based on spine X-ray images using multi-scale convolutional neural network
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN112381822B (en) Method for processing images of focal zones of the lungs and related product
CN115578404A (en) Liver tumor image enhancement and segmentation method based on deep learning
CN113256670A (en) Image processing method and device, and network model training method and device
CN112381824B (en) Method for extracting geometric features of image and related product
CN117710760B (en) Method for detecting chest X-ray focus by using residual noted neural network
CN111524109A (en) Head medical image scoring method and device, electronic equipment and storage medium
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN112884706B (en) Image evaluation system based on neural network model and related product
CN113724185A (en) Model processing method and device for image classification and storage medium
JP7456928B2 (en) Abnormal display control method of chest X-ray image, abnormal display control program, abnormal display control device, and server device
Ankireddy Assistive diagnostic tool for brain tumor detection using computer vision
CN112785562B (en) System for evaluating based on neural network model and related products
CN112381825B (en) Method for focal zone image geometric feature extraction and related products
CN112766332A (en) Medical image detection model training method, medical image detection method and device
CN112862786A (en) CTA image data processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination