CN116051476A - Automatic evaluation system for pneumosepsis based on scanning image analysis - Google Patents

Automatic evaluation system for pneumosepsis based on scanning image analysis Download PDF

Info

Publication number
CN116051476A
CN116051476A CN202211661985.3A CN202211661985A CN116051476A CN 116051476 A CN116051476 A CN 116051476A CN 202211661985 A CN202211661985 A CN 202211661985A CN 116051476 A CN116051476 A CN 116051476A
Authority
CN
China
Prior art keywords
data
channel
dimensional
images
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211661985.3A
Other languages
Chinese (zh)
Other versions
CN116051476B (en
Inventor
黄金桔
蔡颖
黄秀霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Panyu Central Hospital
Original Assignee
Guangzhou Panyu Central Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Panyu Central Hospital filed Critical Guangzhou Panyu Central Hospital
Priority to CN202211661985.3A priority Critical patent/CN116051476B/en
Publication of CN116051476A publication Critical patent/CN116051476A/en
Application granted granted Critical
Publication of CN116051476B publication Critical patent/CN116051476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • G06T7/0016Biomedical image inspection using an image reference approach involving temporal comparison
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

According to the invention, by combining with the CT image time sequence of the case pneumonia development stage, the automatic evaluation system for the sepsis of the pneumonia based on the scan image analysis is provided, and the CT images of different stages of the case pneumonia development stage are sequentially and comprehensively analyzed, so that the probability of the sepsis caused by the pneumonia can be expected more accurately, and the automatic evaluation system can be used as an auxiliary means for early study, judgment, prevention and evaluation of the sepsis caused by the pneumonia.

Description

Automatic evaluation system for pneumosepsis based on scanning image analysis
Technical Field
The invention belongs to the field of medical equipment, and particularly relates to the field of automatic evaluation of pneumosepsis based on scanning image analysis.
Background
Sepsis: is a kind of severe septicemia, which refers to a critical and serious disease caused by a great deal of toxin generated by massive propagation of various pathogenic microorganisms such as bacteria, fungi, viruses and parasites after invasion into blood flow. The mortality rate of sepsis caused by severe pneumonia infection is as high as 35% -70%, so that the early judgment of lung infection has important significance for improving the treatment success rate and prognosis.
Chest Computerized Tomography (CT) is a typical tool for medical discrimination of pneumonia infections. However, there are few reports on early diagnosis and prevention evaluation means for sepsis caused by pneumonia by CT images in public information, and most of the methods are based on doctor experience and manual diagnosis. With the pursuit of people for continuously improving medical conditions and effects, the implementation of computer-aided diagnosis by adopting an automatic method is a trend of modern medical development. And the death rate of the sepsis caused by the pneumonia is high, and the intervention treatment is found to be an effective way for avoiding the serious consequences of the symptoms as soon as possible. Klebsiella pneumoniae is one of the most common pathogenic bacteria causing sepsis complications.
There are many documents for diagnosis of pneumonia based on CT images, and these documents generally judge whether a patient is infected with a virus or not based on diagnosis based on CT images of lungs during the development period of pneumonia. The feature difference between the case of the sepsis caused by the pneumonia and the other cases in the lung imaging is not large, so that the early automatic research and judgment difficulty is high, and the problem is difficult to solve by a classical machine learning method.
Therefore, how to construct a neural network model specially used for judging sepsis, so as to accurately judge whether the sepsis caused by pneumonia is a problem to be solved urgently. Meanwhile, the network model should maintain lower operation burden while judging accurately, so that the network model can be truly applied to clinic.
Disclosure of Invention
According to the invention, by combining with the CT image time sequence of the case pneumonia development stage, the automatic evaluation system for the sepsis of the pneumonia based on the scan image analysis is provided, and the CT images of different stages of the case pneumonia development stage are sequentially and comprehensively analyzed, so that the probability of the sepsis caused by the pneumonia can be expected more accurately, and the automatic evaluation system can be used as an auxiliary means for early study, judgment, prevention and evaluation of the sepsis caused by the pneumonia.
An automatic evaluation system for pneumosepsis based on scan image analysis,
step 1: at t 1 Day, t 2 Heaven, …, t T T groups of CT scanning images are obtained together, and four-dimensional data of the CT images are formed together and recorded as: d (x, y, n, t); performing space calibration on the T groups of CT scanning images to enable the space positions of any two-dimensional section images at different times to correspond;
based on the data D (x, y, n, t=1) of the first day, differential data is established, namely:
Figure SMS_1
wherein G represents a Gaussian convolution kernel in two dimensions,
Figure SMS_2
representing a convolution operation, t' representing the remaining days except the first day; />
Performing two-dimensional frequency domain transformation on the differential data:
Ω=UΔ…(4)
wherein ,
Figure SMS_3
wherein Y represents the width of the two-dimensional cross-sectional image, and p and q are subscripts of elements of the matrix U;
according to formula (4), the differential data delta (x, y, n, t ') is transformed to obtain data omega (x, y, n, t');
step 2: taking the first day data D (x, y, n, t=1) and the compressed data omega (x=1, y, n, t') after frequency domain transformation as the input of two channels of the neural network respectively;
the first channel input data D (x, y, n, t=1) passes through a plurality of hidden layers to output a group of first channel feature vectors H 14 (n); the second channel input data Ω (x=1, y, n, t') passes through several hidden layers, and outputs a set of second channel feature vectors H 22 (n);
Figure SMS_4
θ f (n, m) is the linear weight of the pseudo full connection layer, the fourth hidden layer of the first channel and the second hidden layer of the second channel; b 3 Is a linear offset;
and outputting the risk assessment value.
Figure SMS_5
in the formula θπ Is the linear weight corresponding to the pseudo full connection layer, b 4 Is a linear offset.
And training the model formed by the first channel, the second channel and the pseudo full connection layer, iteratively calculating all unknown parameters, and completing training after the convergence condition is met to obtain the values of all the parameters.
The system comprises a central server, a site processor, a data acquisition unit and transmission equipment.
The central server: the method is used for storing the neural network, receiving the preprocessing data obtained in the step 1, judging by using the neural network (step 2), and outputting a prediction result.
The neural network also performs training on the central server.
The on-site processor is used for receiving CT image data and performing preprocessing according to the step 1.
Data acquisition unit: the system is used for connecting the site processor and the CT machine, converting the data format in the CT machine and transmitting the data format to the site processor.
The transmission equipment is used for connecting the site processor and the central server and transmitting the preprocessed CT data to the central server.
The transmission device comprises a fiber optic communication system.
The transmission device comprises a 5G wireless communication device.
The transmission device comprises a wifi wireless communication device.
The invention has the following technical effects:
1. the invention combines CT tomographic images photographed in different periods into four-dimensional data, performs space calibration on the images photographed in different periods, performs frequency domain processing on the calibrated four-dimensional data, and obtains data of two channel types from the four-dimensional data, thereby providing proper data for the neural network and improving the processing efficiency and accuracy of the neural network.
2. A special neural network model is established and is specially used for identifying the sepsis risk possibly caused by pneumonia, and the huge CT data is more suitable to be processed due to low operation load by optimizing the neural network model structure; meanwhile, the special structure is more suitable for identifying tiny differences of the sepsis risks in CT images, and the accuracy of identification is improved.
Detailed Description
Step 1The CT image four-dimensional data generation method comprises a composition method, a calibration method and a compression method of the CT image four-dimensional data. Combining CT tomographic images photographed in different periods into four-dimensional data, performing space calibration on the images photographed in different periods, and compressing the calibrated four-dimensional data to obtain CT image data for learning.
S11The method for composing the CT image four-dimensional data is used for composing CT tomographic images photographed in different periods into four-dimensional data.
CT image aided diagnosis is adopted, and each time a scanning image of a part of a patient is obtained, the scanning image consists of N two-dimensional sectional images. Respectively at the t th 1 Day, t 2 Heaven, …, t T Day-to-day, T sets of CT scan images were obtained together. The data together form four-dimensional CT image data. The method is characterized by comprising the following steps:
D(x,y,n,t)
where x, y represents coordinates of one pixel in one two-dimensional sectional image, n=1, 2, & gt, N represents one two-dimensional sectional image obtained on one day, and t=t 1 ,t 2 ,....,t r Indicating the day on which the scan was performed.
S12The calibration method of CT image four-dimensional data is used for performing space calibration on images shot in different periods, so that the space positions of any two-dimensional section images in different periods are corresponding to each other, and image analysis is carried out by using the correlation of pixels.
Establishing a projection relation between two-dimensional sectional images:
Figure SMS_6
formula 1 describes the pixel space mapping relationship between any two-dimensional images, namely the projection relationship, wherein one image is marked as a target image, and the other image is marked as a target imageOne image is denoted as a reference image. X is x i ,y i Is the coordinates, x, of the pixels in the target image j ,y j Is the coordinates of the pixels in the reference image.
Figure SMS_7
Is a homogeneous coordinate expression form of two-dimensional coordinates. h is a 11 、h 12 、…、h 32 Is a projection relationship parameter.
For any pair of target and reference images, it is desirable that at least 4 pairs of corresponding pixels therein can determine the above projection relationship. In practical applications, at least 10 pairs of pixels are taken to reduce the data noise effect. For four-dimensional data from day 1 to day T, projection relationships can be established according to equation 1, and are recorded as:
Figure SMS_8
wherein ,tα 、t β Representing two days from day 1 to day T. n is n γ Representing the corresponding nth γ And (5) a two-dimensional cross-sectional image. Each set of projection relations P<t α ,t β ,n γ >Corresponding to the set of projection relation parameters, the projection relation parameters are calculated by corresponding pixels (more than 10 pairs) corresponding to the two-dimensional section images.
S13The method for compressing the CT image four-dimensional data compresses the calibrated four-dimensional data to obtain CT image data for learning.
With data of the first day
D(x,y,n,t=1)
As a benchmark, differential data is established, namely:
Figure SMS_9
wherein G represents a Gaussian convolution kernel in two dimensions,
Figure SMS_10
representing convolution operations, the partial graph being removed by convolutionLike noise, t' represents the remaining days except for the first day.
Performing two-dimensional frequency domain transformation on the differential data:
Ω=UΔ…(4)
wherein ,
Figure SMS_11
where Y represents the width of the two-dimensional cross-sectional image, pi is the circumference rate constant, and p, q are subscripts of the elements of the matrix U.
As can be seen from the definitions of equations 4 and 5, each line of the transformed matrix Ω is the sum of the sampling results of the original matrix sampled at different frequencies, which corresponds to "mashing" the information of the original matrix into one line.
Given the values of n, t ', equation 4 is applied to the differential data Δ (x, y, n, t ') to obtain transformed data Ω (x, y, n, t ').
Since the differential data reflects differences in CT image cross-section at different times, in fact, most of the information corresponding to the cross-sectional images is the same, with only a very small number of lesion areas varying. Through a large number of experiments, after the frequency domain transformation defined by the formula 4, the correlation of each row of the transformed matrix is higher, and most of effective information is mapped into the rows of the matrix. Thus, the first line of transformed data is taken:
Ω(x=1,y,n,t′)
as compressed data. The data volume is greatly reduced.
The first day data D (x, y, n, t=1) and the compressed data Ω (x=1, y, n, t') after the frequency domain transformation together form four-dimensional CT image data for analysis in the subsequent steps.
Step 2The analysis method and the risk prediction method for the CT image four-dimensional data are used for carrying out analysis according to the CT image four-dimensional data obtained in the step 1 and predicting the risk that a patient possibly causes sepsis under the current condition.
By establishing a two-channel neural network model, analysis is carried out on four-dimensional data of CT images, and risk assessment variables of the patient possibly causing sepsis under the current condition are output.
The two-channel neural network model refers to four-dimensional data of CT images with input values, and comprises two parts of information, wherein one part of information is CT image data D (x, y, n, t=1) of the first day of a patient, and reflects the physical condition of the patient in the early stage of illness; secondly, CT image data acquired by follow-up of the patient reflect the development condition of the body of the patient after the patient is ill, and the patient is recorded as omega (x=1, y, n, T '), T' =2, 3.
Assuming that the widths and heights of the CT two-dimensional section images are Y, X respectively, the input data of a first channel of the two-channel neural network model is an X Y X N three-dimensional matrix, and the input data of a second channel of the two-channel neural network model is a Y X N X (T-1) three-dimensional matrix.
S21, the overall architecture of the two-channel neural network model is as follows: the first channel input data passes through a plurality of hidden layers and outputs a group of first channel feature vectors; the input data of the second channel passes through a plurality of hidden layers and outputs a group of second channel feature vectors; after the first channel characteristic vector and the second channel characteristic vector pass through the pseudo-full connection layer, a unique risk assessment value is output.
S22, defining a plurality of hidden layers in the first channel as follows.
First channel first hidden layer:
Figure SMS_12
in the above, ω n Is a two-dimensional convolution kernel, (i, j) is the relative coordinates in the convolution kernel, in this example-3.ltoreq.i, j.ltoreq.3, n is the subscript of the convolution kernel, ω n The nth convolution kernel is represented and corresponds to the image ordinal number of the day of the input data. b 10 Is a linear offset. The sigma function realizes sample nonlinearity and improves sample classification effect of the model. The definition is as follows
Figure SMS_13
According to the application data characteristics of the invention, the nonlinear function is designed, wherein the parameter 0 < epsilon < 0.1 is beneficial to improving the prediction accuracy of the model; epsilon=0.06 is preferred over the sample test.
In the pneumonia diagnosis method based on CT images, a 3D convolution method is often adopted to model a prediction model, but the 2D convolution modeling method is adopted, so that the complexity of the model can be greatly reduced, the parameter number of a convolution network is reduced, and the characteristic of the ordinal dimension of the cross-section image of the 3D convolution network is independent of the space dimension to carry out operation processing by introducing a second channel, so that the calculation efficiency of the model is improved.
First channel second hidden layer:
Figure SMS_14
in the above formula, (i, j) represents an offset amount with respect to the coordinates (x, y), and n has the same meaning as in formula 6.b 11 Is a linear offset. Sigma function is as in equation 7. The layer reduces the spatial domain of the data to the original space
Figure SMS_15
For extracting spatial features of higher scale.
First channel third hidden layer:
Figure SMS_16
in the formula ,
Figure SMS_17
is a two-dimensional convolution kernel, (i, j) is the relative coordinates in the convolution kernel, in this example-3.ltoreq.i, j.ltoreq.3, n is the subscript of the convolution kernel, +.>
Figure SMS_18
Corresponding to the respective elements of ordinal number n in the respective layers described above. b 12 Is a linear offset. Sigma function is the same as 7. Different scale from the first hidden layer is extracted by this layerThe spatial characteristics of the degree and the combination of the spatial characteristics can better reflect the local spatial characteristics of the CT image.
First channel fourth hidden layer:
Figure SMS_19
wherein ρ (x, y) represents H 13 And mapping the three-dimensional data of the third hidden layer to the one-dimensional data of the fourth hidden layer through rho according to the linear weight corresponding to the space domain (namely the x and y dimensions), and identifying correlation features among CT two-dimensional cross-sectional images of the same day. b 13 Is a linear offset. Sigma function is the same as 7. The fourth hidden layer serves as the first channel feature vector.
S23, defining a plurality of hidden layers in the second channel as follows.
Second channel first hidden layer:
Figure SMS_20
in the formula ,μy (Y) represents the linear weights corresponding to the Y dimension of the spatial domain of Ω, and there are a total of Y. b 20 Is a linear offset. Sigma function is as in equation 7. This layer also reflects the spatial domain characteristics as a complement to the first, second and third hidden layers of the first channel.
Second channel second hidden layer:
Figure SMS_21
in the formula ,μt And (T ') represents a linear weight corresponding to the time dimension (T' dimension), for a total of T-1. b 21 Is a linear offset. Sigma function is as in equation 7. This layer is used to extract the time-varying features of the CT images.
The second hidden layer serves as a second channel feature vector.
S24, pseudo full connection layer:
Figure SMS_22
in the formula ,H3 The pseudo fully connected layer is represented, and the difference between the pseudo fully connected layer and the fully connected layer is that part of the weight is shared, and each node of the previous layer does not have independent weight. θ f And (n, m) is the linear weight of the pseudo full connection layer, the fourth hidden layer of the first channel and the second hidden layer of the second channel. The dimension of the pseudo full connection layer is N the same as that of the fourth hiding layer of the first channel and the second hiding layer of the second channel. b 3 Is a linear offset. Sigma function is as in equation 7. Because the N dimension on the first channel and the second channel is shared, the performance of the pseudo full connection layer adopted here is very similar to that of the full connection layer directly applied, but the pseudo full connection layer only has half of the free parameters of the full connection layer, so the calculation complexity is much lower.
In addition, the pseudo full connection layer combined with the first channel and the second channel can also supplement correlation characteristics among different section images which cannot be detected by the 2D convolution network, so that the 2D convolution provided by the invention achieves the same effect as the classical 3D convolution.
S25, outputting a risk evaluation value.
Figure SMS_23
in the formula θπ Is the linear weight corresponding to the pseudo full connection layer, b 4 Is a linear offset. Sigma function is as in equation 7. The output pi represents the risk that the patient may develop sepsis under the current conditions.
Training the model formed by the first channel, the second channel and the pseudo full connection layer by adopting a Belief Propogation (BP) algorithm, iteratively calculating the values of all unknown parameters (including all linear weights, convolution kernel parameters and linear offsets), and completing training after meeting convergence conditions. Values for the respective parameters are obtained.
The CT image data output risk of the patient with the known pneumonia causing the sepsis is set as 1, the CT image data output risk of the patient with the known pneumonia not causing the sepsis is set as 0, and after training is completed by adopting a BP algorithm, risk assessment is carried out on the test sample by utilizing a model defined by formulas 6-14, and the disease risk pi is output.
The invention provides an automatic evaluation method for pneumonitis sepsis based on scanning image analysis, which combines CT tomographic images shot in different periods into four-dimensional data, and predicts the risk of the patient possibly causing sepsis under the current condition by analyzing the four-dimensional data of the CT images. The following table shows the prediction accuracy of the method for patients of different ages, and experimental results show that the method has higher detection prediction accuracy and can be used as an auxiliary means for early diagnosis, prevention and evaluation of sepsis caused by pneumonia.
Figure SMS_24
By way of comparison, the following table is the accuracy of the traditional neural network predictions
Figure SMS_25
As can be seen by comparison, the neural network of the invention has greatly improved judgment accuracy compared with the traditional neural network.
To implement the above method, the following system is arranged:
the central server: the method is used for storing the neural network, receiving the preprocessing data obtained in the step 1, judging by using the neural network (step 2), and outputting a prediction result. At the same time, the neural network also performs training on the central server.
The site processor: for receiving CT image data, preprocessing is performed according to step 1.
Data acquisition unit: the system is used for connecting the site processor and the CT machine, converting the data format in the CT machine and transmitting the data format to the site processor.
Transmission equipment: the CT data processing system is used for connecting the site processor and the central server and transmitting the preprocessed CT data to the central server. Such as fiber optic communication devices, 5G wireless communication devices, wifi wireless communication devices.
It will be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described herein in detail, many other variations or modifications which are in accordance with the principles of the invention may be directly ascertained or inferred from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (10)

1. Automatic evaluation system of pneumosepsis based on scanning image analysis, its characterized in that:
step 1: in the first place
Figure DEST_PATH_IMAGE001
Day, the first
Figure 186184DEST_PATH_IMAGE002
Tian, …, th
Figure DEST_PATH_IMAGE003
T groups of CT scanning images are obtained together, and four-dimensional data of the CT images are formed together and recorded as:
Figure 328453DEST_PATH_IMAGE004
the method comprises the steps of carrying out a first treatment on the surface of the Performing space calibration on the T groups of CT scanning images to enable the space positions of any two-dimensional section images at different times to correspond;
with data of the first day
Figure DEST_PATH_IMAGE005
As a benchmark, differential data is established, namely:
Figure DEST_PATH_IMAGE007
wherein ,
Figure 760702DEST_PATH_IMAGE008
representation ofA gaussian convolution kernel in a two-dimensional space,
Figure DEST_PATH_IMAGE009
a convolution operation is represented and is performed,
Figure 870478DEST_PATH_IMAGE010
representing the remaining days except the first day;
performing two-dimensional frequency domain transformation on the differential data:
Figure 538220DEST_PATH_IMAGE012
wherein ,
Figure 66153DEST_PATH_IMAGE014
wherein ,
Figure DEST_PATH_IMAGE015
representing the width of the two-dimensional cross-sectional image,
Figure 528359DEST_PATH_IMAGE016
,
Figure DEST_PATH_IMAGE017
is a matrix
Figure 361317DEST_PATH_IMAGE018
Subscripts of the elements of (2);
differential data according to equation (4)
Figure DEST_PATH_IMAGE019
After transformation, data are obtained
Figure 426224DEST_PATH_IMAGE020
Step 2: will first day data
Figure 684030DEST_PATH_IMAGE005
Compressed data after transforming with frequency domain
Figure DEST_PATH_IMAGE021
Respectively used as the input of two channels of the neural network;
first channel input data
Figure 691038DEST_PATH_IMAGE005
Through a plurality of hidden layers, a group of first channel feature vectors are output
Figure 667085DEST_PATH_IMAGE022
The method comprises the steps of carrying out a first treatment on the surface of the The second channel inputs data
Figure 676629DEST_PATH_IMAGE021
Through a plurality of hidden layers, a group of second channel feature vectors are output
Figure DEST_PATH_IMAGE023
Figure DEST_PATH_IMAGE025
Figure 382417DEST_PATH_IMAGE026
Linear weights of the pseudo full-connection layer, the first channel fourth hidden layer and the second channel second hidden layer;
Figure DEST_PATH_IMAGE027
is a linear offset;
outputting risk assessment value
Figure DEST_PATH_IMAGE029
in the formula
Figure 530633DEST_PATH_IMAGE030
Is a linear weight corresponding to the pseudo full link layer,
Figure DEST_PATH_IMAGE031
is a linear offset.
2. The system of claim 1, wherein: and training the model formed by the first channel, the second channel and the pseudo full connection layer, iteratively calculating all unknown parameters, and completing training after the convergence condition is met to obtain the values of all the parameters.
3. The system of claim 1, wherein: the system comprises a central server, a site processor, a data acquisition unit and transmission equipment.
4. A system as claimed in claim 3, wherein: the central server: and (2) storing the neural network, receiving the preprocessed data obtained in the step (1), judging by using the neural network, and outputting a prediction result.
5. The system as recited in claim 4, wherein: the neural network also performs training on the central server.
6. A system as claimed in claim 3, wherein: the on-site processor is used for receiving CT image data and performing preprocessing according to the step 1.
7. A system as claimed in claim 3, wherein: data acquisition unit: the system is used for connecting the site processor and the CT machine, converting the data format in the CT machine and transmitting the data format to the site processor.
8. A system as claimed in claim 3, wherein: the transmission equipment is used for connecting the site processor and the central server and transmitting the preprocessed CT data to the central server.
9. The system of claim 6, wherein: the transmission device comprises a fiber optic communication system.
10. The system of claim 6, wherein: the transmission device comprises a 5G wireless communication device.
CN202211661985.3A 2022-12-23 2022-12-23 Automatic evaluation system for pneumosepsis based on scanning image analysis Active CN116051476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211661985.3A CN116051476B (en) 2022-12-23 2022-12-23 Automatic evaluation system for pneumosepsis based on scanning image analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211661985.3A CN116051476B (en) 2022-12-23 2022-12-23 Automatic evaluation system for pneumosepsis based on scanning image analysis

Publications (2)

Publication Number Publication Date
CN116051476A true CN116051476A (en) 2023-05-02
CN116051476B CN116051476B (en) 2023-08-18

Family

ID=86120984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211661985.3A Active CN116051476B (en) 2022-12-23 2022-12-23 Automatic evaluation system for pneumosepsis based on scanning image analysis

Country Status (1)

Country Link
CN (1) CN116051476B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415341A (en) * 2020-03-17 2020-07-14 北京推想科技有限公司 Pneumonia stage evaluation method, pneumonia stage evaluation device, pneumonia stage evaluation medium and electronic equipment
CN113871009A (en) * 2021-09-27 2021-12-31 山东师范大学 Sepsis prediction system, storage medium and apparatus in intensive care unit
US20220148733A1 (en) * 2020-11-11 2022-05-12 Optellum Limited Using unstructed temporal medical data for disease prediction
CN115115620A (en) * 2022-08-23 2022-09-27 安徽中医药大学 Pneumonia lesion simulation method and system based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415341A (en) * 2020-03-17 2020-07-14 北京推想科技有限公司 Pneumonia stage evaluation method, pneumonia stage evaluation device, pneumonia stage evaluation medium and electronic equipment
US20220148733A1 (en) * 2020-11-11 2022-05-12 Optellum Limited Using unstructed temporal medical data for disease prediction
CN113871009A (en) * 2021-09-27 2021-12-31 山东师范大学 Sepsis prediction system, storage medium and apparatus in intensive care unit
CN115115620A (en) * 2022-08-23 2022-09-27 安徽中医药大学 Pneumonia lesion simulation method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
祖莅惠 等: "基于深度学习的新型冠状病毒肺炎CT征象检测研究", 中国医疗设备, vol. 35, no. 06, pages 89 - 92 *

Also Published As

Publication number Publication date
CN116051476B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN108389201B (en) Lung nodule benign and malignant classification method based on 3D convolutional neural network and deep learning
US8798345B2 (en) Diagnosis processing device, diagnosis processing system, diagnosis processing method, diagnosis processing program and computer-readable recording medium, and classification processing device
US11797846B2 (en) Learning assistance device, method of operating learning assistance device, learning assistance program, learning assistance system, and terminal device
Kadry et al. Extraction of abnormal skin lesion from dermoscopy image using VGG-SegNet
Barbosa et al. Detection of small bowel tumors in capsule endoscopy frames using texture analysis based on the discrete wavelet transform
CN110600109B (en) Diagnosis and monitoring comprehensive medical system with color image fusion and fusion method thereof
WO2022246677A1 (en) Method for reconstructing enhanced ct image
CN110575178B (en) Diagnosis and monitoring integrated medical system for judging motion state and judging method thereof
CN115601268A (en) LDCT image denoising method based on multi-scale self-attention generation countermeasure network
CN116051476B (en) Automatic evaluation system for pneumosepsis based on scanning image analysis
CN116703901B (en) Lung medical CT image segmentation and classification device and equipment
Mangalagiri et al. Toward generating synthetic CT volumes using a 3D-conditional generative adversarial network
CN111127371B (en) Image enhancement parameter automatic optimization method, storage medium and X-ray scanning device
US9224229B2 (en) Process and apparatus for data registration
CN115601535A (en) Chest radiograph abnormal recognition domain self-adaption method and system combining Wasserstein distance and difference measurement
CN114004912A (en) CBCT image artifact removing method
Kakani et al. Post-covid chest disease monitoring using self adaptive convolutional neural network
CN114334097A (en) Automatic assessment method based on lesion progress on medical image and related product
Alwash et al. Detection of COVID-19 based on chest medical imaging and artificial intelligence techniques
de Oliveira Torres et al. Texture analysis of lung nodules in computerized tomography images using functional diversity
Yang Data augmentation to improve the diagnosis of melanoma using convolutional neural networks
Hariharan et al. An algorithm for the enhancement of chest X-ray images of tuberculosis patients
Rehman et al. An efficient deep learning model for brain tumour detection with privacy preservation
US11786212B1 (en) Echocardiogram classification with machine learning
CN116402812B (en) Medical image data processing method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant