CN113781461A - Intelligent patient monitoring and sequencing method - Google Patents
Intelligent patient monitoring and sequencing method
- Publication number: CN113781461A
- Application number: CN202111089390.0A
- Authority: CN (China)
- Prior art keywords: network, image, constructing, artifact, identification
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0012 — Biomedical image inspection
- G06T 11/005 — Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
- G06T 11/006 — Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
- G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T 5/73
- G06T 2207/10081 — Computed x-ray tomography [CT]
- G06T 2207/20081 — Training; Learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30101 — Blood vessel; Artery; Vein; Vascular
- G06T 2207/30104 — Vascular flow; Blood flow; Perfusion
- G06T 2211/404 — Angiography
- G06T 2211/421 — Filtered back projection [FBP]
Abstract
The invention provides an intelligent patient monitoring and sequencing method, comprising: S1, adaptively optimizing the CT scanning protocol based on a depth recognition model; S2, identifying and automatically correcting CT artifacts based on adversarial learning; S3, performing CT quantitative imaging based on incremental learning; S4, intelligently analyzing the CT image based on transfer learning; and S5, making an auxiliary decision on the CT image. The invention constructs a policy network for intelligent monitoring and sequencing of emergency patients; the network architecture can be based on various deep convolutional network technologies, such as a high-dimensional convolutional neural network, and aims to screen the CT projection data of critically ill cardiovascular patients. The high-dimensional convolutional neural network finally determines whether an image contains acute cardiovascular disease, so that images are read in order of urgency rather than in the original acquisition order, the hospital's limited medical resources are optimized, and the most critical patients are diagnosed and treated first.
Description
Technical Field
The invention relates to the field of patient sequencing (triage), and in particular to an intelligent patient monitoring and sequencing method.
Background
With the continuous improvement of China's medical standards and the increasing prevalence of CT equipment, CT manufacturers have kept innovating since the technology was commercialized: new generations of products are continually developed to open new fields of clinical application, and product upgrades qualitatively change existing products, provide new functions, meet new customer requirements, and maintain sufficient profit for enterprises while creating new functional value for customers. In the past five years, AI technology has matured in data, algorithms, computing power, and other respects, with frequent breakthroughs, and has begun to solve practical problems and create real economic benefit. As a prominent representative of big data, the medical industry is expected to become one of the industries with the broadest application prospects, and thus holds great commercialization potential.
It is therefore particularly necessary to address the imbalance of traditional medical resources by fusing artificial intelligence technology with large-scale medical CT data. To this end, an intelligent patient monitoring and sequencing method is provided.
Disclosure of Invention
In view of the above problems, the present invention aims to provide an intelligent patient monitoring and sequencing method, comprising:
S1, adaptively optimizing the CT scanning protocol based on a depth recognition model;
S2, identifying and automatically correcting CT artifacts based on adversarial learning;
S3, performing CT quantitative imaging based on incremental learning;
S4, intelligently analyzing the CT image based on transfer learning;
S5, making an auxiliary decision on the CT image.
Preferably, S1 includes:
S11, constructing an organ region anatomical feature identification network;
S12, constructing a CT personalized scanning protocol task-guided network.
Preferably, S11 includes:
S1101, constructing a three-dimensional positioning image imaging sub-network:
the sub-network is a depth residual learning sub-network comprising a noise estimation block, an artifact estimation block, and an image filtering block, which are processed respectively through a residual learning network, multi-scale wavelets, and a convolutional neural network;
artifact estimation is carried out with a processing flow based on the multi-scale wavelet transform, the image quality of the micro-dose image is further improved by an iterative method, and a three-dimensional positioning image is finally obtained;
S1102, constructing an organ region identification sub-network:
the recovered micro-dose three-dimensional positioning image output by the depth residual learning network is taken as the input of the identification network;
the organ region identification network comprises an encoding process and a decoding process: the encoding process adopts a residual-block structure and the decoding process adopts a fully convolutional network. The training set is defined as S = {(X_i, Y_i)}, where X_i is the ith input image and Y_i is the label of the ith input image, the labels being defined over the different reconstruction target regions; p_i^k is defined as the probability of the kth pixel of the ith input image, the image-level prediction probability map P_i is obtained from all the pixel-level computations, and from these the cost function L is formed, with r a control parameter.
The three-dimensional positioning image imaging sub-network and the organ region identification sub-network are cascaded to obtain the final organ region anatomical feature identification network.
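As a concrete illustration of the pixel-level probability computation above, the following sketch derives per-pixel softmax probabilities p_i^k and a mean pixel-wise cross-entropy cost; the cost form is a conventional choice, since the patent does not publish its exact formula:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pixel_cost(logits, labels):
    """Mean pixel-wise cross-entropy for a segmentation network.

    logits: (H, W, C) raw per-pixel class scores; labels: (H, W) int class ids.
    """
    p = softmax(logits)  # p[h, w, k]: probability of class k at pixel (h, w)
    h, w = labels.shape
    picked = p[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(picked + 1e-12).mean())
```

With uniform logits every class has probability 1/C, so the cost equals log C, a useful sanity check when wiring up such a network.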
Preferably, S12 includes:
S1201, constructing a depth recognition network, wherein the constructed depth recognition model is based on a bidirectional deep recurrent network;
S1202, using the three-dimensional positioning image and the target organ region obtained from the organ region anatomical feature identification network as the input of a high-dimensional image feature extraction network, to extract the morphological features and texture features of the three-dimensional positioning image and the target organ region;
S1203, performing personalized scanning parameter estimation, i.e. task-driven and patient-driven multi-objective optimization.
The optimization process of the personalized scanning parameter estimation in step S1203 is solved through an objective equation, in which ω_A is the adaptive scanning parameter to be solved and ω_R is the adaptive reconstruction parameter to be solved; S represents an estimate of the local noise power spectrum and T an estimate of the local modulation transfer function; the task-driven and patient-driven parameter estimate contains the patient core information together with the high-dimensional morphological and texture features of the three-dimensional positioning image. Finally, the adaptive exposure parameter ω_A at each angle and the adaptive reconstruction parameter ω_R are optimized through an ADMM-Net network. The cascaded network is trained hierarchically with mini-batch stochastic gradient descent; f_x, f_y, f_z indicate the position information of the image f at coordinate points x, y, z.
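The multi-objective trade-off in S1203 (reducing exposure, and hence dose, while keeping noise power down) can be sketched with a toy surrogate objective and plain gradient descent. The functions and constants below are illustrative stand-ins, not the patent's actual objective equation:

```python
import numpy as np

def objective(omega, dose_weight=0.5):
    """Toy surrogate for the scan-parameter objective: noise power falls as
    exposure rises, while patient dose grows linearly with exposure."""
    noise_power = 1.0 / (omega + 1e-6)   # stand-in for the local NPS estimate S
    dose = dose_weight * omega           # stand-in for the radiation-dose penalty
    return noise_power + dose

def optimize_exposure(lr=0.05, steps=2000, omega0=0.1):
    """Gradient descent on the surrogate; the analytic optimum is sqrt(2)."""
    omega = omega0
    for _ in range(steps):
        grad = -1.0 / (omega + 1e-6) ** 2 + 0.5  # analytic gradient of the surrogate
        omega -= lr * grad
    return omega
```

The real optimization is per-angle and multi-parameter (ω_A and ω_R jointly), but the same descent structure applies.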
Preferably, S2 includes:
S21, constructing an adversarial active learning network for automatic CT artifact identification;
S22, constructing an adversarial learning network for automatically correcting CT artifacts.
S21 includes:
performing the network design based on active learning and adversarial learning theory. The active learning network assists artifact labeling of the data, where the active learning network is A = (C, Q, S, L, U): C is a machine learning model used for CT artifact identification; Q is a query function, with a committee-based selection algorithm adopted as the query strategy, used for screening highly informative data from the unlabeled sample pool U; S is a supervisor that assigns correct labels to samples selected from U, and the new knowledge thus obtained is used to retrain the classifier and drive the next round of queries.
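The A = (C, Q, S, L, U) loop with a committee-based query strategy can be sketched as follows; decision stumps stand in for the committee members C, and an oracle callback stands in for the supervisor S (all names here are illustrative):

```python
import numpy as np

def predict(stump, X):
    """A committee member is a decision stump (feature index, threshold)."""
    j, t = stump
    return (X[:, j] > t).astype(int)

def query_by_committee(committee, U):
    """Query function Q: return the index of the unlabeled sample in pool U
    on which the committee disagrees most (binary vote entropy)."""
    votes = np.stack([predict(m, U) for m in committee])  # (members, pool size)
    p = votes.mean(axis=0)                                # fraction voting 'artifact'
    ent = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    return int(ent.argmax())

def active_learning_round(committee, U, oracle):
    """One round: screen U with Q, ask the supervisor S for the true label,
    and return the newly labeled example for retraining C."""
    i = query_by_committee(committee, U)
    return U[i], oracle(U[i])
```

Samples on which the committee is unanimous carry little information; the entropy criterion routes labeling effort to the ambiguous ones.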
S22 includes:
S2201, constructing an artifact automatic correction adversarial learning network with artifact identification type indexes, and identifying the artifact types;
S2202, marking and indexing each type of artifact according to the artifact identification result;
S2203, automatically correcting different artifacts with different adversarial networks;
S2204, according to the identification result and range from the artifact identification network, performing feature analysis and extraction on the artifact part of the sinogram with a feature extraction method, and constructing specific loss functions for the different features, so as to realize automatic artifact correction based on differences in artifact structural features;
S2205, for artifacts meeting preset conditions, performing multi-scale directional-field processing on the CT projection data according to the identification result, extracting multi-scale directional data information of the artifact part of the projection data, and setting the adversarial network cost function according to this information to obtain artifact-corrected projection data.
Preferably, S3 includes:
S31, constructing a CT quantitative imaging semi-supervised incremental learning network, starting from a low-dose CT quantitative imaging supervised learning network:
the low-dose CT quantitative imaging supervised learning network comprises a fully-connected filter layer, a sinogram back-projection layer, and a residual convolutional neural network. The fully-connected filter layer is optimally designed from the filter kernel of the filtered back-projection algorithm, the sinogram back-projection layer corresponds to the back-projection operator of the filtered back-projection algorithm, and the residual convolutional neural network further optimizes the reconstruction result. The network cost function is designed as the 2-norm root mean square error, recorded as Loss(θ) = sqrt((1/N) Σ_i ||x̂_i(θ) − x_i^H,ref||_2^2), where x̂_i(θ) is the final predicted image of the CT reconstruction network, x_i^H,ref is the target CT image, N is the number of training samples, and θ is the parameter to be learned;
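A minimal sketch of what such a filter layer is conventionally initialized from: the classical Ram-Lak (ramp) filter applied row-wise to the sinogram before back-projection. The patent does not fix the kernel, so this is the standard FBP choice rather than its specific design:

```python
import numpy as np

def ram_lak_kernel(n):
    """Frequency response of the Ram-Lak (ramp) filter: |f| over the
    discrete FFT frequencies of an n-sample projection row."""
    return np.abs(np.fft.fftfreq(n))

def filter_projections(sinogram):
    """Apply the ramp filter to each projection row, the analogue of the
    'fully-connected filter layer' before the back-projection layer."""
    n = sinogram.shape[1]
    H = ram_lak_kernel(n)
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * H, axis=1))
```

Because the ramp filter is zero at DC, a constant sinogram row filters to exactly zero, which is an easy unit test for any learned replacement of this layer.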
S32, constructing a semi-supervised incremental learning network: noisy CT projection data acquired from external sources is passed through various deformation fields and input to the pre-trained FBP-Net; the resulting CT images are adaptively weighted and fused to obtain a fused high-quality CT target image, and the FBP-Net parameters are then deeply optimized again to obtain the semi-supervised quantitative imaging model;
S33, constructing a CT quantitative imaging unsupervised incremental learning network, designed from the statistical characteristics of the noisy CT projection data before the log transformation and the sparsity of the high-order derivatives of the CT projection data. The objective function of the deep learning network is constructed under a maximum a posteriori probability framework and comprises a data fidelity term based on the likelihood function and a prior term based on piecewise linearity; it describes the CT projection data more accurately and can be optimized by training along the gradient direction.
Here C_1 represents the noisy CT projection training data set, y_i is the ith noisy CT projection training data, and N represents the number of data sets. The number of X-ray photons satisfies compound Poisson statistics: I_i represents the number of photons received by the ith detector and G_i its approximate mean value, G_j! denotes the factorial of G in the jth projection, I_0j represents the number of incident photons in the jth projection, and ε represents the electronic noise, which obeys a Gaussian distribution with mean 0 and variance σ². D_2 represents the second-order difference operator, f_θ represents the unsupervised learning network mapping with learnable parameter θ, a high-quality CT image is then reconstructed with the filtered back-projection algorithm, and f_θ(y_i)_j denotes the unsupervised network mapping of the ith noisy CT projection training data y_i at pixel j.
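The shape of the objective, a data-fidelity term plus a piecewise-linearity prior on the second derivative D_2, can be sketched with a simplified stand-in; the quadratic fidelity term (a Gaussian approximation of the compound-Poisson likelihood) and the weight beta are assumptions for illustration, not the patent's exact terms:

```python
import numpy as np

def second_difference(y):
    # D_2: discrete second-order difference operator along the projection
    return y[2:] - 2 * y[1:-1] + y[:-2]

def map_objective(pred, noisy, beta=0.1):
    """Simplified MAP-style objective: quadratic data fidelity plus an L1
    penalty on the second difference, which is zero exactly on piecewise
    linear signals."""
    fidelity = 0.5 * np.sum((pred - noisy) ** 2)
    prior = beta * np.sum(np.abs(second_difference(pred)))
    return float(fidelity + prior)
```

A linear ramp incurs no prior penalty, which is precisely the piecewise-linearity assumption the text invokes.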
Preferably, S4 includes:
S41, constructing a qualitative blood vessel identification network based on video depth transfer learning, including:
S4101, constructing a dynamic depth-of-field region identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the three-dimensional sequence input image of the ith time frame, Y_i is the label data of the ith time frame, and the cost function is computed between the network output and Y_i;
S4102, constructing a video depth-of-field transfer learning network;
S42, constructing a quantitative blood vessel recognition network based on medical multi-modal transfer imaging, comprising the following steps:
S4201, constructing the quantitative blood vessel identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the ith input image and Y_i ∈ {0, 1} is the label of the ith input image, with Y_i = 1 defined as an abnormal image and Y_i = 0 as a non-abnormal image.
Define p_i^k as the probability of the kth pixel of the ith input image, where k ∈ {1, 2, 3, …, |X_i|} and |X_i| represents the total number of pixels of X_i. If P_i is the image-level prediction probability map, then P_i is obtained from all the pixel-level computations, giving the cost function L_MIL, where I(·) is an indicator function, p_i^k is calculated by the softmax function, and r is a control parameter;
S4202, constructing the multi-modal transfer learning network.
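One common way to realize the pixel-to-image aggregation and the L_MIL cost with control parameter r is a generalized mean followed by binary cross-entropy. The aggregation choice here is an assumption for illustration, since the text does not spell it out:

```python
import numpy as np

def image_probability(pixel_probs, r=4.0):
    """Aggregate pixel-level abnormality probabilities into an image-level
    probability with a generalized mean; r is the control parameter
    (r -> infinity approaches max-pooling over pixels)."""
    return float(np.mean(pixel_probs ** r) ** (1.0 / r))

def mil_cost(pixel_probs_per_image, labels, r=4.0):
    """L_MIL stand-in: binary cross-entropy between image labels
    (1 = abnormal, 0 = non-abnormal) and aggregated image probabilities."""
    loss = 0.0
    for probs, y in zip(pixel_probs_per_image, labels):
        P = image_probability(np.asarray(probs), r)
        loss += -(y * np.log(P + 1e-12) + (1 - y) * np.log(1 - P + 1e-12))
    return loss / len(labels)
```

With large r a single strongly abnormal pixel dominates the image probability, matching the multiple-instance intuition that one lesion makes the whole image abnormal.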
Preferably, S5 includes:
S51, constructing a policy network for intelligent monitoring and sequencing of emergency patients;
S52, constructing a vascular-disease-oriented auxiliary report generation deep network and obtaining the monitoring and sequencing result.
The invention has the following beneficial effects:
1. The invention constructs a policy network for intelligent monitoring and sequencing of emergency patients; the network architecture can be based on various deep convolutional network technologies, such as a high-dimensional convolutional neural network, and aims to screen the CT projection data of critically ill cardiovascular patients. The high-dimensional convolutional neural network finally determines whether an image contains acute cardiovascular disease, so that images are read in order of urgency rather than in the original acquisition order, the hospital's limited medical resources are optimized, and the most critical patients are diagnosed and treated first.
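The urgency-first reading order that replaces the acquisition order can be sketched as a priority queue keyed on the policy network's output; acuity_score below is a hypothetical stand-in for that network:

```python
import heapq

def triage_order(studies, acuity_score):
    """Order CT studies for reading by predicted acuity instead of arrival
    order. acuity_score maps a study to a score in [0, 1] (1.0 = most acute);
    heapq pops the smallest entry first, so scores are negated. The arrival
    index breaks ties, preserving FIFO among equally acute studies."""
    heap = [(-acuity_score(s), i, s) for i, s in enumerate(studies)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, s = heapq.heappop(heap)
        order.append(s)
    return order
```

For example, a study flagged as a likely acute cardiovascular event jumps ahead of routine studies that arrived earlier.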
2. Aiming at three problems faced by CT imaging in clinical application, the invention combines artificial intelligence (AI) technology, brings each problem into a deep learning network, makes the sub-networks progress sequentially while operating together, and integrates them into a whole with a clear objective, constructing a new framework of CT imaging and intelligent analysis oriented to multi-task scanning protocol adaptive optimization, micro-radiation-dose CT quantitative imaging, and intelligent CT imaging auxiliary diagnosis.
3. The new CT-AI imaging system framework constructed by the invention is oriented to clinical tasks and therefore has greater clinical practical significance; based on the integrated CT image scanning-reconstruction-auxiliary analysis system, the framework deeply analyzes the potential and demands of each component of the CT imaging system, facilitating subsequent refactoring and upgrading of the CT imaging system.
4. The invention effectively addresses the fact that current CT CADx technology is mostly based on reconstructed CT images and omits the richer, more detailed medical information contained in the projection data. With the integrated framework, the prediction error at the output end can be propagated back to the CT projection data at the input end, which is equivalent to locating and identifying cardiovascular abnormalities directly from the multi-angle projection information, i.e., predicting from the raw data.
Drawings
The invention is further illustrated by the accompanying drawings; the embodiments in the drawings do not limit the invention in any way, and a person skilled in the art can obtain other drawings from the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a patient intelligent monitoring and sequencing method according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In one embodiment, as shown in FIG. 1, the invention provides an intelligent patient monitoring and sequencing method. The technical scheme adopted by the invention is as follows.
An intelligent patient monitoring and sequencing method comprises the following steps:
S1, adaptively optimizing the CT scanning protocol based on a depth recognition model, including:
S11, constructing an organ region anatomical feature identification network;
S12, constructing a CT personalized scanning protocol task-guided network;
the identification network can accurately estimate the anatomical region of the target organ under a micro radiation dose, providing guidance for the subsequent optimization of the CT scanning protocol;
S2, identifying and automatically correcting CT artifacts based on adversarial learning;
S3, performing CT quantitative imaging based on incremental learning;
S4, intelligently analyzing the CT image based on transfer learning;
S5, making an auxiliary decision on the CT image.
S11 includes the following steps:
S1101, constructing a three-dimensional positioning image imaging sub-network: the depth residual learning sub-network comprises a noise estimation block, an artifact estimation block, and an image filtering block, in which noise estimation, artifact estimation, and image filtering are processed respectively through a residual learning network, multi-scale wavelets, and a convolutional neural network; artifact estimation is carried out with a processing flow based on the multi-scale wavelet transform, the image quality of the micro-dose image is further improved by an iterative method, and a high-quality three-dimensional positioning image is finally obtained;
S1102, constructing an organ region identification sub-network: the recovered micro-dose three-dimensional positioning image output by the depth residual learning network is taken as the input of the identification network, and a semantics-based segmentation and identification network structure is used to achieve pixel-level segmentation and identification. The organ region identification network comprises an encoding process and a decoding process: the encoding process adopts a residual-block structure and the decoding process adopts a fully convolutional network. The training set is defined as S = {(X_i, Y_i)}, where X_i is the ith input image and Y_i is the label of the ith input image, the labels being defined over the different reconstruction target regions; p_i^k is defined as the probability of the kth pixel of the ith input image, the image-level prediction probability map P_i is obtained from all the pixel-level computations, and from these the cost function L is formed, with r a control parameter.
The three-dimensional positioning image imaging sub-network and the organ region identification sub-network are cascaded to obtain the final organ region anatomical feature identification network.
S12, constructing the CT personalized scanning protocol task-guided network, comprises the following steps:
S1201, constructing a depth recognition network to quickly and accurately extract the patient's sign information, including biochemical indexes, age, height, and the like. The constructed depth recognition model is based on a bidirectional deep recurrent network; recurrent networks have been used with great success and are widely applied in natural language processing. The bidirectional deep recurrent network structure can efficiently extract the patient's core information and realize its two-dimensional vectorization, so that the CT adaptive scanning parameters can be optimized with the help of this high-dimensional sign information.
S1202, using the high-quality three-dimensional positioning image and the target organ region obtained from the organ region anatomical feature identification network as the input of a high-dimensional image feature extraction network, the morphological features and texture features of the three-dimensional positioning image and the target organ region are extracted. Unlike the conventional approach, which sets the adaptive scanning protocol only according to a two-dimensional positioning image, the adaptive scanning parameters of each exposure projection can thus be accurately estimated. The high-dimensional image feature extraction network may be constituted by a fully convolutional network.
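The bidirectional recurrent extraction of the patient core information described in S1201 can be sketched as two recurrent passes whose final hidden states are concatenated into one vector; the tanh cell and the weight shapes are illustrative assumptions, since the patent does not specify the cell:

```python
import numpy as np

def birnn_encode(seq, Wf, Wb, Uf, Ub):
    """Minimal bidirectional recurrent encoder: run a tanh RNN over the sign
    sequence forward and backward, then concatenate the two final hidden
    states into a single fixed-length patient-information vector."""
    hf = np.zeros(Wf.shape[0])
    for x in seq:               # forward pass over the time steps
        hf = np.tanh(Wf @ x + Uf @ hf)
    hb = np.zeros(Wb.shape[0])
    for x in reversed(seq):     # backward pass over the time steps
        hb = np.tanh(Wb @ x + Ub @ hb)
    return np.concatenate([hf, hb])
```

Each time step x would hold one slice of the sign record (e.g. a few biochemical indexes), and the concatenated output is the vectorized core information fed to the scan-parameter optimization.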
S1203, personalized scanning parameter estimation: task-driven and patient-driven multi-objective optimization is performed over the scanning range, tube current, tube voltage, and the like, with the aim of reducing the patient's radiation exposure while guaranteeing reconstructed image quality and diagnostic precision.
The optimization process of the personalized scanning parameter estimation in step S1203 is solved through an objective equation, in which ω_A is the adaptive scanning parameter to be solved and ω_R is the adaptive reconstruction parameter to be solved; S represents an estimate of the local noise power spectrum and T an estimate of the local modulation transfer function; the task-driven and patient-driven parameter estimate contains the patient core information together with the high-dimensional morphological and texture features of the three-dimensional positioning image. Finally, the adaptive exposure parameter ω_A at each angle and the adaptive reconstruction parameter ω_R are optimized through an ADMM-Net network. The cascaded network is trained hierarchically with mini-batch stochastic gradient descent; f_x, f_y, f_z indicate the position information of the image f at coordinate points x, y, z, and the convergence of the network training was analyzed in a preliminary experiment.
Wherein S2 includes:
s21, constructing an antagonistic active learning network for automatic identification of CT artifacts;
s22, constructing a confrontation learning network for automatically correcting the CT artifact;
S21, performing the network design based on active learning and adversarial learning theory. The active learning network assists artifact labeling of the data, where the active learning network A = (C, Q, S, L, U): C is the machine learning model used for CT artifact identification; Q is the query function, which uses a committee-based selection algorithm as the query strategy to screen the most informative data from the unlabeled sample pool U; S is the supervisor, which assigns correct labels to samples selected from U, the new knowledge so obtained being used to retrain the classifier and drive the next round of queries; L is the labeled sample set. In this way, highly accurate identification and fine labeling of common CT artifacts is achieved with little labeled data;
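The committee-based query strategy Q described above can be sketched as vote-entropy sampling over the unlabeled pool U; the committee votes below are synthetic:

```python
import numpy as np

def vote_entropy_select(committee_preds, k):
    """Query-by-committee: pick the k unlabeled samples with the highest
    vote entropy among committee members' predicted labels.

    committee_preds: (n_members, n_samples) integer label votes.
    Returns indices of the k most disagreed-upon samples.
    """
    n_members, n_samples = committee_preds.shape
    entropies = np.zeros(n_samples)
    for j in range(n_samples):
        _, counts = np.unique(committee_preds[:, j], return_counts=True)
        p = counts / n_members
        entropies[j] = -(p * np.log(p)).sum()
    # highest entropy = most informative for the supervisor to label
    return np.argsort(entropies)[::-1][:k]

# 3 committee members vote on 4 samples; sample 2 has full disagreement
votes = np.array([[0, 1, 0, 1],
                  [0, 1, 1, 1],
                  [0, 1, 2, 1]])
picked = vote_entropy_select(votes, 1)   # -> [2]
```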
s22, constructing a confrontation learning network for CT artifact automatic correction, which comprises the following steps:
S2201, constructing an artifact automatic correction confrontation learning network with artifact identification type indexes, and identifying the different artifact types, such as metal artifacts, ring artifacts, windmill artifacts and the like;
s2202, according to the artifact identification result, marking and indexing each type of artifact;
s2203, automatically correcting by adopting different countermeasure networks according to different artifacts;
s2204, under the condition that aliasing exists in various artifacts or under the condition that different artifact representations are similar, according to the identification result and range in the artifact identification network, firstly, a feature extraction method is adopted to carry out feature analysis and extraction on the artifact part in the chord graph, and specific loss functions are constructed for different features so as to realize artifact automatic correction based on artifact structural feature difference;
S2205, for artifacts meeting preset conditions (for example, strong artifacts such as metal artifacts and ring artifacts that hinder accurate identification of anatomical structures or distort standard CT values), multi-scale directional field processing, such as a multi-scale directional transformation field or a multi-scale contour transformation field, is performed on the CT projection data according to the identification result, so as to extract multi-scale directional data information of the artifact parts in the projection data; a targeted countermeasure network cost function is then designed from this multi-scale information, finally yielding the artifact-corrected projection data.
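Steps S2202-S2203, routing each indexed artifact type to its own correction network, amount to a dispatcher with a fallback path for the structural-feature case of S2204. The correction "networks" below are trivial stand-ins, not real models:

```python
def make_artifact_corrector(networks, fallback):
    """Dispatch each identified artifact type to its own correction
    network (S2203). `networks` maps an artifact label to a callable;
    unknown or aliased labels fall through to the structural-feature
    fallback path (S2204)."""
    def correct(image, artifact_label):
        return networks.get(artifact_label, fallback)(image)
    return correct

# hypothetical stand-ins: each just tags the image record it receives
nets = {
    "metal": lambda im: {**im, "corrected_by": "metal_gan"},
    "ring":  lambda im: {**im, "corrected_by": "ring_gan"},
}
corrector = make_artifact_corrector(
    nets, lambda im: {**im, "corrected_by": "feature_loss"})
out = corrector({"id": 7}, "ring")   # routed to the ring-artifact network
```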
Wherein the S3 includes the following steps:
s31, constructing a CT quantitative imaging semi-supervised incremental learning network; constructing a low-dose CT quantitative imaging supervised learning network:
For existing CT "noisy image-target image" sample data, the internal features of the samples are estimated with a "projection domain-image domain" depth filtered back-projection network, and the estimated features are used by the supervised learning imaging network. The low-dose CT quantitative imaging supervised learning network comprises a fully-connected filter layer, a sinogram back-projection layer and a residual convolutional neural network: the fully-connected filter layer is an optimized design of the filter kernel of the filtered back-projection algorithm, the sinogram back-projection layer corresponds to the back-projection operator of that algorithm, and the residual convolutional neural network further refines the reconstruction result. The network cost function is designed as the 2-norm root mean square error, recorded as L(θ) = sqrt((1/N) Σ_i ||x̂_i(θ) − x_H,i^ref||_2^2), where x̂ is the final predicted image of the CT reconstruction network, x_H^ref is the target CT image, N is the number of training samples, and θ is the set of parameters to be learned;
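The 2-norm root-mean-square-error cost can be written down directly; the exact normalization is not fully recoverable from the text, so the sketch below assumes averaging the squared 2-norm over the N training samples:

```python
import numpy as np

def rmse_cost(pred, target):
    """2-norm root-mean-square-error cost over N training pairs,
    L(theta) = sqrt( (1/N) * sum_i ||pred_i - target_i||_2^2 )."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    n = pred.shape[0]                       # number of training samples N
    return float(np.sqrt(np.sum((pred - target) ** 2) / n))

# two tiny "images": only one pixel of the second sample is wrong by 2
loss = rmse_cost([[1.0, 2.0], [3.0, 4.0]],
                 [[1.0, 2.0], [3.0, 2.0]])   # -> sqrt(4/2) = sqrt(2)
```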
S32, constructing a semi-supervised incremental learning network: first, noisy CT projection data acquired from external sources are passed through various deformation fields and input into the pre-trained FBP-Net; the CT images reconstructed by the network are then adaptively weighted and fused to obtain a fused high-quality CT target image; the FBP-Net parameters are then deeply re-optimized using the continuously growing "noisy CT projection data-target CT image" pairs, finally completing the semi-supervised quantitative imaging model;
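The adaptive weighted fusion of the candidate reconstructions can be sketched as inverse-variance weighting; the quality proxy here is an assumption of this sketch, since the patent does not specify how the weights are computed:

```python
import numpy as np

def fuse_reconstructions(recons, eps=1e-6):
    """Adaptive weighted fusion of CT reconstructions of the same slice
    obtained from differently deformed inputs: weight each candidate
    inversely to its variance (a hypothetical quality proxy), then
    normalize so the weights sum to one.

    recons: (n_candidates, H, W)."""
    recons = np.asarray(recons, float)
    var = recons.var(axis=(1, 2), keepdims=True)   # per-candidate variance
    w = 1.0 / (var + eps)
    w = w / w.sum(axis=0, keepdims=True)           # normalize across candidates
    return (w * recons).sum(axis=0)

# two identical constant candidates fuse back to the same image
a = np.ones((2, 2))
b = np.ones((2, 2))
fused = fuse_reconstructions([a, b])
```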
S33, constructing a CT quantitative imaging unsupervised incremental learning network: the network is designed based on the statistical characteristics of the noisy CT projection data before log transformation (a compound Poisson-Gaussian noise distribution) and the sparsity of the high-order derivatives of the CT projection data. Based on a maximum a posteriori probability framework, the project designs an objective function for the deep learning network comprising a data fidelity term based on the likelihood function and a prior information term based on piecewise linearity; this objective function describes the CT projection data more accurately and can be optimized by training along the gradient direction, with the following expression:
wherein C_1 represents the noisy CT projection training data set, y_i is the i-th noisy CT projection training data, N represents the number of data sets, G represents the number of X-ray photons satisfying the compound Poisson statistics, I represents the number of photons received by the detector, I_i represents the number of photons received by the i-th detector, G_i represents the approximate value of I_i, G_j! denotes the factorial of G in the j-th projection, I_0j represents the number of incident photons in the j-th projection, ε represents the electronic noise and obeys a Gaussian distribution with mean 0 and variance σ², D² represents a second-order difference operator, and f_θ represents the unsupervised learning network mapping, where θ is the set of parameters to be learned in the mapping; a high-quality CT image is then reconstructed with the filtered back-projection algorithm, and f_θ(y_i)_j represents the unsupervised learning network mapping of the i-th measurement y_i at pixel j;
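A hedged sketch of evaluating such an objective, with a simple Gaussian approximation standing in for the full compound Poisson-Gaussian likelihood and an L1 penalty on the second-order difference D² playing the role of the piecewise-linear prior (the weights sigma and beta are made up):

```python
import numpy as np

def unsupervised_objective(f, y, sigma=1.0, beta=0.1):
    """Sketch of the MAP objective: a (Gaussian-approximated) data-fidelity
    term on the projection data plus a sparsity prior on the second-order
    difference of the mapped projections.

    f: network output (denoised projections), y: noisy projections."""
    fidelity = np.sum((f - y) ** 2) / (2 * sigma ** 2)
    d2 = f[2:] - 2 * f[1:-1] + f[:-2]        # second-order finite difference D^2 f
    prior = beta * np.sum(np.abs(d2))        # L1 favors piecewise-linear signals
    return float(fidelity + prior)

y = np.array([1.0, 2.0, 4.0, 8.0])
obj = unsupervised_objective(y.copy(), y)    # zero fidelity; prior penalizes curvature
```

A perfectly piecewise-linear projection profile would incur zero prior cost, which is exactly what the sparsity assumption on the high-order derivative encodes.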
Wherein S4 includes the following steps: S41, constructing a vascular qualitative identification network based on a video depth-of-field transfer learning technology. Cardiovascular perfusion CT imaging data comprise image data over many consecutive time frames and therefore exhibit high temporal continuity and spatial correlation, characteristics that also exist in natural-scene video data. In contrast to perfusion CT images, for which the lack of labeled data makes direct network training difficult, natural video data carry fine labels and are easy to train on. Therefore, the study intends to adopt a transfer learning technique to apply a video depth-of-field analysis network to the quantitative assessment of cardiovascular state.
S4101, constructing a dynamic depth-of-field/area identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the three-dimensional sequence input image of the i-th time frame and Y_i is the label data of the i-th time frame (such as a video depth-of-field image or a CT cardiovascular region image), i.e. a two-dimensional probability heat map (hot map) that quantitatively distinguishes different regions; for the network output result, the cost function is:
S4102, constructing a video depth-of-field transfer learning network: natural video data pairs are first input into the above identification network for pre-training, yielding a depth-of-field network; a transfer learning strategy is then formulated and the depth-of-field network is transfer-trained with cardiovascular perfusion CT data to obtain a quantitative blood vessel region evaluation network, in which some parameters of the depth-of-field network are frozen and only the parameters of the other layers are trained, deeply mining the high-dimensional robust features of cardio- and cerebrovascular CT sequence images and realizing qualitative cardiovascular identification. S42, constructing a quantitative blood vessel identification network based on medical multi-modal transfer imaging, comprising the following steps:
S4201, constructing a quantitative blood vessel identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the i-th input image and Y_i ∈ {0, 1} is the label of the i-th input image, with Y_i = 1 defined as an abnormal image and Y_i = 0 as a non-abnormal image. Let p_i^k denote the probability of the k-th pixel of the i-th input image, where k ∈ {1, 2, 3, …, |X_i|} and |X_i| is the total number of pixels of X_i. The image-level prediction probability can then be computed from all the pixel-level probabilities p_i^k, and the cost function is:
wherein I(·) is an indicator function, the pixel probabilities are calculated by the Softmax function, r is a control parameter, and L_MIL denotes the cost function;
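One common way to realize the pixel-to-image aggregation with a control parameter r is generalized-mean (soft-max-like) pooling followed by a cross-entropy cost. The sketch below is a plausible instance of this pattern, not necessarily the patent's exact L_MIL:

```python
import numpy as np

def image_probability(pixel_probs, r=4.0):
    """Aggregate pixel-level abnormality probabilities into one image-level
    probability with a generalized mean; r interpolates between average
    pooling (r = 1) and max pooling (large r)."""
    p = np.asarray(pixel_probs, float)
    return float(np.mean(p ** r) ** (1.0 / r))

def mil_cost(pixel_probs, label, r=4.0, eps=1e-12):
    """Binary cross-entropy on the pooled image-level probability
    (label = 1 abnormal, label = 0 non-abnormal)."""
    q = image_probability(pixel_probs, r)
    return float(-(label * np.log(q + eps) + (1 - label) * np.log(1 - q + eps)))

# one strongly abnormal pixel dominates the image-level decision
c = mil_cost([0.01, 0.02, 0.9, 0.03], label=1)
```

Larger r makes a single confident abnormal pixel dominate, matching the multiple-instance intuition that one diseased region suffices to mark the whole image abnormal.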
S4202, constructing a multi-modal transfer learning network: for cerebral perfusion, natural image data pairs are first input into the identification network for pre-training; the network trained on natural images is then transfer-trained with cerebral perfusion MRI data pairs to obtain a quantitative blood vessel region evaluation network. Finally, a transfer learning strategy is formulated: cerebral perfusion CT images are first reconstructed from the cerebral perfusion CT projection data, the quantitative blood vessel region evaluation network then undergoes transfer learning, cerebral MRI is fused in, and the correlation of the internal anatomical structures in the CT images and the diversity of the image information of the different modalities are exploited to optimize the parameters of the quantitative blood vessel region evaluation network. The transfer learning strategy can be selected according to the size of the cerebral perfusion CT data set and the number of parameters: (1) freeze the parameters of the first n layers, i.e. do not change their values when training the cerebral perfusion CT quantitative blood vessel region evaluation network; or (2) do not freeze the first n layers but keep adjusting the network parameter values, i.e. perform transfer learning by fine-tuning. Finally, the stroke region of the cerebral perfusion CT image is quantitatively identified. Similarly, for myocardial perfusion data, the natural image data pairs are first input into the identification network for pre-training; the network trained on natural images is then transfer-trained with myocardial perfusion PET data to obtain a quantitative blood vessel region evaluation network. Finally, a transfer learning strategy is formulated: myocardial perfusion CT images are first reconstructed from the myocardial perfusion CT projection data, the quantitative blood vessel region evaluation network then undergoes transfer learning, myocardial PET is fused in, and the correlation of the internal anatomical structures in the CT images and the diversity of the different modal image information are exploited to optimize the network parameters, with one of the two strategies above selected for parameter training according to the size of the myocardial perfusion CT data set and the number of parameters. Finally, the myocardial infarction region of the myocardial perfusion CT image is quantitatively identified.
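The freeze-versus-fine-tune strategies above reduce to masking which parameters receive gradient updates. A minimal pure-Python sketch, with all layer names and values hypothetical:

```python
def transfer_step(params, grads, frozen, lr=0.1):
    """One transfer-learning update: parameters named in `frozen`
    (e.g. the pre-trained early layers) keep their values, while all
    other layers take a gradient step. Fine-tuning everything is
    simply frozen = empty set."""
    return {name: (value if name in frozen else value - lr * grads[name])
            for name, value in params.items()}

params = {"depth_conv1": 1.0, "depth_conv2": 2.0, "vessel_head": 3.0}
grads  = {"depth_conv1": 5.0, "depth_conv2": 5.0, "vessel_head": 5.0}

# strategy (1): freeze the first n layers, train only the new head
new = transfer_step(params, grads, frozen={"depth_conv1", "depth_conv2"})
```

In a deep-learning framework the same effect is typically obtained by disabling gradients on the frozen layers before constructing the optimizer.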
Wherein S5 includes the following steps: S51, constructing a strategy network for intelligent monitoring and sequencing of emergency patients. The network architecture can be based on various deep convolutional network technologies, such as a high-dimensional convolutional neural network, and aims to screen the CT projection data of critically ill cardiovascular patients. The high-dimensional convolutional neural network finally determines whether an image contains acute cardiovascular disease, so that images are diagnosed in order of urgency rather than in the original acquisition order, optimizing the hospital's limited medical resources and achieving rapid diagnosis and treatment of the most urgent patients first;
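The "most urgent first" sequencing that replaces acquisition order can be sketched with a standard max-priority queue keyed on the network's severity score; the identifiers and scores below are made up:

```python
import heapq
import itertools

class TriageQueue:
    """Order patients by predicted acute-severity score (highest first)
    instead of acquisition order; ties are broken by arrival sequence."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()

    def add(self, patient_id, severity):
        # negate the score: heapq is a min-heap, we want max-severity first
        heapq.heappush(self._heap, (-severity, next(self._arrival), patient_id))

    def next_patient(self):
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.add("scan_001", 0.20)
q.add("scan_002", 0.95)   # suspected acute cardiovascular event
q.add("scan_003", 0.40)
order = [q.next_patient() for _ in range(3)]   # -> ['scan_002', 'scan_003', 'scan_001']
```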
S52, constructing a depth network for generating vascular-disease auxiliary reports: from the monitoring and sequencing result, the CT projection data of patients with acute and severe cardiovascular disease are assessed first, in the optimal "most urgent first" order, and a disease auxiliary report is generated with a recurrent neural network trained on a cardiovascular disease sample library.
For CT projection data, a maximum-probability formulation of the image description is constructed:
where θ is the model parameter, I is the CT projection data, and S represents the correct transcription, whose length is unbounded. The chain rule is typically applied to model the joint probability over S_0, …, S_N, where N is the length of the particular example.
During training, (S, I) forms a training example pair, and a stochastic gradient descent algorithm is used to optimize the sum of log probabilities, Σ log p(S|I), over the training set.
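The chain-rule factorization log p(S|I) = Σ_t log p(S_t | I, S_0, …, S_{t-1}) that SGD optimizes can be evaluated as follows; the per-step distributions here are toy stand-ins for the decoder's outputs:

```python
import numpy as np

def sentence_log_prob(step_probs, token_ids):
    """Chain rule: log p(S|I) = sum_t log p(S_t | I, S_0..S_{t-1}).

    step_probs: (T, vocab) rows of next-token distributions produced by
    the (hypothetical) decoder; token_ids: the T correct tokens."""
    step_probs = np.asarray(step_probs, float)
    chosen = step_probs[np.arange(len(token_ids)), token_ids]
    return float(np.sum(np.log(chosen)))

# two decoding steps over a 3-word vocabulary
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
lp = sentence_log_prob(probs, [0, 1])   # log 0.7 + log 0.8
```

The training loss below (sum of negative log-likelihoods of the correct words) is exactly the negation of this quantity.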
The probability p is modeled with a long short-term memory network (LSTM for short). The LSTM is used in unrolled form: a copy of the LSTM memory is created for the image and for each sentence word, so that all LSTM copies share the same parameters across time and all recurrent connections become feed-forward connections in the unrolled version. The unrolling process is then:
x_{-1} = CNN(I)
x_t = W_e S_t,  t ∈ {0, …, N-1}
p_{t+1} = LSTM(x_t),  t ∈ {0, …, N-1}
where each word is represented as a one-hot vector S_t whose dimension equals the size of the dictionary. Two special tokens, S_0 and S_N, denote the beginning and end of a sentence, respectively. Images and words are mapped into the same space: the image through the vision CNN and the words through the word embedding W_e. The image I is input only once, at t = -1, to inform the LSTM of the image content. The loss function is the sum over all steps of the negative log-likelihood of the correct word, and can be expressed as follows:
The loss function of the above equation is minimized by optimizing all parameters of the LSTM, the top layers of the image-embedding CNN, and the word embedding W_e. The network finally outputs the cardiovascular disease classification and recognition result to assist subsequent medical diagnosis.
The invention has the following beneficial effects:
1. The invention constructs a strategy network for intelligent monitoring and sequencing of emergency patients; the network architecture can be based on various deep convolutional network technologies, such as a high-dimensional convolutional neural network, and aims to screen the CT projection data of critically ill cardiovascular patients. The high-dimensional convolutional neural network finally determines whether an image contains acute cardiovascular disease, so that images are diagnosed in order of urgency rather than in the original acquisition order, optimizing the hospital's limited medical resources and achieving rapid diagnosis and treatment of the most urgent patients first.
2. Aiming at three problems faced by CT imaging in clinical application, the invention combines artificial intelligence (AI) technology, incorporates each problem into a deep learning network, advances each sub-network in sequence while integrating them into a whole with a clear objective, and constructs a new framework of CT imaging and intelligent analysis oriented to multi-task scanning protocol adaptive optimization, micro-radiation-dose CT quantitative imaging, and intelligent CT-image-assisted diagnosis.
3. The new CT-AI imaging system framework constructed by the invention is oriented to clinical tasks and thus of greater clinical practical significance; based on an integrated "scanning-reconstruction-auxiliary analysis" CT imaging system, the framework deeply analyzes the potential and requirements of each component, facilitating subsequent rebuilding and upgrading of CT imaging systems.
4. The invention effectively addresses the fact that current CT CADx technology is mostly based on CT images and neglects the richer, more detailed medical information contained in the projection data; with the integrated framework, the prediction error at the output can ultimately be propagated back to the CT projection data at the input, which is equivalent to locating and identifying cardiovascular abnormalities directly from the multi-angle projection information, i.e. predicting from the raw data.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (8)
1. An intelligent patient monitoring and sequencing method is characterized by comprising the following steps:
s1, carrying out self-adaptive optimization on the CT scanning protocol based on the depth recognition model;
s2, identifying and automatically correcting the CT artifact based on a countermeasure technology;
s3, CT quantitative imaging is carried out based on the incremental learning technology;
s4, carrying out intelligent analysis on the CT image based on the transfer learning technology;
and S5, making an auxiliary decision on the CT image.
2. The intelligent patient monitoring and sequencing method of claim 1, wherein said S1 comprises:
s11, constructing an organ region anatomical feature identification network;
and S12, constructing a CT personalized scanning protocol task guide network.
3. The intelligent patient monitoring and sequencing method of claim 2, wherein said S11 comprises:
s1101, constructing a three-dimensional positioning image imaging sub-network:
the depth residual learning sub-network comprises a noise estimation block, an artifact estimation block and an image filtering block, implemented respectively by a residual learning network, multi-scale wavelets and a convolutional neural network;
carrying out artifact estimation based on a processing flow of multi-scale wavelet transform, further improving the image quality of the microdose image by an iteration method, and finally obtaining a three-dimensional positioning image;
s1102, constructing an organ region identification subnetwork:
taking the recovered micro-dose three-dimensional positioning image output by the deep residual learning network as the input of the identification network;
wherein the organ region identification network comprises an encoding process and a decoding process;
wherein the coding process adopts a residual block structure and the decoding process adopts a full convolution network; the training set is defined as X, where X_i is the i-th input image and Y_i is the label of the i-th input image, the labels being defined over the different reconstruction target regions; the probability of the k-th pixel of the i-th input image is also defined; the image-level prediction probability map can be obtained from all the pixel-level computations, and the cost function is:
the three-dimensional positioning image imaging sub-network and the organ region identification sub-network are cascaded to obtain a final organ region anatomical feature identification network, L represents a cost function, and r represents a control parameter.
4. The intelligent patient monitoring and sequencing method of claim 3, wherein said S12 comprises:
s1201, constructing a depth recognition network, wherein the constructed depth recognition model is based on a bidirectional depth recursive network;
S1202, using the three-dimensional positioning image and the target organ region obtained from the organ region anatomical feature identification network as the input of a high-dimensional image feature extraction network, extracting the morphological and texture features of the three-dimensional positioning image and the target organ region;
S1203, personalized scanning parameter estimation, task-driven, patient-driven multi-objective optimization,
wherein the optimization process of the personalized scanning parameter estimation in step S1203 is solved through an objective equation,
where ω_A is the adaptive scanning parameter to be solved and ω_R is the adaptive reconstruction parameter to be solved; S represents an estimate of the local noise power spectrum and T represents an estimate of the local modulation transfer function; the parameter estimation is task-driven and patient-driven, incorporating the patient core information together with the high-dimensional morphological and texture features of the three-dimensional positioning image; the adaptive exposure parameter ω_A at each angle and the adaptive reconstruction parameter ω_R are finally optimized through an ADMM-Net network; the cascade network is trained in stages by mini-batch stochastic gradient descent, where f_x, f_y, f_z denote the position information of the image f at coordinate points x, y and z.
5. The intelligent patient monitoring and sequencing method of claim 1, wherein said S2 comprises:
s21, constructing an antagonistic active learning network for automatic CT artifact identification;
s22, constructing a confrontation learning network for automatically correcting the CT artifact;
the S21 includes:
performing the network design based on active learning and adversarial learning theory; the active learning network assists artifact labeling of the data, where the active learning network A = (C, Q, S, L, U): C is the machine learning model used for CT artifact identification; Q is the query function, which uses a committee-based selection algorithm as the query strategy to screen the most informative data from the unlabeled sample pool U; S is the supervisor, which assigns correct labels to samples selected from U, the new knowledge so obtained being used to retrain the classifier and drive the next round of queries;
the S22 includes:
S2201, constructing an artifact automatic correction confrontation learning network with artifact identification type indexes, and identifying the artifact types;
s2202, according to the artifact identification result, marking and indexing each type of artifact;
s2203, aiming at different artifacts, adopting different countermeasure networks to automatically correct;
s2204, according to the identification result and range in the artifact identification network, a feature extraction method is adopted to perform feature analysis and extraction on the artifact part in the chord graph, and specific loss functions are constructed for different features so as to realize artifact automatic correction based on artifact structural feature difference;
s2205, for the artifacts meeting the preset conditions, performing multi-scale directional field processing on the CT projection data according to the identification result, extracting multi-scale directional data information of the artifact part in the projection data, and setting a countermeasure network cost function according to the multi-scale directional data information to obtain the projection data after artifact correction.
6. The intelligent patient monitoring and sequencing method of claim 1, wherein said S3 comprises:
s31, constructing a CT quantitative imaging semi-supervised incremental learning network; constructing a low-dose CT quantitative imaging supervised learning network:
the low-dose CT quantitative imaging supervised learning network comprises a fully-connected filter layer, a sinogram back-projection layer and a residual convolutional neural network: the fully-connected filter layer is an optimized design of the filter kernel of the filtered back-projection algorithm, the sinogram back-projection layer corresponds to the back-projection operator of that algorithm, and the residual convolutional neural network further refines the reconstruction result; the network cost function is designed as the 2-norm root mean square error, recorded as L(θ) = sqrt((1/N) Σ_i ||x̂_i(θ) − x_H,i^ref||_2^2), where x̂ is the final predicted image of the CT reconstruction network, x_H^ref is the target CT image, N is the number of training samples, and θ is the set of parameters to be learned;
S32, constructing a semi-supervised incremental learning network: noisy CT projection data acquired from external sources are passed through various deformation fields and input into the pre-trained FBP-Net; the reconstructed CT image results are adaptively weighted and fused to obtain a fused high-quality CT target image, and the FBP-Net parameters are deeply re-optimized to obtain the semi-supervised quantitative imaging model;
s33, constructing a CT quantitative imaging unsupervised incremental learning network; designing a CT quantitative imaging unsupervised incremental learning network based on the statistical characteristics of noisy CT projection data before Log transformation and the sparsity of a high-order derivative of the CT projection data; an objective function in a deep learning network is constructed based on a maximum posterior probability framework, the objective function comprises a data fidelity item based on a likelihood function and a prior information item based on piecewise linearity, the objective function can more accurately describe CT projection data and can carry out training optimization along a gradient direction, and the expression is as follows:
wherein C_1 represents the noisy CT projection training data set, y_i is the i-th noisy CT projection training data, N represents the number of data sets, G represents the number of X-ray photons satisfying the compound Poisson statistics, I represents the number of photons received by the detector, I_i represents the number of photons received by the i-th detector, G_i represents the approximate value of I_i, G_j! denotes the factorial of G in the j-th projection, I_0j represents the number of incident photons in the j-th projection, ε represents the electronic noise and obeys a Gaussian distribution with mean 0 and variance σ², D² represents a second-order difference operator, and f_θ represents the unsupervised learning network mapping, where θ is the set of parameters to be learned in the mapping; a high-quality CT image is reconstructed with the filtered back-projection algorithm, and f_θ(y_i)_j represents the unsupervised learning network mapping of the i-th noisy CT projection training data y_i at pixel j.
7. The intelligent patient monitoring and sequencing method of claim 1, wherein said S4 comprises:
s41, constructing a vascular qualitative identification network based on the video depth migration learning technology, including:
S4101, constructing a dynamic depth-of-field area identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the three-dimensional sequence input image of the i-th time frame and Y_i is the label data of the i-th time frame; for the network output result, the cost function is:
s4102, constructing a video depth of field migration learning network;
s42, constructing a blood vessel quantitative recognition network based on medical multi-modal migration imaging, comprising the following steps:
S4201, constructing the quantitative blood vessel identification network: the training set is defined as S = {(X_i, Y_i), i = 1, 2, 3, …, n}, where X_i is the i-th input image and Y_i ∈ {0, 1} is the label of the i-th input image, with Y_i = 1 defined as an abnormal image and Y_i = 0 as a non-abnormal image;
the probability of the k-th pixel of the i-th input image is defined, where k ∈ {1, 2, 3, …, |X_i|} and |X_i| denotes the total number of pixels of X_i; the image-level prediction probability is computed from all the pixel-level probabilities, and the cost function is:
wherein I(·) is an indicator function, the pixel probabilities are calculated by the Softmax function, r is a control parameter, and L_MIL represents the cost function;
s4202, constructing the multi-mode migration learning network.
8. The intelligent patient monitoring and sequencing method of claim 1, wherein said S5 comprises:
s51, constructing a strategy network for intelligent monitoring and sequencing of emergency patients;
and S52, constructing a monitoring sequencing result obtained by a depth network generated by the auxiliary report facing the vascular diseases.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111089390.0A CN113781461A (en) | 2021-09-16 | 2021-09-16 | Intelligent patient monitoring and sequencing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111089390.0A CN113781461A (en) | 2021-09-16 | 2021-09-16 | Intelligent patient monitoring and sequencing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113781461A true CN113781461A (en) | 2021-12-10 |
Family
ID=78851677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111089390.0A Pending CN113781461A (en) | 2021-09-16 | 2021-09-16 | Intelligent patient monitoring and sequencing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781461A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115345803A (en) * | 2022-10-19 | 2022-11-15 | 合肥视微科技有限公司 | Residual error network-based annular artifact correction method in CT (computed tomography) tomography |
CN115836855A (en) * | 2023-02-22 | 2023-03-24 | 首都医科大学附属北京朝阳医院 | Mobile magnetic resonance equipment imaging method and device, storage medium and terminal |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481297A (en) * | 2017-08-31 | 2017-12-15 | 南方医科大学 | A kind of CT image rebuilding methods based on convolutional neural networks |
US20170372193A1 (en) * | 2016-06-23 | 2017-12-28 | Siemens Healthcare Gmbh | Image Correction Using A Deep Generative Machine-Learning Model |
CN109035169A (en) * | 2018-07-19 | 2018-12-18 | 西安交通大学 | A kind of unsupervised/semi-supervised CT image reconstruction depth network training method |
CN111968110A (en) * | 2020-09-02 | 2020-11-20 | 广州海兆印丰信息科技有限公司 | CT imaging method, device, storage medium and computer equipment |
CN111968111A (en) * | 2020-09-02 | 2020-11-20 | 广州海兆印丰信息科技有限公司 | Method and device for identifying visceral organs or artifacts of CT (computed tomography) image |
CN111968167A (en) * | 2020-09-02 | 2020-11-20 | 广州海兆印丰信息科技有限公司 | Image processing method and device for CT three-dimensional positioning image and computer equipment |
CN111968108A (en) * | 2020-09-02 | 2020-11-20 | 广州海兆印丰信息科技有限公司 | CT intelligent imaging method, device and system based on intelligent scanning protocol |
CN112037146A (en) * | 2020-09-02 | 2020-12-04 | 广州海兆印丰信息科技有限公司 | Medical image artifact automatic correction method and device and computer equipment |
CN112348792A (en) * | 2020-11-04 | 2021-02-09 | 广东工业大学 | X-ray chest radiograph image classification method based on few-shot learning and self-supervised learning |
CN112508063A (en) * | 2020-11-23 | 2021-03-16 | 刘勇志 | Medical image classification method based on incremental learning |
Non-Patent Citations (3)
Title |
---|
PRATYUSH KUMAR ET AL: "Example Mining for Incremental Learning in Medical Imaging", 《ARXIV》, 31 December 2018 (2018-12-31) * |
SHI JUN ET AL: "Review of deep learning applications in medical imaging", 《中国图象图形学报》 (Journal of Image and Graphics), vol. 25, no. 10, 31 December 2020 (2020-12-31) * |
ZHU SENHUA ET AL: "Applications and reflections of artificial intelligence technology in the medical imaging industry", 《人工智能》 (Artificial Intelligence), no. 03, 10 June 2020 (2020-06-10) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liang et al. | MCFNet: Multi-layer concatenation fusion network for medical images fusion | |
CN110503654B (en) | Medical image segmentation method and system based on generative adversarial network, and electronic equipment | |
CN111476292B (en) | Few-shot meta-learning training method for artificial-intelligence medical image classification | |
CN110992351B (en) | sMRI image classification method and device based on multi-input convolution neural network | |
CN109544518B (en) | Method and system applied to bone maturity assessment | |
CN110930421A (en) | Segmentation method for CBCT (Cone Beam computed tomography) tooth image | |
CN113314205B (en) | Efficient medical image labeling and learning system | |
CN111583285B (en) | Liver image semantic segmentation method based on edge attention strategy | |
CN111667483B (en) | Training method of segmentation model of multi-modal image, image processing method and device | |
CN113781461A (en) | Intelligent patient monitoring and sequencing method | |
CN111667027B (en) | Multi-modal image segmentation model training method, image processing method and device | |
CN113688862B (en) | Brain image classification method based on semi-supervised federal learning and terminal equipment | |
CN111798439A (en) | Medical image quality interpretation method and system for online and offline fusion and storage medium | |
CN114897914A (en) | Semi-supervised CT image segmentation method based on adversarial training | |
CN112529063A (en) | Depth domain adaptive classification method suitable for Parkinson voice data set | |
CN110477907A (en) | An intelligent method for assisting in identifying epileptic seizures | |
La Rosa | A deep learning approach to bone segmentation in CT scans | |
CN111784713A (en) | U-shaped heart segmentation method incorporating an attention mechanism | |
CN116563549A (en) | Magnetic resonance image heart segmentation method based on coarse-grained weak annotation | |
Zhao et al. | MPSHT: multiple progressive sampling hybrid model multi-organ segmentation | |
CN117274599A (en) | Brain magnetic resonance segmentation method and system based on combined double-task self-encoder | |
CN112750131A (en) | Pelvis nuclear magnetic resonance image musculoskeletal segmentation method based on scale and sequence relation | |
CN116433679A (en) | Inner ear labyrinth multi-level labeling pseudo-label generation and segmentation method based on spatial position structure prior | |
CN110613445A (en) | DWNN framework-based electrocardiosignal identification method | |
CN115798709A (en) | Alzheimer disease classification device and method based on multitask graph isomorphic network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||