CN110942462B - Organ deep learning segmentation method in medical image fused with discrete features - Google Patents
- Publication number
- CN110942462B (application CN201811110632.8A)
- Authority
- CN
- China
- Prior art keywords
- medical image
- deep learning
- discrete
- vectors
- feature set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention belongs to the technical fields of medical imaging and artificial intelligence, and relates to a medical image organ deep learning segmentation method, device, and storage medium fusing discrete features. The method comprises the following steps: representing the elements of each discrete feature set as word vectors encoded as one-hot vectors; converting these into real-valued feature vectors of the same fixed length, fusing them, and converting the fused result into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed; reshaping this vector into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed, fusing that matrix with the medical image to obtain the input of a semantic-segmentation deep learning network, and then training the network for image semantic segmentation on this input. Through this construction of the network input, the invention allows non-image discrete feature information associated with the image to enter the network for learning, and overcomes the high data-acquisition cost, and in some cases outright infeasibility, of the traditional approach of training networks by data expansion.
Description
Technical Field
The invention belongs to the technical fields of medical imaging and artificial intelligence, and relates to a medical image organ deep learning segmentation method, device, and storage medium fusing discrete features.
Background
Currently, when organ segmentation is performed on medical images (e.g., CT, MRI, PET) using deep learning neural network architectures, the typical network input is limited to one or more types of medical image data of fixed size, all represented on the computer as matrices or vectors of the same size. However, medical image data may come from different scanning centers (e.g., different hospitals and laboratories), from scanners of different manufacturers (e.g., Philips, Siemens, General Electric), and from subjects of different regions, ethnic groups, ages, and sexes, and these various kinds of discrete information provide important auxiliary guidance for the image-based organ segmentation task. Yet in existing deep-learning implementations of medical image organ segmentation, no method effectively integrates such discrete information into the network model as features that directly assist training. At most, the robustness of model learning is enhanced by multi-source data augmentation across different centers and different scanners, but this requires collecting as much data as possible, posing great challenges in data cost, operability of data collection, and difficulty and duration of network training; fundamentally, all possible diversified samples can never be obtained by exhaustion.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a medical image organ deep learning segmentation method, equipment and a storage medium for fusing discrete features.
To achieve this purpose, the invention adopts the following technical scheme:
A medical image organ deep learning segmentation method fusing discrete features, suitable for execution in a computing device, comprising the following steps:
(1) Representing the elements of each discrete feature set as word vectors encoded as one-hot vectors, based on a bag-of-words model;
(2) Inputting the one-hot word vectors of each discrete feature set into an embedding layer of a semantic-segmentation preprocessing neural network, the embedding layer converting the one-hot vectors of each discrete feature set into real-valued feature vectors of the same fixed length;
(3) Fusing the real-valued feature vectors obtained in step (2) within the semantic-segmentation preprocessing neural network to obtain a feature vector fusing all the discrete feature sets;
(4) Inputting the fused feature vector obtained in step (3) into a fully connected layer of the semantic-segmentation preprocessing neural network, the fully connected layer converting it into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed;
(5) Reshaping the one-dimensional feature vector obtained in step (4) into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed;
(6) Fusing the matrix obtained in step (5) with the medical image to be processed, feeding the fusion result into a semantic-segmentation deep learning network, and training that network for semantic segmentation.
Further preferred features of the invention:
In step (1), the discrete feature sets include one or more of: an imaging-center feature set, a scanner feature set, a scanning-sequence feature set, a gender feature set, an age-group feature set, a region feature set, and an ethnicity feature set.
The length of the one-hot vector in a discrete feature set equals the number of elements defined in that set.
In step (2), the embedding layer converts the one-hot vectors of the different discrete feature sets into fixed-length real-valued vectors by word embedding. This unifies the feature lengths of the different discrete feature sets and avoids the high sparsity of one-hot representations. Word embedding can be implemented directly in a deep learning framework, e.g. via TensorFlow's tf.nn.embedding_lookup function or Keras's Embedding layer.
The fusion in step (3) or step (6) is implemented by addition, subtraction, element-wise (dot) multiplication, concatenation, or averaging.
In step (5), the size of the matrix reshaped from the one-dimensional feature vector is the same as the matrix size of the input medical image.
In step (6), the two-dimensional or three-dimensional matrix representation is fused as a channel of the medical image.
The dimensionality of the semantic-segmentation deep learning neural network is consistent with the dimensionality of the input medical image.
Steps (2) to (6) are performed in the input layer of the semantic-segmentation preprocessing neural network.
The present invention also provides a computing device comprising:
one or more processors;
a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a method for medical image organ deep learning segmentation fusing discrete features.
The present invention also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions adapted to be loaded from a memory and to perform the above-described method for organ deep learning segmentation of medical images incorporating discrete features.
The invention has the following beneficial effects:
1. Through the construction of the network input, various non-image discrete feature information related to the image is allowed to enter the network together with the image for learning;
2. Because the image auxiliary information enters the image segmentation network for training together with the image, the network can adaptively learn and optimize segmentation parameters for medical image segmentation according to the different auxiliary information;
3. The high data-acquisition cost, and in some cases outright infeasibility, of the traditional approach of training networks by expanding the data is overcome;
4. The network-level representation of discrete feature sets is a basic module that can be extended to any other task requiring discrete feature set information for deep learning network training, and is highly adaptable;
5. The discrete feature set vector representation proposed by the invention can be further extended to the case where each discrete feature set has a different representation length, and can be connected to fully connected, convolutional, pooling, and other layers of a deep learning network via concatenation and similar operations, deriving feature representations suited to different network learning tasks; it is therefore highly flexible.
Drawings
Fig. 1 is a schematic diagram of a medical image organ deep learning segmentation method fusing discrete features according to a preferred embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the drawings.
A method for organ deep learning segmentation in medical images fusing discrete features, suitable for execution in a computing device, comprises the following steps:
(1) Representing the elements of each discrete feature set as word vectors encoded as one-hot vectors, based on a bag-of-words model;
the discrete feature set of the present embodiment further preferably includes one or more of a collection center feature set, a scanning machine feature set, a scanning sequence feature set, a gender feature set, an age-level feature set, a regional feature set, an anthropogenic feature set, and the like.
The length of each one-hot vector equals the number of elements defined in its discrete feature set. In one example embodiment, [Philips, Siemens, General Electric] is defined in sequence as the elements of the discrete feature set of scanner manufacturers; any of these three manufacturers can then be represented as a one-hot vector of length 3, namely [1,0,0], [0,1,0], and [0,0,1] respectively. In another example embodiment, [male, female] is defined in sequence as the elements of the gender feature set; gender can then be represented as a one-hot vector of length 2, with male = [1,0] and female = [0,1].
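As an illustrative sketch (not the patented implementation itself; the vocabularies are the example elements above), the one-hot construction of step (1) can be written in a few lines of Python:

```python
def one_hot(vocab, element):
    """Encode `element` as a one-hot vector whose length equals
    the number of elements defined in the discrete feature set."""
    vec = [0] * len(vocab)
    vec[vocab.index(element)] = 1
    return vec

# Example discrete feature sets from the embodiments above
scanners = ["Philips", "Siemens", "General Electric"]
genders = ["male", "female"]

print(one_hot(scanners, "Siemens"))  # [0, 1, 0]
print(one_hot(genders, "male"))      # [1, 0]
```

Each vector has exactly one nonzero entry, at the index of the element in its feature set's defined ordering.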
(2) The one-hot word vectors of each discrete feature set are fed into the embedding layers as the input of the semantic-segmentation preprocessing neural network, and each embedding layer converts the one-hot vectors of its discrete feature set into real-valued feature vectors of the same fixed length;
The one-hot vectors of the different discrete feature sets are converted into real-valued vectors of the same fixed length by word embedding. This unifies the feature lengths of the different discrete feature sets and avoids the high sparsity of one-hot representations. Word embedding can be implemented directly in a deep learning framework, e.g. via TensorFlow's tf.nn.embedding_lookup function or Keras's Embedding layer.
(3) The real-valued feature vectors obtained in step (2) are fused within the semantic-segmentation preprocessing neural network to obtain a feature vector fusing all the discrete feature sets;
Preferably, the fusion in this step can be implemented by addition, subtraction, element-wise (dot) multiplication, concatenation, or averaging.
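The fusion alternatives named here can be sketched as follows; the two vectors are small placeholders for embedded feature vectors of equal fixed length:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # embedded vector of feature set 1 (placeholder)
b = np.array([3.0, 2.0, 1.0])  # embedded vector of feature set 2 (placeholder)

fused_add = a + b                   # addition
fused_sub = a - b                   # subtraction (differencing)
fused_mul = a * b                   # element-wise (dot) multiplication
fused_cat = np.concatenate([a, b])  # concatenation
fused_avg = (a + b) / 2.0           # averaging
```

Note the design trade-off: concatenation preserves all information but grows the vector length with the number of feature sets, while the other operations keep the fused vector at the common fixed length.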
(4) The fused feature vector obtained in step (3) is input into a fully connected layer of the semantic-segmentation preprocessing neural network, which converts it into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed;
(5) The one-dimensional feature vector obtained in step (4) is reshaped into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed;
(6) The matrix obtained in step (5) is fused with the medical image to be processed to form the input of the semantic-segmentation deep learning network, on which image semantic-segmentation training is then performed. The dimensionality of the semantic-segmentation deep learning neural network matches that of the input medical image: for an input 2D medical image the network is two-dimensional, and for an input 3D medical image it is three-dimensional. Preferably, the fusion in this step can be implemented by addition, subtraction, element-wise (dot) multiplication, concatenation, or averaging.
In this embodiment, steps (2) to (6) are performed in the semantic-segmentation preprocessing neural network.
Taking a 2D medical image as an example, the invention is further described with reference to fig. 1. The row and column size [Row, Col] of the 2D medical image is (128, 128). If there are multi-sequence or multi-modality image data, they can be combined into the form [Row, Col, Channel] (where Channel denotes the number of channels of the image) as the image input without affecting the subsequent operations (a single sequence or single modality is simply the special case Channel = 1).
Based on the bag-of-words model, the elements of each discrete feature set, such as the scanner and scanning site shown in fig. 1, are represented as word vectors encoded as one-hot vectors. The one-hot word vectors of each discrete feature set are fed into the embedding layers as the input of the semantic-segmentation preprocessing neural network, and according to the length and width of the medical image the real-valued vector length of each discrete feature set is set to the same fixed length Row×Col, i.e. (1, 128×128). The Row×Col vectors of the n different discrete feature sets are concatenated to obtain a vector of length n×Row×Col, i.e. (1, 128×128×n). A fully connected layer maps the concatenated vector back to a Row×Col vector, i.e. (1, 128×128); this fully connected mapping yields the same number of elements as the input medical image. The resulting Row×Col vector (1, 128×128) is reshaped into a [Row, Col] matrix of size (128, 128), so that the reshaped matrix has the same size as the input medical image. The 2D image input [Row, Col] is then directly concatenated with the obtained matrix representation of the discrete feature sets to form a two-channel input [Row, Col, 2] of size (128, 128, 2), which is fed into a generic 2D image semantic-segmentation network (e.g., 2D U-Net) to start model training in the conventional way.
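The full 2D pipeline of fig. 1 can be sketched end-to-end in NumPy. Random weights stand in for the learned embedding and fully connected parameters, n = 2 feature sets (e.g. scanner and site) are assumed, and an 8×8 image replaces the patent's 128×128 example only to keep the sketch light:

```python
import numpy as np

rng = np.random.default_rng(0)
ROW, COL = 8, 8  # the patent's example uses 128x128; 8x8 keeps this sketch small
n = 2            # number of discrete feature sets (assumed for illustration)

# Step (2): each feature set is embedded to a real vector of length ROW*COL
feature_vecs = [rng.normal(size=ROW * COL) for _ in range(n)]

# Step (3): concatenate the n vectors -> length n*ROW*COL
concat = np.concatenate(feature_vecs)

# Step (4): a fully connected layer maps back to ROW*COL elements
# (random weights here; these would be learned during training)
W = rng.normal(size=(n * ROW * COL, ROW * COL)) * 0.01
dense_out = concat @ W

# Step (5): reshape to the image size
feat_map = dense_out.reshape(ROW, COL)

# Step (6): concatenate with the 2D image as a second channel
image = rng.normal(size=(ROW, COL))                 # placeholder medical image
net_input = np.stack([image, feat_map], axis=-1)    # shape (ROW, COL, 2)
```

Feeding `net_input` into a 2D segmentation network such as U-Net then proceeds as with any [Row, Col, Channel] input.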
The present invention also provides a computing device comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a method for deep learning segmentation of medical image organs by fusing discrete features, the method comprising the steps of:
(1) Representing the elements of each discrete feature set as word vectors encoded as one-hot vectors, based on a bag-of-words model;
(2) Inputting the one-hot word vectors of each discrete feature set into an embedding layer of a semantic-segmentation preprocessing neural network, which converts the one-hot vectors of each discrete feature set into real-valued feature vectors of the same fixed length;
(3) Fusing the real-valued feature vectors obtained in step (2) within the semantic-segmentation preprocessing neural network to obtain a feature vector fusing all the discrete feature sets;
(4) Inputting the fused feature vector obtained in step (3) into a fully connected layer of the semantic-segmentation preprocessing neural network, which converts it into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed;
(5) Reshaping the one-dimensional feature vector obtained in step (4) into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed;
(6) Fusing the matrix obtained in step (5) with the medical image to be processed, taking the fusion result as the input of the semantic-segmentation deep learning network, and training that network for semantic segmentation.
The present invention also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions adapted to be loaded from a memory and to execute the above method for organ deep learning segmentation of medical images with fused discrete features, the method comprising the steps of:
(1) Representing the elements of each discrete feature set as word vectors encoded as one-hot vectors, based on a bag-of-words model;
(2) Inputting the one-hot word vectors of each discrete feature set into an embedding layer of a semantic-segmentation preprocessing neural network, which converts the one-hot vectors of each discrete feature set into real-valued feature vectors of the same fixed length;
(3) Fusing the real-valued feature vectors obtained in step (2) within the semantic-segmentation preprocessing neural network to obtain a feature vector fusing all the discrete feature sets;
(4) Inputting the fused feature vector obtained in step (3) into a fully connected layer of the semantic-segmentation preprocessing neural network, which converts it into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed;
(5) Reshaping the one-dimensional feature vector obtained in step (4) into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed;
(6) Fusing the matrix obtained in step (5) with the medical image to be processed, taking the fusion result as the input of the semantic-segmentation deep learning network, and training that network for semantic segmentation.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The embodiments described above are intended to facilitate a person skilled in the art to understand and use the invention. It will be readily apparent to those skilled in the art that various modifications to these embodiments may be made, and the generic principles described herein may be applied to other embodiments without the use of the inventive faculty. Therefore, the present invention is not limited to the embodiments described herein, and those skilled in the art should make improvements and modifications within the scope of the present invention based on the disclosure of the present invention.
Claims (10)
1. A medical image organ deep learning segmentation method fusing discrete features, characterized in that it is adapted to be executed in a computing device and comprises the following steps:
(1) Representing the elements of each discrete feature set as word vectors encoded as one-hot vectors, based on a bag-of-words model;
(2) Inputting the one-hot word vectors of each discrete feature set into an embedding layer of a semantic-segmentation preprocessing neural network, which converts the one-hot vectors of each discrete feature set into real-valued feature vectors of the same fixed length;
(3) Fusing the real-valued feature vectors obtained in step (2) within the semantic-segmentation preprocessing neural network to obtain a feature vector fusing all the discrete feature sets;
(4) Inputting the fused feature vector obtained in step (3) into a fully connected layer of the semantic-segmentation preprocessing neural network, which converts it into a one-dimensional feature vector whose length equals the number of pixels or voxels of the medical image to be processed;
(5) Reshaping the one-dimensional feature vector obtained in step (4) into a two-dimensional or three-dimensional matrix of the same size as the medical image to be processed;
(6) Fusing the matrix obtained in step (5) with the medical image to be processed, taking the fusion result as the input of the semantic-segmentation deep learning network, and training that network for semantic segmentation.
2. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: in step (1), the discrete feature sets include one or more of an imaging-center feature set, a scanner feature set, a scanning-sequence feature set, a gender feature set, an age-group feature set, a region feature set, or an ethnicity feature set.
3. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: the length of the one-hot vector in the discrete feature set is equal to the number of elements defined in the discrete feature set.
4. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: in step (2), the embedding layer converts the one-hot vectors of the different discrete feature sets into real-valued vectors of the same fixed length by word embedding.
5. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: in the step (5), the size of the matrix after the one-dimensional feature vector is reconstructed is the same as the size of the matrix of the input medical image.
6. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: the fusion in step (3) or step (6) is realized by addition, subtraction, element-wise (dot) multiplication, concatenation, or averaging.
7. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: in step (6), the two-dimensional or three-dimensional matrix representation is fused as a channel of the medical image.
8. The method for deep learning and segmenting medical image organs by fusing discrete features as claimed in claim 1, wherein: the dimensionality of the image semantic segmentation network is consistent with the dimensionality of the input medical image.
9. A computing device, comprising:
one or more processors;
a memory; and
one or more programs stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the method for organ deep learning segmentation of medical images with fused discrete features according to any one of claims 1 to 8.
10. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions adapted to be loaded from a memory and to carry out the method for organ deep learning segmentation of medical images fusing discrete features according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811110632.8A CN110942462B (en) | 2018-09-21 | 2018-09-21 | Organ deep learning segmentation method in medical image fused with discrete features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110942462A CN110942462A (en) | 2020-03-31 |
CN110942462B true CN110942462B (en) | 2022-12-13 |
Family
ID=69904669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811110632.8A Active CN110942462B (en) | 2018-09-21 | 2018-09-21 | Organ deep learning segmentation method in medical image fused with discrete features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110942462B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744845A (en) * | 2021-09-17 | 2021-12-03 | 平安好医投资管理有限公司 | Medical image processing method, device, equipment and medium based on artificial intelligence |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101443811A (en) * | 2005-09-23 | 2009-05-27 | 皇家飞利浦电子股份有限公司 | A method, a system and a computer program for image segmentation |
CN103140855A (en) * | 2010-07-28 | 2013-06-05 | 瓦里安医疗系统公司 | Knowledge-based automatic image segmentation |
CN103514456A (en) * | 2013-06-30 | 2014-01-15 | 安科智慧城市技术(中国)有限公司 | Image classification method and device based on compressed sensing multi-core learning |
CN105095964A (en) * | 2015-08-17 | 2015-11-25 | 杭州朗和科技有限公司 | Data processing method and device |
WO2016161195A1 (en) * | 2015-03-31 | 2016-10-06 | Cortechs Labs, Inc. | Covariate modulate atlas |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10949975B2 (en) * | 2015-04-13 | 2021-03-16 | Siemens Healthcare Gmbh | Patient management based on anatomic measurements |
- 2018-09-21: CN201811110632.8A filed in CN; granted as CN110942462B (status: active)
Non-Patent Citations (2)
Title |
---|
MTBI Identification From Diffusion MR Images Using Bag of Adversarial Visual Features;Shervin Minaee;《arXiv:1806.10419v1》;20180627;全文 * |
Research on Feature Fusion and Feature Learning in Medical Image Classification; He Lele; China Masters' Theses Full-text Database, Information Science and Technology; 20160315; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11158069B2 (en) | Unsupervised deformable registration for multi-modal images | |
US10740897B2 (en) | Method and device for three-dimensional feature-embedded image object component-level semantic segmentation | |
US10810735B2 (en) | Method and apparatus for analyzing medical image | |
CN109285200B (en) | Multimode medical image conversion method based on artificial intelligence | |
KR102053527B1 (en) | Method for image processing | |
US20120170823A1 (en) | System and method for image based multiple-modality cardiac image alignment | |
CN111597946B (en) | Processing method of image generator, image generation method and device | |
WO2017158575A1 (en) | Method and system for processing a task with robustness to missing input information | |
US11080889B2 (en) | Methods and systems for providing guidance for adjusting an object based on similarity | |
KR101977067B1 (en) | Method for reconstructing diagnosis map by deep neural network-based feature extraction and apparatus using the same | |
US20220058798A1 (en) | System and method for providing stroke lesion segmentation using conditional generative adversarial networks | |
CN110544275A (en) | Methods, systems, and media for generating registered multi-modality MRI with lesion segmentation tags | |
US20220253977A1 (en) | Method and device of super-resolution reconstruction, computer device and storage medium | |
CN113554742A (en) | Three-dimensional image reconstruction method, device, equipment and storage medium | |
CN110942462B (en) | Organ deep learning segmentation method in medical image fused with discrete features | |
CN111243052A (en) | Image reconstruction method and device, computer equipment and storage medium | |
CN111062944B (en) | Network model training method and device and image segmentation method and device | |
CN113724185A (en) | Model processing method and device for image classification and storage medium | |
CN115131361A (en) | Training of target segmentation model, focus segmentation method and device | |
CN113053496B (en) | Deep learning method for low-dose estimation of medical image | |
CN112801908B (en) | Image denoising method and device, computer equipment and storage medium | |
US20220180194A1 (en) | Method for improving reproduction performance of trained deep neural network model and device using same | |
CN111369564A (en) | Image processing method, model training method and model training device | |
KR20200131722A (en) | Method for improving reproducibility of trained deep neural network model and apparatus using the same | |
CN117174261B (en) | Multi-type labeling flow integrating system for medical images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||