WO2023098524A1 - Evaluation method, apparatus, device and storage medium for multimodal medical data fusion - Google Patents
Evaluation method, apparatus, device and storage medium for multimodal medical data fusion
- Publication number
- WO2023098524A1 (application PCT/CN2022/133614, also referenced as CN2022133614W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- fusion
- feature vector
- matrix
- medical data
- Prior art date
Classifications
- G06T7/0012—Biomedical image inspection (image analysis; inspection of images, e.g. flaw detection)
- G06F18/214—Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/253—Pattern recognition; fusion techniques of extracted features
- G06N3/045—Neural networks; combinations of networks
- G06N3/08—Neural networks; learning methods
- G06V10/806—Image or video recognition; fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
- G06V10/82—Image or video recognition using neural networks
- G16H30/40—ICT specially adapted for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/70—ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
- G06T2207/10081—Image acquisition modality: computed x-ray tomography [CT]
- G06T2207/30028—Subject of image: colon; small intestine
- G06T2207/30096—Subject of image: tumor; lesion
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present application relates to the field of medical technology, for example, to an evaluation method, device, equipment and storage medium for multimodal medical data fusion.
- Rectal cancer is one of the main cancers that threaten the life and health of Chinese residents, and has caused a serious social burden.
- the main treatment methods for rectal cancer include comprehensive treatment methods such as surgery, radiotherapy, chemotherapy, and targeted therapy.
- in patients with low rectal cancer, damage caused by the tumor or by surgery may lead to impaired anal function, loss of the anus, and a colostomy, which seriously affect patients' survival and treatment.
- Many patients with locally advanced rectal cancer are not suitable for surgical treatment because one-stage surgery cannot achieve the goal of radical cure.
- the standard treatment for locally advanced rectal cancer (≥cT3 or N+) is neoadjuvant chemoradiotherapy combined with total mesorectal excision and adjuvant chemotherapy.
- Neoadjuvant therapy can effectively achieve tumor downstaging, improve the rate of resection and sphincter preservation. Neoadjuvant therapy also provides better options for preserving organ function in patients with low rectal cancer.
- for the evaluation of the effect of neoadjuvant therapy for rectal cancer, most clinical guidelines and expert consensus recommend that multimodal data such as endoscopy, digital rectal examination, rectal MRI, serum tumor marker levels, and enhanced CT of the chest, abdomen, and pelvis be used to comprehensively judge whether a patient has reached clinical complete remission or near-clinical remission.
- the evaluation of the effect of neoadjuvant therapy for rectal cancer relies on a multidisciplinary tumor diagnosis and treatment team with experienced experts from departments such as surgery, internal medicine, radiotherapy, imaging, digestive endoscopy, and pathology. Because experts in certain specialties are lacking, many medical institutions cannot carry out neoadjuvant treatment of rectal cancer well.
- Embodiments of the present disclosure provide an evaluation method, apparatus, device, and storage medium for multimodal medical data fusion, to solve the technical problem in the related art that clinicians find it difficult to accurately evaluate a patient's condition manually, resulting in relatively high medical risk for patients.
- an embodiment of the present disclosure provides an evaluation method for multimodal medical data fusion, including:
- acquiring the medical data to be evaluated in multiple modalities of a target object;
- performing feature extraction on the medical data to be evaluated of each modality to obtain multiple feature vectors;
- fusing the multiple feature vectors to obtain a fused feature vector;
- inputting the fused feature vector into a pre-trained multimodal fusion evaluation model, so as to obtain the evaluation results of the multimodal medical data to be evaluated output by the pre-trained multimodal fusion evaluation model.
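- Read as a whole, these four steps form a simple pipeline. The sketch below is purely illustrative: the names (evaluate_patient, extractors, model) are hypothetical placeholders and are not taken from the patent.

```python
import torch

def evaluate_patient(modality_data: dict, extractors: dict, model: torch.nn.Module) -> torch.Tensor:
    """Hypothetical end-to-end sketch of the four steps above; none of these names
    come from the patent."""
    # 1) acquire: modality_data maps a modality name to the raw tensor for one target object
    # 2) per-modality feature extraction -> one feature vector per modality
    feature_vectors = [extractors[name](data) for name, data in modality_data.items()]
    # 3) fuse the feature vectors (here simply stacked; the patent splices them into a matrix)
    fused = torch.stack(feature_vectors)
    # 4) the pre-trained multimodal fusion evaluation model outputs the evaluation result
    with torch.no_grad():
        return model(fused.unsqueeze(0))
```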
- inputting the fused feature vector into the pre-trained multimodal fusion evaluation model, so as to obtain the evaluation results of the multimodal medical data to be evaluated output by the pre-trained multimodal fusion evaluation model, includes:
- Each feature vector in the fused feature vector is spliced horizontally to obtain the first feature-vector matrix W(In), and a first function is used to positionally encode the first matrix W(In) to obtain the second feature-vector matrix W(P) (one common form of such an encoding is sketched after the symbol definitions below). In the formula used:
- t represents a sub-vector in the first matrix W(In) of the eigenvector
- p(t) represents the encoding result corresponding to the t value
- pos denotes the index of the feature vector to which the sub-vector t belongs
- i denotes the index position of the sub-vector t within the first feature-vector matrix W(In)
- d denotes the number of dimensions in the horizontal direction of the first feature-vector matrix W(In)
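- The first function itself is published only as a figure; the sketch below assumes the standard transformer-style sinusoidal positional encoding, which is consistent with the symbols pos, i and d defined above but is not confirmed by the patent text.

```python
import numpy as np

def positional_encoding(num_vectors: int, d: int) -> np.ndarray:
    """Standard sinusoidal positional encoding (assumed form; the patent's own formula
    is published only as an image). Rows index the sub-vector position, columns index
    the d horizontal dimensions of W(In)."""
    positions = np.arange(num_vectors)[:, None]            # pos
    dims = np.arange(d)[None, :]                           # dimension index
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d)
    angles = positions * angle_rates
    pe = np.zeros((num_vectors, d))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                  # even dimensions
    pe[:, 1::2] = np.cos(angles[:, 1::2])                  # odd dimensions
    return pe

# W(P) could then be formed by adding the encoding to the horizontally spliced W(In):
# W_P = W_In + positional_encoding(W_In.shape[0], W_In.shape[1])
```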
- the second feature-vector matrix W(P) is input to a second function, and the high-dimensional feature representation matrix W(M) on the subspace is calculated using the following formula:
- W(M) = Concat(F(1), F(2), ..., F(i)) · W_0;
- the CONCAT function represents the second function
- F(1), F(2), ..., F(i) denote applying the formula F to the i-th feature sub-vector in the second feature-vector matrix W(P)
- W_0 denotes the transpose of the first feature-vector matrix W(In)
- the x in F(i) represents the i-th eigensubvector in the second matrix W(P) of the input eigenvector;
- Q, K, and V denote the linear perception layers over the parameter n of the hidden layer of the multimodal fusion evaluation model;
- Q(x) means linear regression on x;
- the feature vectors of the individual images are encoded by the encoder of the multimodal fusion evaluation model; the encoder output W(Out) is input to a linear regression layer, which converts W(Out) into a low-dimensional feature representation matrix, and the evaluation result is finally output through a softmax operation.
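- As a rough illustration of this evaluation path, the sketch below uses standard multi-head self-attention with Q/K/V linear layers and a transformer encoder in place of the patent's unpublished formula F and encoder details; the head count, layer sizes and the two-class output are all assumptions.

```python
import torch
import torch.nn as nn

class FusionEvaluationHead(nn.Module):
    """Sketch of the evaluation path described above. The exact form of F, the head
    count and the hidden sizes are assumptions; the patent gives them only as images."""
    def __init__(self, d_model: int, num_heads: int = 4, num_classes: int = 2):
        super().__init__()
        # Q, K, V linear ("linear perception") layers are provided internally by the
        # attention in nn.TransformerEncoderLayer; the encoder stands in for "the encoder".
        encoder_layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.regression = nn.Linear(d_model, num_classes)   # "linear regression layer"

    def forward(self, w_p: torch.Tensor) -> torch.Tensor:
        # w_p: (batch, num_feature_vectors, d_model), i.e. W(P) with positions encoded
        w_out = self.encoder(w_p)                  # encoder output W(Out)
        pooled = w_out.mean(dim=1)                 # collapse the sequence dimension
        logits = self.regression(pooled)           # low-dimensional feature representation
        return torch.softmax(logits, dim=-1)       # evaluation result

# Example: four fused 128-dimensional modality vectors for one patient.
scores = FusionEvaluationHead(d_model=128)(torch.randn(1, 4, 128))
```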
- obtaining the medical data to be evaluated in multiple modalities of the target object includes at least three of the following methods:
- the rectal cancer image data set at least includes a macroscopic-view image, a near-view image and a microscopic-view image determined according to the tumor region or the regressed tumor region;
- acquiring the rectal cancer magnetic resonance imaging data set of the target object as the second modality data, wherein the rectal cancer magnetic resonance imaging data set includes initial rectal cancer magnetic resonance imaging data and target rectal cancer magnetic resonance imaging data;
- the tumor region or regressed tumor region is annotated in the initial rectal cancer magnetic resonance imaging data and the target rectal cancer magnetic resonance imaging data respectively, and several slice images containing the tumor region or regressed tumor region are obtained;
- acquiring the initial clinical data set and the target clinical data set of the target object as the third modality data, wherein the initial clinical data set and the target clinical data set include at least the personal information and case information of the target object;
- the initial tumor marker information, target tumor marker information, initial blood information, and target blood information of the target subject are acquired as fourth modality data.
- feature extraction is performed on the medical data to be evaluated for each modality to obtain multiple feature vectors, including:
- the high-dimensional feature map extracted by the last three-dimensional convolution kernel is converted into a one-dimensional feature vector through β upsampling modules and a fully connected layer of the neural network model, to obtain the first feature vector and the second feature vector respectively.
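- A minimal sketch of this extraction path is shown below; the numbers of three-dimensional convolution modules (α) and upsampling modules (β), and all channel sizes, are assumptions, since the published text does not fix them.

```python
import torch
import torch.nn as nn

class VolumeFeatureExtractor(nn.Module):
    """Minimal 3D-CNN sketch of the extraction path described above. Module counts
    (α convolution blocks, β upsampling blocks) and channel sizes are assumptions."""
    def __init__(self, in_channels: int = 3, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(                       # stands in for the α 3D conv modules
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)  # β upsampling
        self.fc = nn.LazyLinear(feature_dim)             # fully connected layer -> 1-D feature vector

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, channels, depth, height, width), e.g. stacked view images or MRI slices
        high_dim = self.up(self.conv(volume))            # high-dimensional feature map
        return self.fc(high_dim.flatten(start_dim=1))    # first or second feature vector

features = VolumeFeatureExtractor()(torch.randn(1, 3, 8, 64, 64))   # -> shape (1, 128)
```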
- feature extraction is performed on the medical data to be evaluated for each modality to obtain multiple feature vectors, including:
- the numerical features are mapped into a two-dimensional matrix to obtain a third feature vector and a fourth feature vector respectively.
- the training process of the neural network model includes:
- if the initial feature vector meets the preset requirements, the initial neural network model is trained successfully, and the pre-trained neural network model is obtained;
- if the initial feature vector does not meet the preset requirements, the initial neural network model continues to be trained by adjusting its loss parameters until the loss parameters fit and reach the preset loss-parameter threshold, so as to obtain the pre-trained neural network model.
- during training of the multimodal fusion evaluation model, a cross-entropy loss function is used for parameter backpropagation and updating until the cross-entropy loss function fits.
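- A hedged sketch of such a training loop is given below; the optimizer, learning rate and loss threshold are illustrative assumptions rather than values from the patent.

```python
import torch
import torch.nn as nn

def train_until_fit(model: nn.Module, loader, epochs: int = 50, loss_threshold: float = 0.05):
    """Sketch of the training loop described above: cross-entropy loss with
    backpropagation until the loss fits below a preset threshold. The optimizer,
    learning rate and threshold value are assumptions, not taken from the patent."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        running = 0.0
        for features, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(features), labels)
            loss.backward()                               # parameter backpropagation
            optimizer.step()                              # parameter update
            running += loss.item()
        if running / max(len(loader), 1) < loss_threshold:  # loss has "fitted" below the preset threshold
            break
    return model
```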
- an embodiment of the present disclosure provides an evaluation device for multimodal medical data fusion, including:
- the medical data acquisition module is configured to acquire the medical data to be evaluated in multiple modalities of the target object
- the feature vector extraction module is configured to perform feature extraction on the medical data to be evaluated for each modality to obtain multiple feature vectors
- a feature vector fusion module configured to fuse the multiple feature vectors to obtain a fusion feature vector
- the multimodal fusion evaluation module is configured to input the fused feature vector into the pre-trained multimodal fusion evaluation model, so as to obtain the evaluation results of the multimodal medical data to be evaluated output by the pre-trained multimodal fusion evaluation model.
- an embodiment of the present disclosure provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete mutual communication through the communication bus;
- the processor is used to implement the above method steps when executing the program stored in the memory.
- the embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the above-mentioned method steps are implemented.
- An evaluation method, device, equipment, and storage medium for multimodal medical data fusion provided by embodiments of the present disclosure can achieve the following technical effects:
- the embodiments of the present disclosure perform feature extraction on multimodal medical data based on artificial intelligence to obtain multiple feature vectors, fuse the obtained feature vectors into a fused feature vector, and, based on the fused feature vector, use the trained multimodal fusion evaluation model to predict and evaluate the degree of remission of the target object. This can assist in accurately evaluating, at the pathological level, the degree of remission of the target object's disease after treatment, thereby improving the accuracy of judgment and reducing the target object's medical risk.
- FIG. 1 is a schematic flowchart of an evaluation method for multimodal medical data fusion provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of feature extraction and data evaluation of multimodal medical data provided by an embodiment of the present disclosure
- Fig. 3 is a schematic structural diagram of an evaluation device for multimodal medical data fusion provided by an embodiment of the present disclosure
- Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- An embodiment of the present disclosure provides an evaluation method for multimodal medical data fusion, as shown in FIG. 1 , including the following steps:
- S101 Acquire medical data to be evaluated in multiple modalities of a target object.
- S102 Perform feature extraction on the medical data to be evaluated for each modality to obtain multiple feature vectors.
- obtaining the medical data to be evaluated in multiple modalities of the target object includes at least three of the following methods:
- the rectal cancer image data set of the target object is acquired through an endoscope as the first modality data, wherein the rectal cancer image data set at least includes a macroscopic-view image (usually one image), a near-view image (usually one) and microscopic-view images (usually two) determined according to the tumor region or the regressed tumor region. A macroscopic-view image refers to a panoramic image of the area within a first preset distance interval from the tumor region or regressed tumor region, facing the center of the intestinal lumen; for example, a panoramic image taken at a distance of 0.8 mm-20 mm from the "tumor region" or "regressed tumor region" and facing the center of the intestinal lumen serves as the macroscopic-view image. A near-view image refers to an image in which the longest border of the tumor region or regressed tumor region is smaller than a preset proportion of the field-of-view border; for example, an image taken when the longest border of the "tumor region" or "regressed tumor region" is less than 10% of the field-of-view border.
- the rectal cancer magnetic resonance imaging data set of the target object as the second modality data
- the rectal cancer magnetic resonance imaging data set includes initial rectal cancer magnetic resonance imaging data and target rectal cancer magnetic resonance imaging data
- the tumor region or regressed tumor region in the initial rectal cancer magnetic resonance imaging data and the target rectal cancer magnetic resonance imaging data is marked by automatic or manual labeling, and several slice images containing the tumor region or regressed tumor region are obtained.
- the initial rectal cancer magnetic resonance image data may be data of the target object before receiving treatment
- the target rectal cancer magnetic resonance image data may be data of the target object after receiving treatment.
- the initial clinical data set and the target clinical data set of the target object are acquired as the third modality data, wherein the initial clinical data set and the target clinical data set include at least personal information and case information of the target object.
- the initial clinical data set may be the data of the target subject before receiving treatment
- the target clinical data set may be the data of the target subject after receiving treatment.
- the personal information of the target object may include but not limited to age, height, weight, etc.
- the case information of the target object may include, but is not limited to, family history of malignant tumor, personal history of tumor, treatment plan, tumor location, degree of tumor differentiation, T stage before treatment, N stage before treatment, depth of tumor invasion, distance from the tumor to the anal verge, etc.
- the initial tumor marker information, target tumor marker information, initial blood information, and target blood information of the target subject are acquired as fourth modality data.
- the initial tumor marker information and initial blood information may be the data of the target subject before receiving treatment
- the target tumor marker information and target blood information may be the data of the target subject after receiving treatment.
- initial tumor marker information and target tumor marker information may include, but are not limited to, carbohydrate antigen 125 (CA125), carbohydrate antigen 153 (CA153), carbohydrate antigen 199 (CA199), carcinoembryonic antigen (CEA) and alpha-fetoprotein (AFP) data; initial blood information and target blood information may include, but are not limited to, routine blood data such as red blood cells, hemoglobin, platelets, platelet volume, white blood cells, neutrophils, lymphocytes, monocytes, C-reactive protein, high-sensitivity C-reactive protein, total protein, albumin and prealbumin.
- feature extraction is performed on the medical data to be evaluated for each modality to obtain multiple feature vectors, including:
- convolution calculation and max-pooling operations are performed on the matrix-connected macroscopic-view, near-view and microscopic-view images, and a high-dimensional feature map is extracted;
- feature extraction is performed on the medical data to be evaluated for each modality to obtain multiple feature vectors, including:
- the training process of the first neural network model and the second neural network model includes:
- if the initial feature vector meets the preset requirements, the initial neural network model training is successful, and a pre-trained neural network model is obtained;
- if the initial feature vector does not meet the preset requirements, the initial neural network model continues to be trained by adjusting its loss parameters until the loss parameters fit and reach the preset loss-parameter threshold, and the pre-trained neural network model is obtained.
- the first neural network model and the second neural network model may use a three-dimensional convolutional network (3DCNN), which is not limited in this embodiment of the present disclosure.
- feature extraction is performed on the medical data to be evaluated for each modality to obtain multiple feature vectors, including:
- the numerical features are mapped into a two-dimensional matrix to obtain a third eigenvector and a fourth eigenvector, respectively.
- if the target object has no family history of malignant tumors, this is mapped to the number 0; if the target object has a family history of malignant tumors, it is mapped to the number 1. Similarly, other text description features are mapped into corresponding numerical features as follows:
- personal history of tumor (no 0, yes 1), recurrent tumor (yes 1, no 0), neoadjuvant chemotherapy (yes 1, no 0), neoadjuvant radiotherapy (yes 1, no 0), treatment plan (single-drug 1, double-drug 2, triple-drug 3), tumor location (upper rectum 1, middle rectum 2, lower rectum 3), degree of tumor differentiation (well differentiated 1, moderately differentiated 2, poorly differentiated 3), and tumor size (one third of the bowel circumference 0, two thirds of the bowel circumference 1, the full bowel circumference 2).
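- The mapping can be sketched as a simple lookup followed by reshaping into a two-dimensional matrix; the matrix shape and the key names below are illustrative assumptions, while the numeric values follow the description above.

```python
import numpy as np

# Mapping values taken from the description above; key names, padding and the
# fixed 3x3 matrix shape are illustrative assumptions.
MAPPINGS = {
    "family_history_malignant_tumor": {"no": 0, "yes": 1},
    "personal_history_of_tumor":      {"no": 0, "yes": 1},
    "recurrent_tumor":                {"no": 0, "yes": 1},
    "neoadjuvant_chemotherapy":       {"no": 0, "yes": 1},
    "neoadjuvant_radiotherapy":       {"no": 0, "yes": 1},
    "treatment_plan":  {"single-drug": 1, "double-drug": 2, "triple-drug": 3},
    "tumor_location":  {"upper rectum": 1, "middle rectum": 2, "lower rectum": 3},
    "differentiation": {"well": 1, "moderate": 2, "poor": 3},
    "size":            {"1/3 circumference": 0, "2/3 circumference": 1, "full circumference": 2},
}

def encode_clinical_record(record: dict, rows: int = 3, cols: int = 3) -> np.ndarray:
    """Map text description features to numbers and reshape them into a 2-D matrix."""
    values = [MAPPINGS[key][record[key]] for key in MAPPINGS if key in record]
    padded = np.zeros(rows * cols)
    padded[: len(values)] = values
    return padded.reshape(rows, cols)

matrix = encode_clinical_record({"family_history_malignant_tumor": "yes", "tumor_location": "lower rectum"})
```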
- inputting the fused feature vector into the pre-trained multimodal fusion evaluation model to obtain the evaluation results of the multimodal medical data to be evaluated output by the pre-trained multimodal fusion evaluation model includes:
- Each feature vector in the fused feature vector is spliced horizontally to obtain the first feature-vector matrix W(In), and a first function is used to positionally encode the first matrix W(In) to obtain the second feature-vector matrix W(P). In the formula used:
- t represents a sub-vector in the first matrix W(In) of the eigenvector
- p(t) represents the encoding result corresponding to the t value
- pos denotes the index of the feature vector to which the sub-vector t belongs
- i denotes the index position of the sub-vector t within the first feature-vector matrix W(In)
- d denotes the number of dimensions in the horizontal direction of the first feature-vector matrix W(In)
- W(M) = Concat(F(1), F(2), ..., F(i)) · W_0;
- the CONCAT function represents the second function
- F(1), F(2), ..., F(i) denote applying the formula F to the i-th feature sub-vector in the second feature-vector matrix W(P)
- W_0 denotes the transpose of the first feature-vector matrix W(In)
- the x in F(i) represents the i-th eigensubvector in the second matrix W(P) of the input eigenvector;
- Q, K, and V denote the linear perception layers over the parameter n of the hidden layer of the multimodal fusion evaluation model;
- Q(x) means linear regression on x;
- the cross-entropy loss function is used to carry out parameter backpropagation and update until the cross-entropy loss function is fitted.
- An embodiment of the present disclosure also provides an evaluation device for multimodal medical data fusion, as shown in FIG. 3 , including:
- the medical data acquisition module 301 is configured to acquire medical data to be evaluated in multiple modalities of the target object;
- the feature vector extraction module 302 is configured to perform feature extraction on the medical data to be evaluated for each modality to obtain multiple feature vectors;
- the feature vector fusion module 303 is configured to fuse a plurality of feature vectors to obtain a fusion feature vector
- the multimodal fusion evaluation module 304 is configured to input the fused feature vector into the pre-trained multimodal fusion evaluation model, so as to obtain the evaluation results of the multimodal medical data to be evaluated output by the pre-trained multimodal fusion evaluation model.
- An embodiment of the present disclosure also provides an electronic device, the structure of which is shown in FIG. 4 , including:
- the electronic device includes a processor 400 and a memory 401, and may also include a communication interface 402 and a communication bus 403, wherein the processor 400, the communication interface 402 and the memory 401 communicate with one another through the communication bus 403. The communication interface 402 may be used for information transfer.
- the processor 400 can invoke logic instructions in the memory 401 to execute the evaluation method for multimodal medical data fusion in the above-mentioned embodiments.
- the above logic instructions in the memory 401 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as an independent product.
- the memory 401 can be used to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
- the processor 400 executes the function application and data processing by running the program instructions/modules stored in the memory 401 , that is, realizes the evaluation method of multimodal medical data fusion in the above method embodiments.
- the memory 401 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal device, and the like.
- the memory 401 may include a high-speed random access memory, and may also include a non-volatile memory.
- An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions configured to execute the above-mentioned multimodal medical data fusion evaluation method.
- An embodiment of the present disclosure provides a computer program product, including a computer program stored on a computer-readable storage medium.
- the computer program includes program instructions.
- when the program instructions are executed by a computer, the computer is caused to execute the above-mentioned evaluation method for multimodal medical data fusion.
- the above-mentioned computer-readable storage medium may be a transitory computer-readable storage medium, or a non-transitory computer-readable storage medium.
- The evaluation method, apparatus, device, and storage medium for multimodal medical data fusion provided by the embodiments of the present disclosure use three-dimensional convolutional network (3DCNN) technology to fuse multi-view images: the macroscopic-view, near-view and microscopic-view endoscopic images of rectal cancer to be evaluated are fused for feature extraction.
- in addition to excellent performance, the artificial-intelligence-based multimodal fusion evaluation model proposed in this application has self-attention weights and can rely on its self-perception ability: even with missing data (at least three of the four modality data in the present invention must be input), it still performs relatively well and can quickly output accurate evaluation results, which is closer to clinical use scenarios. It can assist in accurately evaluating, at the pathological level, the degree of remission of the target object's disease after treatment, thereby improving the accuracy of judgment and reducing the target object's medical risk.
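- One simple way to realize this tolerance to a missing modality (an assumption; the patent only states that at least three of the four modalities must be supplied) is to replace the absent modality's feature vector with zeros before splicing, as sketched below; the modality names and vector size are hypothetical.

```python
import torch

def fuse_with_missing(vectors: dict, modality_order=("endoscopy", "mri", "clinical", "labs"), d: int = 128):
    """Illustrative handling of a missing modality: the absent modality is replaced by a
    zero vector so the self-attention input keeps its shape. This realization is an
    assumption and is not prescribed by the patent."""
    assert sum(name in vectors for name in modality_order) >= 3, "at least three modalities required"
    rows = [vectors.get(name, torch.zeros(d)) for name in modality_order]
    return torch.stack(rows)          # (4, d) matrix to splice into W(In)

w_in = fuse_with_missing({"endoscopy": torch.randn(128), "mri": torch.randn(128), "clinical": torch.randn(128)})
```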
- the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes at least one instruction to enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present disclosure.
- the aforementioned storage medium can be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, etc.
- a first element could be called a second element, and likewise a second element could be called a first element, without changing the meaning of the description, as long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently.
- the first element and the second element are both elements, but may not be the same element.
- the terms used in the present application are used to describe the embodiments only and are not used to limit the claims. As used in the examples and description of the claims, the singular forms "a”, “an” and “the” are intended to include the plural forms as well unless the context clearly indicates otherwise .
- the term “and/or” as used in this application is meant to include any and all possible combinations of one or more of the associated listed ones.
- the term "comprise" and its variants "comprises" and/or "comprising" refer to the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groupings thereof.
- an element defined by the statement "comprising a ..." does not preclude the presence of additional identical elements in the process, method or apparatus comprising the element.
- the disclosed methods and products can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of units may only be a logical function division.
- in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to implement this embodiment.
- each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- each block in a flowchart or block diagram may represent a module, program segment, or portion of code that includes at least one executable instruction for implementing a specified logical function .
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
Claims (10)
- An evaluation method for multimodal medical data fusion, comprising: acquiring medical data to be evaluated in multiple modalities of a target object; performing feature extraction on the medical data to be evaluated of each modality respectively to obtain a plurality of feature vectors; fusing the plurality of feature vectors to obtain a fused feature vector; and inputting the fused feature vector into a pre-trained multimodal fusion evaluation model to obtain an evaluation result, output by the pre-trained multimodal fusion evaluation model, of the medical data to be evaluated of the multiple modalities.
- The method according to claim 1, wherein inputting the fused feature vector into the pre-trained multimodal fusion evaluation model to obtain the evaluation result of the medical data to be evaluated of the multiple modalities output by the pre-trained multimodal fusion evaluation model comprises: splicing the individual feature vectors in the fused feature vector horizontally to obtain a first feature-vector matrix W(In), and positionally encoding the first feature-vector matrix W(In) with a first function to obtain a second feature-vector matrix W(P), using the following formula, where t denotes a sub-vector in the first feature-vector matrix W(In), p(t) denotes the encoding result corresponding to t, pos denotes the index of the feature vector to which t belongs, i denotes the index position of t within the first feature-vector matrix W(In), and d denotes the number of horizontal dimensions of the first feature-vector matrix W(In); inputting the second feature-vector matrix W(P) into a second function to compute a high-dimensional feature representation matrix W(M) on a subspace, using the formula W(M) = Concat(F(1), F(2), ..., F(i)) · W_0, where the CONCAT function denotes the second function, F(1), F(2), ..., F(i) denote applying the formula F to the i-th feature sub-vector of the second feature-vector matrix W(P), W_0 denotes the transpose of the first feature-vector matrix W(In), x in F(i) denotes the i-th feature sub-vector of the input second feature-vector matrix W(P), Q, K and V denote the linear perception layers over the parameter n of the hidden layer of the multimodal fusion evaluation model, and Q(x) denotes linear regression on x; and encoding the feature vectors of the individual images with the encoder of the multimodal fusion evaluation model, inputting the encoder output W(Out) into a linear regression layer, converting W(Out) into a low-dimensional feature representation matrix through the linear regression layer, and finally outputting the evaluation result through a softmax operation.
- The method according to claim 1, wherein acquiring the medical data to be evaluated in multiple modalities of the target object comprises at least three of the following: acquiring a rectal cancer image data set of the target object as first modality data, wherein the rectal cancer image data set comprises at least a macroscopic-view image, a near-view image and a microscopic-view image determined according to a tumor region or a regressed tumor region; acquiring a rectal cancer magnetic resonance imaging data set of the target object as second modality data, wherein the rectal cancer magnetic resonance imaging data set comprises initial rectal cancer magnetic resonance imaging data and target rectal cancer magnetic resonance imaging data, and the tumor region or regressed tumor region in the initial and target rectal cancer magnetic resonance imaging data is annotated respectively to obtain a number of slice images containing the tumor region or regressed tumor region; acquiring an initial clinical data set and a target clinical data set of the target object as third modality data, wherein the initial clinical data set and the target clinical data set comprise at least personal information and case information of the target object; and acquiring initial tumor marker information, target tumor marker information, initial blood information and target blood information of the target object as fourth modality data.
- The method according to claim 3, wherein performing feature extraction on the medical data to be evaluated of each modality respectively to obtain a plurality of feature vectors comprises: inputting the first modality data and the second modality data into respective pre-trained neural network models; matrix-connecting the medical images in the first modality data and the second modality data respectively through a hard-wired layer of the neural network model; performing convolution calculation and max-pooling operations on the matrix-connected medical images through α three-dimensional convolution modules of the neural network model to extract a high-dimensional feature map; and converting the high-dimensional feature map extracted by the last three-dimensional convolution kernel into a one-dimensional feature vector through β upsampling modules and a fully connected layer of the neural network model, to obtain a first feature vector and a second feature vector respectively.
- The method according to claim 3, wherein performing feature extraction on the medical data to be evaluated of each modality respectively to obtain a plurality of feature vectors comprises: mapping the text description features in the third modality data and the fourth modality data into corresponding numerical features; and mapping the numerical features into a two-dimensional matrix to obtain a third feature vector and a fourth feature vector respectively.
- The method according to claim 4, wherein the training process of the neural network model comprises: inputting acquired preset medical data to be evaluated as training samples into a corresponding initial neural network model, so that the initial neural network model outputs a corresponding initial feature vector; if the initial feature vector meets the preset requirements, the initial neural network model is trained successfully, and the pre-trained neural network model is obtained; and if the initial feature vector does not meet the preset requirements, continuing to train the initial neural network model by adjusting the loss parameters in the initial neural network model until the loss parameters fit and reach a preset loss-parameter threshold, to obtain the pre-trained neural network model.
- The method according to claim 1, wherein during the training of the multimodal fusion evaluation model, a cross-entropy loss function is used for parameter backpropagation and updating until the cross-entropy loss function fits.
- An evaluation device for multimodal medical data fusion, comprising: a medical data acquisition module configured to acquire medical data to be evaluated in multiple modalities of a target object; a feature vector extraction module configured to perform feature extraction on the medical data to be evaluated of each modality respectively to obtain a plurality of feature vectors; a feature vector fusion module configured to fuse the plurality of feature vectors to obtain a fused feature vector; and a multimodal fusion evaluation module configured to input the fused feature vector into a pre-trained multimodal fusion evaluation model to obtain the evaluation result, output by the pre-trained multimodal fusion evaluation model, of the medical data to be evaluated of the multiple modalities.
- An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the method steps of claim 1 when executing the program stored in the memory.
- A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method steps of claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22900329.8A EP4432219A1 (en) | 2021-12-02 | 2022-11-23 | Multi-modal medical data fusion evaluation method and apparatus, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111454543.7A CN113870259B (zh) | 2021-12-02 | 2021-12-02 | 多模态医学数据融合的评估方法、装置、设备及存储介质 |
CN202111454543.7 | 2021-12-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023098524A1 true WO2023098524A1 (zh) | 2023-06-08 |
Family
ID=78985439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/133614 WO2023098524A1 (zh) | 2021-12-02 | 2022-11-23 | 多模态医学数据融合的评估方法、装置、设备及存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4432219A1 (zh) |
CN (1) | CN113870259B (zh) |
WO (1) | WO2023098524A1 (zh) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113870259B (zh) * | 2021-12-02 | 2022-04-01 | 天津御锦人工智能医疗科技有限公司 | 多模态医学数据融合的评估方法、装置、设备及存储介质 |
CN114358662B (zh) * | 2022-03-17 | 2022-09-13 | 北京闪马智建科技有限公司 | 一种数据质量评估方法、装置、存储介质及电子装置 |
CN114678105B (zh) * | 2022-03-21 | 2023-10-17 | 南京圣德医疗科技有限公司 | 一种结合人工智能技术自动计算球囊参数的方法 |
CN114708971A (zh) * | 2022-04-20 | 2022-07-05 | 推想医疗科技股份有限公司 | 一种风险评估方法及装置、存储介质及电子设备 |
CN114898155B (zh) * | 2022-05-18 | 2024-05-28 | 平安科技(深圳)有限公司 | 车辆定损方法、装置、设备及存储介质 |
CN115068737B (zh) * | 2022-07-27 | 2022-11-11 | 深圳市第二人民医院(深圳市转化医学研究院) | 化疗输液药物剂量控制方法、装置、系统和存储介质 |
CN115130623B (zh) * | 2022-09-01 | 2022-11-25 | 浪潮通信信息系统有限公司 | 数据融合方法、装置、电子设备及存储介质 |
CN115564203B (zh) * | 2022-09-23 | 2023-04-25 | 杭州国辰智企科技有限公司 | 基于多维数据协同的设备实时性能评估系统及其方法 |
CN115762796A (zh) * | 2022-09-27 | 2023-03-07 | 京东方科技集团股份有限公司 | 目标模型的获取方法、预后评估值确定方法、装置、设备及介质 |
CN115579130B (zh) * | 2022-11-10 | 2023-03-14 | 中国中医科学院望京医院(中国中医科学院骨伤科研究所) | 一种患者肢体功能的评估方法、装置、设备及介质 |
WO2024108483A1 (zh) * | 2022-11-24 | 2024-05-30 | 中国科学院深圳先进技术研究院 | 多模态神经生物信号处理方法、装置、服务器及存储介质 |
CN115830017B (zh) * | 2023-02-09 | 2023-07-25 | 智慧眼科技股份有限公司 | 基于图文多模态融合的肿瘤检测系统、方法、设备及介质 |
CN116524248B (zh) * | 2023-04-17 | 2024-02-13 | 首都医科大学附属北京友谊医院 | 医学数据处理装置、方法及分类模型训练装置 |
CN116228206B (zh) * | 2023-04-25 | 2024-09-06 | 宏景科技股份有限公司 | 数据中心运维管理方法、装置、电子设备及运维管理系统 |
CN116313019B (zh) * | 2023-05-19 | 2023-08-11 | 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) | 一种基于人工智能的医疗护理数据处理方法及系统 |
CN117391847B (zh) * | 2023-12-08 | 2024-07-23 | 国任财产保险股份有限公司 | 一种基于多层多视图学习的用户风险评估方法及系统 |
CN117524501B (zh) * | 2024-01-04 | 2024-03-19 | 长春职业技术学院 | 基于特征挖掘的多模态医学数据分析系统及方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180367871A1 (en) * | 2017-06-14 | 2018-12-20 | GM Global Technology Operations LLC | Apparatus, method and system for multi-mode fusion processing of data of multiple different formats sensed from heterogeneous devices |
CN111079864A (zh) * | 2019-12-31 | 2020-04-28 | 杭州趣维科技有限公司 | 一种基于优化视频关键帧提取的短视频分类方法及系统 |
CN111931795A (zh) * | 2020-09-25 | 2020-11-13 | 湖南大学 | 基于子空间稀疏特征融合的多模态情感识别方法及系统 |
CN113657503A (zh) * | 2021-08-18 | 2021-11-16 | 上海交通大学 | 一种基于多模态数据融合的恶性肝肿瘤分类方法 |
CN113870259A (zh) * | 2021-12-02 | 2021-12-31 | 天津御锦人工智能医疗科技有限公司 | 多模态医学数据融合的评估方法、装置、设备及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533683B (zh) * | 2019-08-30 | 2022-04-29 | 东南大学 | 一种融合传统特征与深度特征的影像组学分析方法 |
CN110727824B (zh) * | 2019-10-11 | 2022-04-01 | 浙江大学 | 利用多重交互注意力机制解决视频中对象关系问答任务的方法 |
CN111145718B (zh) * | 2019-12-30 | 2022-06-07 | 中国科学院声学研究所 | 一种基于自注意力机制的中文普通话字音转换方法 |
CN113450294A (zh) * | 2021-06-07 | 2021-09-28 | 刘星宇 | 多模态医学图像配准融合方法、装置及电子设备 |
CN113486395B (zh) * | 2021-07-02 | 2024-07-23 | 南京大学 | 一种采用多元信息融合的科研数据匿名化方法及系统 |
- 2021-12-02: CN CN202111454543.7A → CN113870259B (zh), status: active
- 2022-11-23: WO PCT/CN2022/133614 → WO2023098524A1 (zh), status: application filing
- 2022-11-23: EP EP22900329.8A → EP4432219A1 (en), status: pending
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116504393A (zh) * | 2023-06-09 | 2023-07-28 | 山东大学 | 基于多模态数据的脊髓型颈椎病运动功能辅助评估系统 |
CN116504393B (zh) * | 2023-06-09 | 2024-04-26 | 山东大学 | 基于多模态数据的脊髓型颈椎病运动功能辅助评估系统 |
CN116630386B (zh) * | 2023-06-12 | 2024-02-20 | 新疆生产建设兵团医院 | Cta扫描图像处理方法及其系统 |
CN116630386A (zh) * | 2023-06-12 | 2023-08-22 | 新疆生产建设兵团医院 | Cta扫描图像处理方法及其系统 |
CN116452593B (zh) * | 2023-06-16 | 2023-09-05 | 武汉大学中南医院 | 血管性认知障碍的ai评估模型的构建方法、装置及系统 |
CN116452593A (zh) * | 2023-06-16 | 2023-07-18 | 武汉大学中南医院 | 血管性认知障碍的ai评估模型的构建方法、装置及系统 |
CN116883995A (zh) * | 2023-07-07 | 2023-10-13 | 广东食品药品职业学院 | 一种乳腺癌分子亚型的识别系统 |
CN117495693A (zh) * | 2023-10-24 | 2024-02-02 | 北京仁馨医疗科技有限公司 | 用于内窥镜的图像融合方法、系统、介质及电子设备 |
CN117495693B (zh) * | 2023-10-24 | 2024-06-04 | 北京仁馨医疗科技有限公司 | 用于内窥镜的图像融合方法、系统、介质及电子设备 |
CN117481672A (zh) * | 2023-10-25 | 2024-02-02 | 深圳医和家智慧医疗科技有限公司 | 一种乳腺组织硬化初期快速筛查智能方法 |
CN117370933A (zh) * | 2023-10-31 | 2024-01-09 | 中国人民解放军总医院 | 多模态统一特征提取方法、装置、设备及介质 |
CN117370933B (zh) * | 2023-10-31 | 2024-05-07 | 中国人民解放军总医院 | 多模态统一特征提取方法、装置、设备及介质 |
CN117274316B (zh) * | 2023-10-31 | 2024-05-03 | 广东省水利水电科学研究院 | 一种河流表面流速的估计方法、装置、设备及存储介质 |
CN117422704A (zh) * | 2023-11-23 | 2024-01-19 | 南华大学附属第一医院 | 一种基于多模态数据的癌症预测方法、系统及设备 |
CN117350926B (zh) * | 2023-12-04 | 2024-02-13 | 北京航空航天大学合肥创新研究院 | 一种基于目标权重的多模态数据增强方法 |
CN117350926A (zh) * | 2023-12-04 | 2024-01-05 | 北京航空航天大学合肥创新研究院 | 一种基于目标权重的多模态数据增强方法 |
CN117558451A (zh) * | 2024-01-11 | 2024-02-13 | 广州中大医疗器械有限公司 | 一种基于大数据的神经损失程度评估方法 |
CN117649344B (zh) * | 2024-01-29 | 2024-05-14 | 之江实验室 | 磁共振脑影像超分辨率重建方法、装置、设备和存储介质 |
CN117649344A (zh) * | 2024-01-29 | 2024-03-05 | 之江实验室 | 磁共振脑影像超分辨率重建方法、装置、设备和存储介质 |
CN118006429A (zh) * | 2024-02-01 | 2024-05-10 | 中国计量科学研究院 | 一种血培养仪通用型全自动校准装置及方法 |
CN117807559A (zh) * | 2024-02-28 | 2024-04-02 | 卓世科技(海南)有限公司 | 一种多模态数据融合方法及系统 |
CN117829098A (zh) * | 2024-03-06 | 2024-04-05 | 天津创意星球网络科技股份有限公司 | 多模态作品评审方法、装置、介质和设备 |
CN117829098B (zh) * | 2024-03-06 | 2024-05-28 | 天津创意星球网络科技股份有限公司 | 多模态作品评审方法、装置、介质和设备 |
CN117894468A (zh) * | 2024-03-18 | 2024-04-16 | 天津市肿瘤医院(天津医科大学肿瘤医院) | 基于人工智能的乳腺癌复发风险预测系统 |
CN118280522A (zh) * | 2024-04-19 | 2024-07-02 | 徐州医科大学附属医院 | 多模态数据融合的个性化冠状动脉介入治疗规划系统 |
CN118090743A (zh) * | 2024-04-22 | 2024-05-28 | 山东浪潮数字商业科技有限公司 | 一种基于多模态图像识别技术的瓷质酒瓶质量检测系统 |
CN118333466A (zh) * | 2024-06-12 | 2024-07-12 | 山东理工职业学院 | 教学水平评估方法、装置、电子设备及存储介质 |
CN118471542A (zh) * | 2024-07-12 | 2024-08-09 | 杭州城市大脑技术与服务有限公司 | 一种基于大数据的医疗健康管理系统 |
Also Published As
Publication number | Publication date |
---|---|
CN113870259B (zh) | 2022-04-01 |
EP4432219A1 (en) | 2024-09-18 |
CN113870259A (zh) | 2021-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023098524A1 (zh) | 多模态医学数据融合的评估方法、装置、设备及存储介质 | |
Jiang et al. | MRI based radiomics approach with deep learning for prediction of vessel invasion in early-stage cervical cancer | |
Wu et al. | Prediction of molecular subtypes of breast cancer using BI-RADS features based on a “white box” machine learning approach in a multi-modal imaging setting | |
Christ et al. | SurvivalNet: Predicting patient survival from diffusion weighted magnetic resonance images using cascaded fully convolutional and 3D convolutional neural networks | |
KR20200080626A (ko) | 병변 진단에 대한 정보 제공 방법 및 이를 이용한 병변 진단에 대한 정보 제공용 디바이스 | |
CN112884759A (zh) | 一种乳腺癌腋窝淋巴结转移状态的检测方法及相关装置 | |
Jia et al. | Artificial intelligence-based medical image segmentation for 3D printing and naked eye 3D visualization | |
Khor et al. | Anatomically constrained and attention-guided deep feature fusion for joint segmentation and deformable medical image registration | |
Celik et al. | Forecasting the “T” Stage of Esophageal Cancer by Deep Learning Methods: A Pilot Study | |
Yang et al. | Diagnostic efficacy of ultrasound combined with magnetic resonance imaging in diagnosis of deep pelvic endometriosis under deep learning | |
Xue et al. | Region-of-interest aware 3D ResNet for classification of COVID-19 chest computerised tomography scans | |
Bao et al. | Automatic identification and segmentation of orbital blowout fractures based on artificial intelligence | |
CN115953781B (zh) | 基于热层析影像的乳腺人工智能分析系统及方法 | |
Chang et al. | Image segmentation in 3D brachytherapy using convolutional LSTM | |
CN117198511A (zh) | 一种基于深度学习的儿童后颅窝肿瘤诊断方法 | |
Mao et al. | Quantitative evaluation of myometrial infiltration depth ratio for early endometrial cancer based on deep learning | |
Poce et al. | Pancreas Segmentation in CT Images: State of the Art in Clinical Practice. | |
Singh et al. | Precision Kidney Disease Classification Using EfficientNet-B3 and CT Imaging | |
Ibrahim et al. | Liver Multi-class Tumour Segmentation and Detection Based on Hyperion Pre-trained Models. | |
AlZoubi et al. | Varicocele detection in ultrasound images using deep learning | |
Liu et al. | HSIL colposcopy image segmentation using improved U-Net | |
Hamdy et al. | Densely convolutional networks for breast cancer classification with multi-modal image fusion | |
Turk et al. | Renal segmentation using an improved U-Net3D model | |
Jia et al. | Multi-parametric MRIs based assessment of hepatocellular carcinoma differentiation with multi-scale ResNet | |
Dong et al. | Primary brain tumors Image segmentation based on 3D-UNET with deep supervision and 3D brain modeling |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22900329; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase | Ref document number: 2024532692; Country of ref document: JP; Kind code of ref document: A
WWE | Wipo information: entry into national phase | Ref document number: 2022900329; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2022900329; Country of ref document: EP; Effective date: 20240613
NENP | Non-entry into the national phase | Ref country code: DE
WWE | Wipo information: entry into national phase | Ref document number: 11202403636Y; Country of ref document: SG