CN115719329A - Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning - Google Patents
- Publication number
- CN115719329A (application number CN202211015198.1A)
- Authority
- CN
- China
- Prior art keywords
- score
- ultrasound
- doppler
- classifier
- blood flow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
A method and a system for fusing RA ultrasonic modal synovial scores based on deep learning. Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input to obtain Doppler features. End-to-end training is performed in a multi-task learning mode, training the feature extractor, an SH-score four-classifier, and a blood-flow-score four-classifier simultaneously. The grayscale and Doppler features are concatenated into one feature vector, which is input into the SH-score four-classifier to obtain the SH score; the Doppler features alone are input into the blood-flow-score four-classifier to obtain the blood flow score; the two scores are combined into a joint score. The original ultrasound image, the heat-map overlay image, and the prediction result are displayed. The method can evaluate the grayscale and Doppler ultrasound images of the joint synovium both separately and comprehensively, assists sonographers in diagnosis, and scores ultrasound images accurately.
Description
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method and a system for fusing RA ultrasonic modal synovial membrane scores based on deep learning, which are mainly used for obtaining tissue oxygen saturation (SO2) measured by photoacoustic imaging and clinical criteria scores, and for determining their potential utility in assessing RA disease activity.
Background
Rheumatoid arthritis (RA) is an inflammatory peripheral polyarthritis of unknown etiology whose typical pathological changes include synovitis and cartilage and bone erosion, which can cause joint deformity and joint destruction. If RA is untreated or inadequately treated, inflammation and joint destruction further lead to loss of normal motor function and inability to perform activities and tasks of daily living. RA may also present other systemic manifestations and carries other long-term health risks, including a higher incidence of cardiovascular disease and osteoporosis. The current standard of care for RA is treatment with disease-modifying antirheumatic drugs (DMARDs) under a treat-to-target strategy, with the regimen adjusted to each patient's disease activity. Accurate assessment of RA disease activity is therefore critical to reducing disease burden.
Ultrasound (US) examination is radiation-free, non-invasive, and low-cost, and has been widely used in clinical practice for the assessment of RA. Ultrasound imaging generally includes two main modes: grayscale ultrasound (GSUS) and Doppler ultrasound (color Doppler, CDUS, or power Doppler, PDUS). GSUS can show morphological changes of synovial hypertrophy (SH), while Doppler images detect the abundance of synovial blood flow. In 2017, the European League Against Rheumatism and the Outcome Measures in Rheumatology group (EULAR-OMERACT) published an expert-consensus-based scoring system for standard joint synovial ultrasound, the EOSS system. The system emphasizes the importance of analyzing both modality images (GSUS and Doppler ultrasound), specifies rules for semi-quantitative 0-3 scores for both modalities of the joint synovium, and proposes a composite 0-3 score to assess RA synovitis. This score provides a reference for standardized assessment of joint synovial ultrasound images and is widely used in subsequent studies.
Although the EOSS system has been established, ultrasound diagnosis depends on operator experience and therefore carries substantial subjectivity; intra- and inter-observer agreement in ultrasound image evaluation is generally low, so quantitative assessment of rheumatoid arthritis disease activity from ultrasound images still faces major obstacles. In addition, joint ultrasound is difficult: sonographers typically face a long learning curve and need many courses and practice sessions, so training sonographers with musculoskeletal ultrasound experience to learn the EOSS system and perform standardized arthritic synovium scoring often requires considerable time and cost.
Deep learning has shown great potential in a variety of medical imaging tasks, including breast cancer prediction, thyroid cancer diagnosis, lung cancer screening, cardiac function assessment, and musculoskeletal image analysis. Recently, Andersen et al. and Christensen et al. developed deep neural networks to predict the synovitis composite score from Doppler ultrasound images, and Wu et al. proposed analyzing GSUS images with deep learning to determine the composite score of the arthritic synovium. However, these deep learning methods have limitations, and clinical application of deep learning to assist ultrasound RA assessment has not been realized. First, previous studies did not integrate multi-modal data and analyzed ultrasound images of only one modality. In addition, the image datasets used were small, and the resulting accuracy was low. Finally, previous studies did not examine clinical effectiveness and benefit, and were not compared with or combined with clinician evaluations.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for fusing RA ultrasonic modal synovial membrane scores based on deep learning, which can evaluate the grayscale and Doppler ultrasound images of the joint synovium both separately and comprehensively, thereby assisting sonographers in diagnosis; it scores ultrasound images accurately and can greatly improve the scoring accuracy of inexperienced sonographers.
The technical scheme of the invention is as follows: the method for fusing RA ultrasonic modal synovial score based on deep learning comprises the following steps:
(1) Collecting an original ultrasonic image, and dividing the original ultrasonic image into a training data set, a prospective test data set and an external test data set according to sources;
(2) Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input into the feature extractor to obtain Doppler features;
(3) End-to-end training is performed in a multi-task learning mode, simultaneously training the convolutional neural network feature extractor, the SH-score four-classifier, and the blood-flow-score four-classifier (both classifiers are multilayer perceptrons);
(4) The gray scale features and the Doppler features are spliced into a feature vector, the feature vector is input into an SH score four classifier to obtain SH scores, the Doppler features are independently input into a blood flow score four classifier to obtain blood flow scores, and the SH scores and the blood flow scores are integrated to obtain combined scores;
(5) And displaying the original ultrasonic image, the heat map superposed image and the prediction result.
Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input to obtain Doppler features. End-to-end training is performed in a multi-task learning mode, training the feature extractor, the SH-score four-classifier, and the blood-flow-score four-classifier simultaneously. The grayscale and Doppler features are concatenated into a feature vector, which is input into the SH-score four-classifier to obtain the SH score; the Doppler features alone are input into the blood-flow-score four-classifier to obtain the blood flow score; the two are combined into a joint score. The original ultrasound image, the heat-map overlay image, and the prediction result are displayed. The method can thus evaluate the grayscale and Doppler ultrasound images of the joint synovium both separately and comprehensively, assists sonographers in diagnosis, scores ultrasound images accurately, and can greatly improve the scoring accuracy of inexperienced sonographers.
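To make the shared-extractor design concrete, the following PyTorch sketch shows one possible realization of the architecture described above; the layer sizes, three-channel inputs, and pooling choices are illustrative assumptions, not the patented implementation:

```python
import torch
import torch.nn as nn

class RatingNet(nn.Module):
    """Shared CNN extractor with an SH head and a blood-flow head (illustrative)."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # One convolutional feature extractor, shared by both modalities
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        flat = 32 * 4 * 4
        # SH four-classifier sees the concatenated grayscale + Doppler features
        self.sh_head = nn.Sequential(
            nn.Linear(2 * flat, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 4))
        # Blood-flow four-classifier sees the Doppler features alone
        self.flow_head = nn.Sequential(
            nn.Linear(flat, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 4))

    def forward(self, grayscale, doppler):
        g = self.extractor(grayscale).flatten(1)  # grayscale features
        d = self.extractor(doppler).flatten(1)    # Doppler features
        sh_logits = self.sh_head(torch.cat([g, d], dim=1))
        flow_logits = self.flow_head(d)
        return sh_logits, flow_logits
```

Each head outputs four values, one per semi-quantitative score 0-3; the argmax of each head gives the predicted score.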
Also provided is a system for fusing RA ultrasound modality synovial score based on deep learning, comprising:
an acquisition module configured to acquire an original ultrasound image, which is divided into a training data set, a prospective test data set, and an external test data set according to a source;
an extraction module configured so that grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor, the grayscale image being input into the feature extractor to obtain grayscale features and the Doppler image being input to obtain Doppler features;
the training module is configured to perform end-to-end training in a multi-task learning mode, and train a convolutional neural network feature extractor, an SH score four classifier and a blood flow score four classifier at the same time;
a scoring module configured to concatenate the grayscale features and the Doppler features into a feature vector, input the feature vector into the SH-score four-classifier to obtain the SH score, input the Doppler features alone into the blood-flow-score four-classifier to obtain the blood flow score, and combine the SH score and the blood flow score into a joint score;
a display module configured to display the original ultrasound image, the heat map overlay image, and the prediction result.
Drawings
Fig. 1 shows a display interface according to the invention.
Fig. 2 shows another display interface according to the invention.
Fig. 3 shows the accuracy of the invention on the test datasets. A: accuracy of 86.1% (95% CI = 82.5%-90.1%) on the prospective test dataset; B: accuracy of 85.0% (95% CI = 80.5%-89.1%) on the external test dataset.
Fig. 4 shows the performance comparison of the invention with sonographers, without and with software assistance. A: the accuracy of the doctors (R1-R10) and of their average increased with the aid of RATING. B-D: Youden index of the physicians' composite scores without and with the aid of RATING, for the binary splits 0 vs 1-3 (B), 0-1 vs 2-3 (C), and 0-2 vs 3 (D).
Fig. 5 shows a flow chart of a method for fusion of RA ultrasound modality synovial score based on deep learning according to the present invention.
Detailed Description
Aiming at the problems identified in the background regarding RA assessment and deep-learning-assisted ultrasound reading, a deep-learning-based system (RATING) is designed for scoring and fusing the two ultrasound modalities of the arthritic synovium. RATING is based on a convolutional neural network; after learning from labeled ultrasound images of the two joint-synovium modalities, it automatically evaluates the per-modality synovitis scores and the composite score, so as to assist sonographers in diagnosis. Clinical comparison tests show that it scores ultrasound images accurately and can greatly improve the scoring accuracy of inexperienced sonographers.
As shown in fig. 5, the method for fusing synovial score of RA ultrasound modality based on deep learning comprises the following steps:
(1) Acquiring an original ultrasonic image, and dividing it into a training data set, a prospective test data set, and an external test data set according to source. (The training dataset contained 752 ultrasound images of 104 patients from Peking Union Medical College Hospital (PUMCH). The prospective test dataset contained 274 ultrasound images of 28 patients from PUMCH. The external test dataset contained 293 ultrasound images of 42 patients from Shenzhen People's Hospital (SZPH).)
(2) Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input into the feature extractor to obtain Doppler features;
(3) Performing end-to-end training in a multi-task learning mode, and simultaneously training a convolutional neural network feature extractor, an SH scoring four-classifier and a blood flow scoring four-classifier;
(4) The gray scale features and the Doppler features are spliced into a feature vector, the feature vector is input into an SH score four classifier to obtain SH scores, the Doppler features are independently input into a blood flow score four classifier to obtain blood flow scores, and the SH scores and the blood flow scores are integrated to obtain combined scores;
(5) And displaying the original ultrasonic image, the heat map superposed image and the prediction result.
Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input to obtain Doppler features. End-to-end training is performed in a multi-task learning mode, training the feature extractor, the SH-score four-classifier, and the blood-flow-score four-classifier simultaneously. The grayscale and Doppler features are concatenated into a feature vector, which is input into the SH-score four-classifier to obtain the SH score; the Doppler features alone are input into the blood-flow-score four-classifier to obtain the blood flow score; the two are combined into a joint score. The original ultrasound image, the heat-map overlay image, and the prediction result are displayed. The method can thus evaluate the grayscale and Doppler ultrasound images of the joint synovium both separately and comprehensively, assists sonographers in diagnosis, scores ultrasound images accurately, and can greatly improve the scoring accuracy of inexperienced sonographers.
Preferably, in the step (3), each training sample consists of: a grayscale ultrasound image, a Doppler ultrasound image, an SH score, a blood flow score, and a joint score.
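The five-part training sample described above might be represented as follows; the field names and the consistency check are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class SynovitisSample:
    grayscale_image: Any   # GSUS image array
    doppler_image: Any     # CDUS/PDUS image array
    sh_score: int          # semi-quantitative SH score, 0-3
    flow_score: int        # semi-quantitative blood flow score, 0-3
    joint_score: int       # combined score label, 0-3

def is_consistent(s: SynovitisSample) -> bool:
    # In the preferred embodiment the joint score equals the
    # maximum of the SH score and the blood flow score.
    return s.joint_score == max(s.sh_score, s.flow_score)
```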
Preferably, the step (3) comprises the following substeps:
(3.1) inputting a gray scale ultrasonic image and a Doppler ultrasonic image, and respectively outputting 4 numerical values by an SH score four classifier and a blood flow score four classifier;
(3.2) calculating loss function values of the SH score four-classification and the blood flow score four-classification respectively by using a cross entropy loss function, adding the two loss function values to serve as a total loss function, and then optimizing the whole model by using a gradient descent method;
θi := θi − α·∂J(θ; x(j), y(j))/∂θi
where θ is the model parameter, α is the learning rate, ∂/∂θi denotes the gradient with respect to the ith parameter, J is the loss function, i is the parameter index, j is the sample index, x is the input image, and y is the correct category;
and (3.3) stopping training when the loss function value of the model reaches the lowest value in the training process.
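A minimal sketch of the joint objective in substeps (3.1)-(3.2): the two cross-entropy losses are summed into one total loss and optimized by gradient descent. The linear stand-in modules and dimensions below are assumptions for illustration, not the actual network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in modules (dimensions are illustrative assumptions)
extractor = nn.Linear(100, 32)   # plays the role of the shared CNN extractor
sh_head = nn.Linear(64, 4)       # SH four-classifier (grayscale + Doppler features)
flow_head = nn.Linear(32, 4)     # blood-flow four-classifier (Doppler features only)

params = (list(extractor.parameters()) + list(sh_head.parameters())
          + list(flow_head.parameters()))
opt = torch.optim.SGD(params, lr=0.01)   # θ ← θ − α·∇θ J, as in step (3.2)
ce = nn.CrossEntropyLoss()

gs, dp = torch.randn(8, 100), torch.randn(8, 100)  # grayscale / Doppler inputs
sh_y = torch.randint(0, 4, (8,))                   # SH score labels, 0-3
flow_y = torch.randint(0, 4, (8,))                 # blood flow score labels, 0-3

g, d = extractor(gs), extractor(dp)
# Total loss = SH cross-entropy + blood-flow cross-entropy
loss = ce(sh_head(torch.cat([g, d], dim=1)), sh_y) + ce(flow_head(d), flow_y)
opt.zero_grad()
loss.backward()
opt.step()
```

Because one backward pass flows through both heads and the shared extractor, all three components are trained simultaneously, as step (3) requires.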
Preferably, in the step (4), the grayscale feature and the Doppler feature are multidimensional tensors of shape (channel number, height, width); each is flattened into a one-dimensional vector, and the two are concatenated into a single vector.
Preferably, in the SH-score four-classifier of the step (4), if the ith of the four output values of the four neurons in the last layer is the largest, the classifier predicts the SH score to be i.
Preferably, in the step (4), the maximum value of the SH score and the blood flow score is taken as the combined score.
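The argmax prediction rule and the maximum-based combination described above can be sketched in plain Python (function names are illustrative):

```python
def predict_score(logits):
    """Return the index of the largest of the four classifier outputs (score 0-3)."""
    return max(range(len(logits)), key=lambda i: logits[i])

def combined_score(sh_score, flow_score):
    """The combined score is the larger of the SH score and the blood flow score."""
    return max(sh_score, flow_score)
```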
Preferably, in the step (5), a heat map is generated for each gray scale ultrasound image or doppler ultrasound image to highlight the important region for determining the synovial hypertrophy score.
Preferably, in step (5), the heat map of the grayscale ultrasound image highlights potential synovial hypertrophy areas and the heat map of the doppler ultrasound image highlights potential synovial hypertrophy areas and potential synovial hypervascularization areas.
Preferably, in the step (5), the heat map is colored yellow and is superimposed on the original ultrasound image.
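A minimal sketch of the yellow heat-map overlay of step (5), blending one RGB pixel toward yellow in proportion to the heat value; the alpha-blending scheme is an assumption, since the patent only states that the heat map is colored yellow and superimposed:

```python
def overlay_yellow(pixel, heat, alpha=0.5):
    """Blend a yellow highlight into one RGB pixel.

    pixel: (r, g, b) values in 0-255; heat: normalized activation in [0, 1];
    alpha: maximum blend strength.
    """
    yellow = (255, 255, 0)
    w = alpha * heat  # blend weight grows with the heat value
    return tuple(round((1 - w) * p + w * y) for p, y in zip(pixel, yellow))
```

Applying this per pixel leaves cold regions unchanged and tints hot regions (potential synovial hypertrophy or hypervascularization) yellow.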
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by program-instructed hardware; the program may be stored in a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disk, or memory card) and, when executed, performs the steps of the above method embodiments. Therefore, corresponding to the method of the present invention, the present invention also includes a system for fusing RA ultrasound modality synovial membrane scores based on deep learning, generally expressed as functional modules corresponding to the steps of the method. The system comprises:
an acquisition module configured to acquire an original ultrasound image, which is divided into a training data set, a prospective test data set, and an external test data set according to a source;
an extraction module configured so that grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor, the grayscale image being input into the feature extractor to obtain grayscale features and the Doppler image being input to obtain Doppler features;
the training module is configured to perform end-to-end training in a multi-task learning mode, and train a convolutional neural network feature extractor, an SH score four classifier and a blood flow score four classifier at the same time;
a scoring module configured to concatenate the grayscale features and the Doppler features into a feature vector, input the feature vector into the SH-score four-classifier to obtain the SH score, input the Doppler features alone into the blood-flow-score four-classifier to obtain the blood flow score, and combine the SH score and the blood flow score into a joint score;
a display module configured to display the original ultrasound image, the heat map overlay image, and the prediction result.
The following is a description of the technical effects of the present invention.
Prospective test dataset: on the three synovial hypertrophy score binary classification tasks, the AUCs on the prospective test dataset were 0.930 (95% CI = 0.919-0.941), 0.933 (95% CI = 0.930-0.936), and 0.979 (95% CI = 0.973-0.985), respectively. On the three blood flow score binary classification tasks, the AUCs were 0.986 (95% CI = 0.985-0.987), 0.990 (95% CI = 0.986-0.995), and 0.995 (95% CI = 0.991-0.998), respectively.
The accuracy of the composite score prediction for RATING was 86.1% (95% CI =82.5% -90.1%), and the linearly weighted kappa score was 0.853 (95% CI = 0.806-0.900). For joints with composite scores of 0 and 3, the accuracy score was higher than 90%.
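Accuracy confidence intervals like those above are commonly obtained by percentile bootstrap; a sketch follows (the resampling count and the bootstrap method itself are assumptions, since the patent does not state how its CIs were computed):

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, seed=0):
    """95% percentile-bootstrap CI for accuracy.

    correct: list of 0/1 per-image outcomes (1 = prediction matched the label).
    """
    rng = random.Random(seed)
    n = len(correct)
    # Resample the outcome list with replacement and recompute accuracy each time
    accs = sorted(sum(rng.choices(correct, k=n)) / n for _ in range(n_boot))
    return accs[int(0.025 * n_boot)], accs[int(0.975 * n_boot)]
```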
Assisting physicians: RATING was compared with 10 sonographers (with 4 to 15 years of ultrasound experience). The accuracy of the composite score obtained by RATING was significantly higher than that of each of the 10 physicians and of their average (P < 0.001). In the binary classification setting for all three composite-score splits, RATING achieved a significantly higher Youden index than the 10 physicians and their average (P < 0.001).
One week later, software-assisted reading was performed: the same physicians scored the same images with the assistance of RATING. Mean accuracy increased significantly from 41.4% (95% CI = 35.8%-47.2%) to 64.0% (95% CI = 58.7%-69.5%), and the composite score accuracy of all 10 physicians was significantly higher than in their independent assessment (P < 0.001). For the composite-score split 0 vs 1-3, the physicians' mean Youden index increased significantly from 0.226 to 0.520 (P < 0.001); for the split 0-1 vs 2-3, from 0.520 to 0.668 (P < 0.001); and for the split 0-2 vs 3, from 0.492 to 0.660 (P < 0.001).
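The Youden index used in the comparison above is sensitivity + specificity − 1; a sketch:

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic for a binary classification.

    tp/fn/tn/fp: true-positive, false-negative, true-negative,
    false-positive counts. J ranges from -1 to 1; 1 is a perfect test.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1
```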
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.
Claims (10)
1. A method for fusing RA ultrasonic modal synovial membrane scores based on deep learning, characterized in that it comprises the following steps:
(1) Collecting an original ultrasonic image, and dividing the original ultrasonic image into a training data set, a prospective test data set and an external test data set according to sources;
(2) Grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor: the grayscale image is input into the feature extractor to obtain grayscale features, and the Doppler image is input into the feature extractor to obtain Doppler features;
(3) Performing end-to-end training in a multi-task learning mode, and simultaneously training a convolutional neural network feature extractor, an SH scoring four-classifier and a blood flow scoring four-classifier;
(4) The gray scale features and the Doppler features are spliced into a feature vector, the feature vector is input into an SH score four classifier to obtain SH scores, the Doppler features are independently input into a blood flow score four classifier to obtain blood flow scores, and the SH scores and the blood flow scores are integrated to obtain combined scores;
(5) And displaying the original ultrasonic image, the heat map superposed image and the prediction result.
2. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 1, wherein: in the step (3), each training sample is: grayscale ultrasound images, doppler ultrasound images, SH scores, blood flow scores, joint scores.
3. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 2, wherein: the step (3) comprises the following sub-steps:
(3.1) inputting a gray scale ultrasonic image and a Doppler ultrasonic image, and respectively outputting 4 numerical values by an SH score four classifier and a blood flow score four classifier;
(3.2) calculating loss function values of the SH score four-classification and the blood flow score four-classification respectively by using a cross entropy loss function, adding the two loss function values to serve as a total loss function, and then optimizing the whole model by using a gradient descent method;
θi := θi − α·∂J(θ; x(j), y(j))/∂θi
where θ is the model parameter, α is the learning rate, ∂/∂θi denotes the gradient with respect to the ith parameter, J is the loss function, i is the parameter index, j is the sample index, x is the input image, and y is the correct category;
and (3.3) stopping training when the loss function value of the model reaches the lowest value in the training process.
4. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 3, wherein: in the step (4), the grayscale feature and the Doppler feature are multidimensional tensors of shape (channel number, height, width); each is flattened into a one-dimensional vector, and the two are concatenated into a single vector.
5. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 4, wherein: in the SH-score four-classifier of the step (4), if the ith of the four output values of the four neurons in the last layer is the largest, the classifier predicts the SH score to be i.
6. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 5, wherein: in the step (4), the maximum value of the SH score and the blood flow score is taken as a combined score.
7. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 6, wherein: in the step (5), a heat map is generated for each grayscale or Doppler ultrasound image to highlight the regions important for determining the synovial hypertrophy score.
8. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 7, wherein: in said step (5), the heat map of the grayscale ultrasound image highlights potential synovial hypertrophy areas, and the heat map of the doppler ultrasound image highlights potential synovial hypertrophy areas and potential synovial hypervascularization areas.
9. The method for fusion of RA ultrasound modality synovial score based on deep learning of claim 8, wherein: in step (5), the heat map is colored yellow and superimposed on the original ultrasound image.
10. A system for fusing RA ultrasonic modal synovial membrane scores based on deep learning, characterized in that it comprises:
an acquisition module configured to acquire an original ultrasound image, which is divided into a training data set, a prospective test data set, and an external test data set according to a source;
an extraction module configured so that grayscale ultrasound and Doppler ultrasound share one convolutional neural network feature extractor, the grayscale image being input into the feature extractor to obtain grayscale features and the Doppler image being input to obtain Doppler features;
the training module is configured to perform end-to-end training in a multi-task learning mode, and train a convolutional neural network feature extractor, an SH score four classifier and a blood flow score four classifier at the same time;
the grading module is configured to splice the gray scale features and the Doppler features into a feature vector, input the feature vector into an SH score four classifier to obtain SH scores, independently input the Doppler features into a blood flow score four classifier to obtain blood flow scores, and synthesize the SH scores and the blood flow scores to obtain combined scores;
a display module configured to display the original ultrasound image, the heat map overlay image, and the prediction result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211015198.1A CN115719329A (en) | 2022-08-23 | 2022-08-23 | Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211015198.1A CN115719329A (en) | 2022-08-23 | 2022-08-23 | Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115719329A true CN115719329A (en) | 2023-02-28 |
Family
ID=85253925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211015198.1A Pending CN115719329A (en) | 2022-08-23 | 2022-08-23 | Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115719329A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116630679A (en) * | 2023-03-29 | 2023-08-22 | 南通大学 | Osteoporosis identification method based on CT image and domain invariant feature |
CN116630679B (en) * | 2023-03-29 | 2024-06-04 | 南通大学 | Osteoporosis identification method based on CT image and domain invariant feature |
- 2022-08-23: application CN202211015198.1A filed in China; publication CN115719329A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100481096C (en) | Automated regional myocardial assessment method for cardiac imaging | |
EP3306500A1 (en) | Method for analysing medical treatment data based on deep learning, and intelligent analyser thereof | |
US20040193036A1 (en) | System and method for performing probabilistic classification and decision support using multidimensional medical image databases | |
CN110461240A (en) | System, method and computer accessible for ultrasonic analysis | |
Nurmaini et al. | Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation | |
JP2007527743A (en) | System and method for automatic diagnosis and decision support for heart related diseases and conditions | |
Herrick et al. | Quantitative nailfold capillaroscopy—update and possible next steps | |
CN112819818B (en) | Image recognition module training method and device | |
CN112508884A (en) | Comprehensive detection device and method for cancerous region | |
KR20210060923A (en) | Apparatus and method for medical image reading assistant providing representative image based on medical use artificial neural network | |
KR20220124665A (en) | Apparatus and method for medical image reading assistant providing user preferenced style based on medical use artificial neural network | |
JP4651271B2 (en) | Computer-aided patient diagnosis decision support system | |
CN116524248B (en) | Medical data processing device, method and classification model training device | |
CN116309346A (en) | Medical image detection method, device, equipment, storage medium and program product | |
Paunksnis et al. | The use of information technologies for diagnosis in ophthalmology | |
CN115719329A (en) | Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning | |
Karegowda et al. | Knowledge based fuzzy inference system for diagnosis of diffuse goiter | |
Reddy et al. | Enhanced Pre-Processing Based Cardiac Valve Block Detection Using Deep Learning Architectures | |
Parola et al. | Image-based screening of oral cancer via deep ensemble architecture | |
CN114708973B (en) | Device and storage medium for evaluating human health | |
US20230096522A1 (en) | Method and system for annotation of medical images | |
US20230274424A1 (en) | Appartus and method for quantifying lesion in biometric image | |
Quero et al. | Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers. 2022; 14: 3803 | |
Gunasekara et al. | A feasibility study for deep learning based automated brain tumor segmentation using magnetic resonance images | |
Begimov | EXTRACTING TAGGING FROM EXOCARDIOGRAPHIC IMAGES VIA MACHINE LEARNING ALGORITHMICS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||