CN113053523A - Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system - Google Patents
- Publication number
- CN113053523A (application number CN202110444366.8A / CN202110444366A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention relates to the technical field of ultrasonic breast mass identification, and in particular to a continuous self-learning multi-model fusion system for the accurate identification of ultrasonic breast masses. The system comprises a central data processing system, a mass pathological result feedback module, a continuous self-learning multi-model fusion classification module, a mass classification processing module, an image acquisition module, a clinical information characteristic acquisition module and a random forest classification model establishing module. The clinical information acquisition module transmits acquired information to the clinical information characteristic acquisition module through a gateway; the clinical information characteristic acquisition module processes the information and transmits it to the random forest classification model establishing module through the gateway; and the random forest classification model establishing module transmits the established model to the continuous self-learning multi-model fusion classification module through the gateway. The invention further provides a corresponding identification method, which realizes continuous self-learning and improves identification accuracy.
Description
Technical Field
The invention relates to the technical field of ultrasonic breast mass identification, and in particular to a continuous self-learning multi-model fusion system for the accurate identification of ultrasonic breast masses.
Background
Existing medical imaging AI technology cannot closely combine image diagnosis with a patient's clinical information, and therefore falls short of clinical application. With the development of artificial intelligence, some researchers have applied AI to the identification and diagnosis of breast masses in ultrasound images, mainly using images alone to distinguish benign from malignant masses; others have applied multi-modal fusion, in which information from several modalities is fused to build a data set for training a machine learning model. Multi-modal fusion, however, must integrate image features from multiple examinations, and those features are available only after all of the examinations are complete: if a patient has undergone only one examination, multi-modal fusion diagnosis of the mass is impossible. At the same time, a disease cannot be diagnosed from images alone; the patient's clinical information must also be combined. For example, after viewing the examination images, an imaging physician forms a diagnostic impression from the image findings, then consults the medical history and physical examination to obtain the patient's clinical information, and judges whether a mass is benign or malignant through comprehensive analysis. Diagnosis is thus a comprehensive process that must combine the patient's image findings with the patient's clinical manifestations and physical signs, and its accuracy bears directly on the patient's subsequent treatment. An ideal model should therefore provide both image diagnosis and the patient's clinical characteristics, but the prior art offers no good solution. In the prior art, classification and labeling of data sets are mainly completed by experienced physicians, yet even experienced physicians may misjudge whether a mass image is malignant, and prior-art models cannot continuously self-learn to improve their accuracy.
Disclosure of Invention
To solve the problems set forth in the background art above, the invention provides a continuous self-learning multi-model fusion system for the accurate identification of ultrasonic breast masses, which addresses the problems of misjudgment and the inability to self-learn continuously.
In order to achieve the purpose, the invention adopts the following technical scheme:
the system comprises a central data processing system, a mass pathological result feedback module, a continuous self-learning multi-model fusion classification module, a mass classification processing module, an image acquisition module, a clinical information characteristic acquisition module and a random forest classification model establishing module. The central data processing system comprises a computer cluster, and the computer cluster comprises an information receiving module, a data storage module, a data processing module, a data comparison module, a data integration module and a satellite communication and transmission module; through the satellite communication and transmission module, the central data processing system is in real-time communication with the mass pathological result feedback module, the image acquisition module and the clinical information acquisition module. The mass pathological result feedback module comprises a data acquisition module, a training module and a model establishing module; the training module and the model establishing module each comprise a mass benign and malignant image data set, a mass pathological type classification training module and a mass pathological disease classification training module. The mass pathological disease classification training module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; the continuous self-learning multi-model fusion classification module is connected with the mass classification processing module in real time through a gateway; the image acquisition module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; and the central data processing system is connected with the clinical information acquisition module in real time through a gateway. The clinical information acquisition module transmits acquired information to the clinical information characteristic acquisition module through the gateway; the clinical information characteristic acquisition module processes the information and transmits it to the random forest classification model establishing module through the gateway; and the random forest classification model establishing module transmits the established model to the continuous self-learning multi-model fusion classification module through the gateway.
Preferably, the mass pathological result feedback module acquires the patient's clinical information and mass characteristic information from the electronic medical record and medical imaging systems, and establishes data sets according to the pathological results and the BI-RADS classification; a benign and malignant image data set, a pathological type image data set and a pathological disease image data set are established according to the image pathological results.
Preferably, the information handled by the clinical information acquisition module is obtained by the physician during the ultrasound examination, and includes breast structure, breast echo, whether the breast ducts are dilated, breast nodules, the number of nodules, nodule boundaries, nodule margins, nodule shape, nodule position, nodule echo, posterior nodule echo, nodule orientation, nodule calcification, tissue around the nodule, nodule color flow distribution, axillary lymph node aspect ratio, boundary echo and color flow distribution, together with whether the patient has a family history of breast masses, whether there is breast discharge, mass hardness, surgical history and whether the breast is painful.
Preferably, the random forest classification model establishing module uses the open-source sklearn (scikit-learn) library: it imports the random forest classifier, establishes a random forest model with the classifier's fit method, and verifies the model's accuracy with its predict method.
Preferably, the training module uses the open-source ImageAI library on GitHub, which employs deep learning convolutional neural network algorithms and provides four architectures: SqueezeNet, ResNet, InceptionV3 and DenseNet. The training process generates a JSON file that maps the image data set to the object classes used by the trained models.
Preferably, the image acquisition module provides a window that displays the breast ultrasound image. When images need to be identified, the user clicks the AI-assisted identification button and selects one image or any number of images; once selection is complete, the module automatically submits the selected images to the continuous self-learning multi-model fusion classification module for classification, and the classification result is transmitted to the central data processing system.
Preferably, the continuous self-learning multi-model fusion classification module fuses the data in the mass pathological result feedback module into a continuous self-learning system implemented in Python, which runs automatically on a server.
A method for the accurate identification of breast masses using the continuous self-learning multi-model fusion ultrasound system comprises the following steps:
S1, the central data processing system acquires the patient's clinical information and mass characteristic information from the electronic medical record and medical imaging systems and transmits them to the mass pathological result feedback module, which establishes data sets according to the pathological results and the BI-RADS classification, namely a benign and malignant image data set, a pathological type image data set and a pathological disease image data set built from the image pathological results; at the same time, the information acquired during the ultrasound examination is transmitted to the clinical information characteristic acquisition module, which processes it and passes it to the random forest classification model establishing module for modeling;
S2, the continuous self-learning multi-model fusion classification module fuses the information received from the mass pathological result feedback module with the model output by the random forest classification model establishing module; a sustainable self-learning system built in Python runs automatically on a server, realizing both classification identification of masses and self-training;
S3, when identifying a patient's case, the central data processing system feeds the case images through the image acquisition module into the continuous self-learning multi-model fusion classification module; after classification, the results pass through the mass classification processing module into the central data processing system, where they are handled by the information receiving, data storage, data processing, data comparison and data integration modules, and the mass classification result is output through the satellite communication and transmission module;
S4, the central data processing system completes the identification of the patient's condition from the clinical information and the mass image classification result.
Compared with the prior art, the invention has the following beneficial effects: it combines the patient's clinical information with the image identification result, meeting the medical requirement that diagnosis must take account of the patient's clinical manifestations; it provides a self-learning function that automatically improves the model's differential diagnosis capability; it combines the strengths of machine learning image identification with the clinical characteristics gathered by physicians, improving the accuracy of mass identification and diagnosis; the corresponding technologies are built into a single ultrasound-image AI system that helps physicians improve their mass identification capability and, by automatically generating standard ultrasound reports, greatly reduces their workload; building the data sets from pathological feedback improves both data-set quality and the model's differential diagnosis precision; and the system raises physicians' diagnostic confidence and accuracy, offers AI-assisted diagnosis to physicians in primary hospitals, and continuously self-learns to keep improving its AI diagnostic capability.
Parts of the device that are not described in detail are identical to, or can be implemented with, the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a system block diagram of a continuous self-learning multi-model fusion ultrasound breast tumor precise identification system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the continuous self-learning multi-model fusion system for the accurate identification of ultrasonic breast masses comprises a central data processing system, a mass pathological result feedback module, a continuous self-learning multi-model fusion classification module, a mass classification processing module, an image acquisition module, a clinical information characteristic acquisition module and a random forest classification model establishing module. The central data processing system comprises a computer cluster, and the computer cluster comprises an information receiving module, a data storage module, a data processing module, a data comparison module, a data integration module and a satellite communication and transmission module; through the satellite communication and transmission module, the central data processing system is in real-time communication with the mass pathological result feedback module, the image acquisition module and the clinical information acquisition module. The mass pathological result feedback module comprises a data collection module, a training module and a model establishing module; the training module and the model establishing module each comprise a mass benign and malignant image data set, a mass pathological type classification training module and a mass pathological disease classification training module. The mass pathological disease classification training module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; the continuous self-learning multi-model fusion classification module is connected with the mass classification processing module in real time through a gateway; the image acquisition module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; and the central data processing system is connected with the clinical information acquisition module in real time through a gateway. The clinical information acquisition module transmits acquired information to the clinical information characteristic acquisition module through the gateway; the clinical information characteristic acquisition module processes the information and transmits it to the random forest classification model establishing module through the gateway; and the random forest classification model establishing module transmits the established model to the continuous self-learning multi-model fusion classification module through the gateway.
Specifically, the mass pathological result feedback module acquires the patient's clinical information and mass characteristic information from the electronic medical record and medical imaging systems, and establishes data sets according to the pathological results and the BI-RADS classification; a benign and malignant image data set, a pathological type image data set and a pathological disease image data set are established according to the image pathological results.
Specifically, the information handled by the clinical information acquisition module is obtained by the physician during the ultrasound examination, and includes breast structure, breast echo, whether the breast ducts are dilated, breast nodules, the number of nodules, nodule boundaries, nodule margins, nodule shape, nodule position, nodule echo, posterior nodule echo, nodule orientation, nodule calcification, tissue around the nodule, nodule color flow distribution, axillary lymph node aspect ratio, boundary echo and color flow distribution, together with whether the patient has a family history of breast masses, whether there is breast discharge, mass hardness, surgical history and whether the breast is painful.
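Before these findings can feed the random forest model, they must be turned into a numeric feature vector. The following Python sketch shows one way this could be done; the feature names, encodings and ordering are illustrative assumptions, not the patent's actual schema.

```python
# Sketch: encode the clinical findings listed above into a fixed-length
# numeric feature vector. All field names and category levels below are
# hypothetical stand-ins for the patent's clinical-information schema.
CATEGORICAL = {
    "nodule_shape": ["oval", "round", "irregular"],
    "nodule_orientation": ["parallel", "not_parallel"],
}
BINARY = ["duct_dilation", "calcification", "family_history", "breast_pain"]
NUMERIC = ["nodule_count", "axillary_node_aspect_ratio"]

def encode_clinical_record(record):
    """Map one patient's clinical-information dict to a flat feature list."""
    features = []
    for name in NUMERIC:                      # raw numeric measurements
        features.append(float(record.get(name, 0)))
    for name in BINARY:                       # yes/no findings -> 0.0/1.0
        features.append(1.0 if record.get(name) else 0.0)
    for name, levels in CATEGORICAL.items():  # one-hot encode categories
        value = record.get(name)
        features.extend(1.0 if value == level else 0.0 for level in levels)
    return features

record = {"nodule_count": 2, "duct_dilation": True, "nodule_shape": "irregular"}
vec = encode_clinical_record(record)
```

Missing fields default to zero, so partially recorded examinations still yield a vector of the same length, which a random forest handles gracefully.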
Specifically, the random forest classification model establishing module uses the open-source sklearn (scikit-learn) library: it imports the random forest classifier, establishes a random forest model with the classifier's fit method, and verifies the model's accuracy with its predict method.
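The fit/predict workflow described above can be sketched with scikit-learn as follows. The toy data stands in for the encoded clinical feature vectors and their BI-RADS-derived labels; it is not the patent's data set.

```python
# Minimal scikit-learn sketch of the workflow: build a RandomForestClassifier
# with fit, then verify accuracy on held-out data. The data here is synthetic.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for encoded clinical feature vectors and binary labels.
X = [[i, i % 3, (i * 7) % 5] for i in range(60)]
y = [0 if i < 30 else 1 for i in range(60)]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)             # establish the random forest model
accuracy = model.score(X_test, y_test)  # predict + compare, as one step
pred = model.predict(X_test[:1])        # per-sample prediction
```

In practice, accuracy would be verified against pathology-confirmed labels rather than a random split of the training data alone.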
Specifically, the training module uses the open-source ImageAI library on GitHub, which employs deep learning convolutional neural network algorithms and provides four architectures: SqueezeNet, ResNet, InceptionV3 and DenseNet. The training process generates a JSON file that maps the image data set to the object classes used by the trained models.
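The JSON class-mapping file mentioned above is simply a map from output-class index to image-folder name. The file name and exact layout vary by ImageAI version, so treat the following as a sketch of the idea rather than the library's exact format.

```python
# Sketch of the training-time JSON mapping between class indices and the
# image data set's folder names, and how it is used at inference time.
# "model_class.json" and the layout below are assumptions, not a spec.
import json
import os
import tempfile

class_mapping = {"0": "benign", "1": "malignant"}  # index -> dataset folder

path = os.path.join(tempfile.gettempdir(), "model_class.json")
with open(path, "w") as f:
    json.dump(class_mapping, f)        # written once, during training

with open(path) as f:                  # loaded at inference time
    labels = json.load(f)

predicted_index = 1                    # raw index from the CNN's output layer
label = labels[str(predicted_index)]   # -> "malignant"
```

Keeping this mapping on disk lets any of the four CNN architectures share one label vocabulary across retraining runs.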
Specifically, the image acquisition module provides a window that displays the breast ultrasound image. When images need to be identified, the user clicks the AI-assisted identification button and selects one image or any number of images; once selection is complete, the module automatically submits the selected images to the continuous self-learning multi-model fusion classification module for classification, and the classification result is transmitted to the central data processing system.
Specifically, the continuous self-learning multi-model fusion classification module fuses the data in the mass pathological result feedback module into a continuous self-learning system implemented in Python, which runs automatically on a server.
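The continuous self-learning behavior can be sketched as a loop: pathology-confirmed cases accumulate in the data set, and the model is retrained whenever enough new cases arrive. The class below is an illustrative skeleton, not the patent's implementation; the trainer callback and data representation are assumptions.

```python
# Hedged sketch of the continuous self-learning loop: pathology feedback
# appends confirmed cases, and retraining is triggered when the data set
# has grown since the last training run.
class ContinuousLearner:
    def __init__(self, train_fn):
        self.dataset = []          # (features, pathology_label) pairs
        self.train_fn = train_fn   # any "fit(dataset) -> model" callable
        self.model = None
        self.trained_on = 0        # data-set size at the last training run

    def add_feedback(self, features, pathology_label):
        """Pathology-result feedback appends one confirmed case."""
        self.dataset.append((features, pathology_label))

    def maybe_retrain(self, min_new_cases=1):
        """Retrain only when enough new confirmed cases have accumulated."""
        if len(self.dataset) - self.trained_on >= min_new_cases:
            self.model = self.train_fn(self.dataset)
            self.trained_on = len(self.dataset)
            return True
        return False

# A stand-in trainer that just records the data-set size.
learner = ContinuousLearner(train_fn=lambda data: {"n_cases": len(data)})
learner.add_feedback([1.0, 0.0], "benign")
retrained = learner.maybe_retrain()
```

On a server this loop would run on a schedule, with `train_fn` delegating to the ImageAI and sklearn training steps described earlier.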
A method for the accurate identification of breast masses using the continuous self-learning multi-model fusion ultrasound system comprises the following steps:
S1, the central data processing system acquires the patient's clinical information and mass characteristic information from the electronic medical record and medical imaging systems and transmits them to the mass pathological result feedback module, which establishes data sets according to the pathological results and the BI-RADS classification, namely a benign and malignant image data set, a pathological type image data set and a pathological disease image data set built from the image pathological results; at the same time, the information acquired during the ultrasound examination is transmitted to the clinical information characteristic acquisition module, which processes it and passes it to the random forest classification model establishing module for modeling;
S2, the continuous self-learning multi-model fusion classification module fuses the information received from the mass pathological result feedback module with the model output by the random forest classification model establishing module; a sustainable self-learning system built in Python runs automatically on a server, realizing both classification identification of masses and self-training;
S3, when identifying a patient's case, the central data processing system feeds the case images through the image acquisition module into the continuous self-learning multi-model fusion classification module; after classification, the results pass through the mass classification processing module into the central data processing system, where they are handled by the information receiving, data storage, data processing, data comparison and data integration modules, and the mass classification result is output through the satellite communication and transmission module;
S4, the central data processing system completes the identification of the patient's condition from the clinical information and the mass image classification result.
The use principle and workflow of the invention are as follows. The mass pathological result feedback module acquires the pathological results of masses; after the physician classifies the pathological results, the system automatically establishes the image classification data sets, namely the benign and malignant image classification data set, the pathological type classification data set and the pathological disease classification data set. The clinical information characteristic acquisition module collects the patient's relevant clinical characteristic information and builds a clinical information data set, and a random forest algorithm is used to construct a clinical-information classification model that performs BI-RADS classification of breast masses and yields a mass classification result. The image acquisition module displays the mass image; the physician submits the image to be identified to the system, which first classifies the mass as benign or malignant through the benign and malignant classification module, then obtains image classification results through the pathological type and pathological disease classification modules, and fuses these results with the patient's clinical information to build a multi-model fusion information data set. Finally, the random forest classification model performs BI-RADS classification on the fused data to obtain the multi-model fusion classification result, and the continuous self-learning multi-model fusion classification module automatically retrains whenever the data set changes, realizing continuous self-learning of the model.
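The fusion step in this workflow, combining the image models' outputs with the clinical features before the final BI-RADS classification, can be sketched as follows. The stand-in forest and the probability-to-category mapping are illustrative assumptions; the real system would use the trained random forest model.

```python
# Sketch of the multi-model fusion step: image-model outputs are
# concatenated with clinical features into one fused vector, which a
# (stand-in) random forest maps to a BI-RADS category.
def fuse_features(image_probs, clinical_features):
    """Concatenate image-model probabilities with clinical features."""
    return list(image_probs) + list(clinical_features)

def classify_bi_rads(fused, forest_predict):
    """Apply the classifier callable to the fused vector."""
    return forest_predict(fused)

# Hypothetical stand-in for the trained forest: thresholds the malignancy
# probability (index 1 of the image-model output).
stand_in_forest = lambda v: "BI-RADS 4" if v[1] >= 0.5 else "BI-RADS 3"

image_probs = (0.2, 0.8)     # benign vs malignant, from the image models
clinical = [2.0, 1.0, 0.0]   # encoded clinical findings
category = classify_bi_rads(fuse_features(image_probs, clinical),
                            stand_in_forest)
```

Concatenation keeps the two information sources independent until the final classifier, so a case with only one available modality degrades gracefully rather than failing outright.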
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A continuous self-learning multi-model fusion ultrasonic breast mass accurate identification system, comprising a central data processing system, a mass pathological result feedback module, a continuous self-learning multi-model fusion classification module, a mass classification processing module, an image acquisition module, a clinical information characteristic acquisition module and a random forest classification model establishing module, characterized in that: the central data processing system comprises a computer cluster; the computer cluster comprises an information receiving module, a data storage module, a data processing module, a data comparison module, a data integration module and a satellite communication and transmission module; the central data processing system is in real-time communication connection with the mass pathological result feedback module, the image acquisition module and the clinical information acquisition module through the satellite communication and transmission module; the mass pathological result feedback module comprises a data collection module, a training module and a model establishing module; the training module and the model establishing module each comprise a mass benign and malignant image data set, a mass pathological type classification training module and a mass pathological disease classification training module; the mass pathological disease classification training module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; the continuous self-learning multi-model fusion classification module is connected with the mass classification processing module in real time through a gateway; the image acquisition module is connected with the continuous self-learning multi-model fusion classification module in real time through a gateway; the central data processing system is connected with the clinical information acquisition module in real time through a gateway; the clinical information acquisition module transmits acquired information to the clinical information characteristic acquisition module through the gateway; the clinical information characteristic acquisition module processes the information and transmits it to the random forest classification model establishing module through the gateway; and the random forest classification model establishing module transmits the established model to the continuous self-learning multi-model fusion classification module through the gateway.
2. The system of claim 1, wherein the tumor pathological result feedback module is configured to establish a data set by obtaining the clinical information and mass characteristic information of the patient from an electronic medical record and a medical imaging system and classifying the pathological results according to BI-RADS classification, and to establish a benign and malignant image data set and pathological type and pathological disease image data sets according to the pathological result of the image.
3. The system of claim 1, wherein the clinical information acquisition module collects information obtained by a physician during the ultrasound examination, the information including breast structure, breast echo, whether the breast ducts are dilated, breast nodules, nodule number, nodule boundary, nodule edge, nodule shape, nodule position, nodule echo, nodule posterior echo, nodule orientation, nodule calcification, tissue around the nodule, nodule color flow distribution, axillary lymph node aspect ratio, boundary echo, and color flow distribution, as well as the patient's family history of breast nodules, nipple discharge, mass hardness, surgical history, and breast pain.
4. The system for accurately identifying breast masses through continuous self-learning multi-model fusion ultrasound according to claim 1, wherein the random forest classification model building module adopts the open-source sklearn (scikit-learn) library, imports the random forest classifier module, builds a random forest model with the classifier's fit method, and verifies the accuracy of the model with its predict method.
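A minimal sketch of the random forest step described in claim 4, using scikit-learn's `RandomForestClassifier` with the `fit` and `predict` methods the claim names. The feature vectors and benign/malignant labels below are synthetic placeholders, not the patent's actual clinical data.

```python
# Sketch of the random forest classification model building module (claim 4).
# Each feature vector stands in for encoded ultrasound findings, e.g.
# [nodule count, boundary irregularity score, aspect ratio, calcification flag];
# labels: 0 = benign, 1 = malignant. All data here is synthetic.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = [[1, 0.2, 0.6, 0], [2, 0.8, 1.4, 1], [1, 0.1, 0.5, 0], [3, 0.9, 1.6, 1],
     [1, 0.3, 0.7, 0], [2, 0.7, 1.3, 1], [1, 0.2, 0.4, 0], [2, 0.9, 1.5, 1]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)             # build the model (fit method)
predictions = model.predict(X_test)     # classify held-out cases (predict method)
accuracy = model.score(X_test, y_test)  # verify accuracy on the held-out split
```

In the patented system the held-out verification would be replaced by new pathology-confirmed cases arriving through the feedback module.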
5. The system of claim 1, wherein the training module employs the open-source ImageAI library on GitHub, which uses deep-learning convolutional neural network algorithms and provides four algorithms: SqueezeNet, ResNet, InceptionV3 and DenseNet; the training process generates a JSON file that maps the image data sets to object types across the multiple models.
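For illustration, a sketch of the kind of class-mapping JSON file that claim 5's training process produces. The file name `model_class.json` and the string-index-to-label layout follow ImageAI's convention as an assumption, and the category labels are placeholders.

```python
# Hypothetical class-mapping file written at the end of a training run:
# string class indices mapped to the pathological categories being trained.
import json
import os
import tempfile

class_mapping = {"0": "benign", "1": "malignant"}

path = os.path.join(tempfile.mkdtemp(), "model_class.json")
with open(path, "w") as f:
    json.dump(class_mapping, f)

# A classification module can later reload the mapping to decode model outputs.
with open(path) as f:
    loaded = json.load(f)
predicted_index = 1           # e.g. argmax of a CNN's output vector
label = loaded[str(predicted_index)]
```

Because each of the fused models (benign/malignant, pathological type, pathological disease) has its own category set, one such mapping file per model keeps the numeric outputs interpretable.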
6. The system for accurately identifying the breast tumor through continuous self-learning multi-model fusion ultrasound according to claim 1, wherein the image acquisition module has a window capable of displaying breast ultrasound images; when an image needs to be identified, the user selects one image or any number of images and clicks the "AI-assisted identification" button; after the selection is completed, the module automatically submits the selected images to the continuous self-learning multi-model fusion classification module for image classification, and the classification result is transmitted to the central data processing system.
7. The system for accurately identifying the breast tumor through continuous self-learning multi-model fusion ultrasound according to claim 1, wherein the continuous self-learning multi-model fusion classification module fuses the data in the tumor pathological result feedback module, and a continuous self-learning system is built through Python programming and runs automatically on a server.
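Claim 7 does not specify the fusion rule by which the models' outputs are combined. One plausible sketch, assuming simple averaging of per-class probabilities across the image classifiers and the random forest, is:

```python
# Hypothetical fusion step: average the per-class probabilities produced by
# several independently trained classifiers. The patent leaves the fusion
# rule unspecified; averaging is one common choice.
def fuse_predictions(model_outputs):
    """model_outputs: list of dicts mapping class label -> probability."""
    labels = model_outputs[0].keys()
    fused = {lbl: sum(out[lbl] for out in model_outputs) / len(model_outputs)
             for lbl in labels}
    return max(fused, key=fused.get), fused

# Example outputs, e.g. from SqueezeNet, ResNet, InceptionV3 and the
# random forest model (values are illustrative):
outputs = [
    {"benign": 0.30, "malignant": 0.70},
    {"benign": 0.20, "malignant": 0.80},
    {"benign": 0.40, "malignant": 0.60},
    {"benign": 0.25, "malignant": 0.75},
]
decision, fused_probs = fuse_predictions(outputs)  # decision -> "malignant"
```

Weighted averaging or majority voting would drop in the same place; the choice of rule is independent of the rest of the pipeline.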
8. A method for accurately identifying breast masses using the continuous self-learning multi-model fusion ultrasound system according to any one of claims 1-7, comprising:
S1, the central data processing system acquires the clinical information and mass characteristic information of a patient from an electronic medical record and a medical imaging system and transmits them to the tumor pathological result feedback module, which establishes a data set according to the pathological result and BI-RADS classification and establishes a benign and malignant image data set and pathological type and pathological disease image data sets according to the image pathological result; at the same time, the information acquired during the ultrasound examination is transmitted to the clinical information characteristic acquisition module, which processes it and transmits it to the random forest classification model building module for modeling;
S2, the continuous self-learning multi-model fusion classification module fuses the information received from the tumor pathological result feedback module with the information output by the random forest classification model building module; a sustainable self-learning system built through Python programming runs automatically on a server, realizing classification identification and self-training learning of masses;
S3, when identifying a patient, the central data processing system inputs the case images to the continuous self-learning multi-model fusion classification module through the image acquisition module; after classification, the mass classification processing module returns the result to the central data processing system, where it is processed by the information receiving module, data storage module, data processing module, data comparison module and data integration module, and the mass classification result is output through the satellite communication and transmission module;
S4, the central data processing system completes the identification of the patient's condition according to the clinical information and the mass classification result.
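The claims do not detail what triggers retraining in the continuous self-learning behaviour of S2. A minimal sketch, assuming retraining fires whenever a batch of newly pathology-confirmed cases has accumulated from the feedback module, might look like:

```python
# Hypothetical continuous self-learning loop: pathology-confirmed results
# accumulate from the feedback module, and the models are retrained once a
# batch threshold is reached. The batch size and retrain hook are assumptions.
class SelfLearningLoop:
    def __init__(self, retrain_fn, batch_size=4):
        self.retrain_fn = retrain_fn   # e.g. re-fit the random forest / CNNs
        self.batch_size = batch_size
        self.pending = []              # (features, confirmed_label) pairs
        self.retrain_count = 0

    def add_feedback(self, features, confirmed_label):
        self.pending.append((features, confirmed_label))
        if len(self.pending) >= self.batch_size:
            # Hand a copy of the batch to the retraining hook, then reset.
            self.retrain_fn(list(self.pending))
            self.pending.clear()
            self.retrain_count += 1

trained_batches = []
loop = SelfLearningLoop(retrain_fn=trained_batches.append, batch_size=2)
loop.add_feedback([1, 0.2], "benign")
loop.add_feedback([2, 0.9], "malignant")  # second sample triggers a retrain
```

Running such a loop as a server-side daemon matches the claim that the self-learning system "automatically runs on a server", with the retrain hook delegating to the module-specific training code.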
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110444366.8A CN113053523A (en) | 2021-04-23 | 2021-04-23 | Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113053523A true CN113053523A (en) | 2021-06-29 |
Family ID: 76520213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110444366.8A Pending CN113053523A (en) | 2021-04-23 | 2021-04-23 | Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113053523A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146848A (en) * | 2018-07-23 | 2019-01-04 | 东北大学 | A kind of area of computer aided frame of reference and method merging multi-modal galactophore image |
US10420535B1 (en) * | 2018-03-23 | 2019-09-24 | China Medical University Hospital | Assisted detection model of breast tumor, assisted detection system thereof, and method for assisted detecting breast tumor |
CN110728674A (en) * | 2019-10-21 | 2020-01-24 | 清华大学 | Image processing method and device, electronic equipment and computer readable storage medium |
CN110838110A (en) * | 2019-11-05 | 2020-02-25 | 张峰 | System for identifying benign and malignant tumor based on ultrasonic imaging |
CN111584046A (en) * | 2020-05-15 | 2020-08-25 | 周凌霄 | AI (Artificial intelligence) processing method for medical image data |
AU2020101581A4 (en) * | 2020-07-31 | 2020-09-17 | Ampavathi, Anusha MS | Lymph node metastases detection from ct images using deep learning |
CN111681210A (en) * | 2020-05-16 | 2020-09-18 | 浙江德尚韵兴医疗科技有限公司 | Method for identifying benign and malignant breast nodules by shear wave elastogram based on deep learning |
CN111768366A (en) * | 2020-05-20 | 2020-10-13 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging system, BI-RADS classification method and model training method |
CN111915596A (en) * | 2020-08-07 | 2020-11-10 | 杭州深睿博联科技有限公司 | Method and device for predicting benign and malignant pulmonary nodules |
CN111933279A (en) * | 2020-09-14 | 2020-11-13 | 江苏瑞康成医疗科技有限公司 | Intelligent disease diagnosis and treatment system |
CN112086197A (en) * | 2020-09-04 | 2020-12-15 | 厦门大学附属翔安医院 | Mammary nodule detection method and system based on ultrasonic medicine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||