WO2019164064A1 - Medical image reading system through generation of refined artificial intelligence reinforcement learning data, and method therefor

Medical image reading system through generation of refined artificial intelligence reinforcement learning data, and method therefor

Info

Publication number
WO2019164064A1
WO2019164064A1 (PCT/KR2018/005641, KR2018005641W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
medical image
feature
medical
Prior art date
Application number
PCT/KR2018/005641
Other languages
English (en)
Korean (ko)
Inventor
이병일
김성현
Original Assignee
(주)헬스허브
Priority date
Filing date
Publication date
Application filed by (주)헬스허브
Publication of WO2019164064A1

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • The present invention relates to a medical image reading system and method based on the generation of refined artificial intelligence reinforcement learning data. More particularly, it extracts reinforcement learning data for medical image reading from the readings written by medical image reading experts and uses that data as artificial intelligence learning data.
  • Accordingly, the present invention relates to a medical image reading system and method that generate refined artificial intelligence learning data, thereby reducing the computational cost and complexity of artificial-intelligence medical image reading while improving its accuracy.
  • A doctor's competence is often judged by the ability to read lesions in medical images without missing them. Accurate interpretation of medical images is of paramount importance because early detection and early diagnosis of lesions greatly increase the likelihood of treating and curing the disease.
  • CAD: computer-aided diagnosis
  • The image analysis technology used here has evolved from pattern recognition based on image processing toward lesion prediction through machine learning: features are extracted from the images, the images are vectorized with the extracted features, and various machine-learning classification techniques are then applied. In recent years, deep learning has become the mainstream approach.
  • CNNs: convolutional neural networks
  • The present invention provides a medical image reading system and method that improve the performance of a conventional CNN by applying refined learning data to it.
  • Korean Patent Publication No. 10-2017-0140757 (December 21, 2017) relates to a clinical decision support ensemble system and a clinical decision support method using the same, in which clinical predictions for a patient are produced by machine learning from data received from a plurality of external medical institutions.
  • Korean Patent Publication No. 10-2015-0108701 (September 30, 2015) relates to a system and method for visualizing anatomical elements in a medical image, which verify the anatomical elements included in the medical image by using anatomical context information, automatically classify them, and visualize the classified anatomical elements in a user-friendly way.
  • Korean Patent Publication No. 10-2015-0098119 (August 27, 2015) relates to a system and method for removing false-positive lesion candidates in a medical image, which verify the lesion candidates detected in the medical image using anatomical context information and remove the false-positive candidates.
  • The prior art thus integrates clinical prediction results to perform ensemble prediction, uses anatomical context information in medical images, and removes false-positive lesion candidates; however, it does not suggest refining learning data for deep learning as in the present invention, nor reducing the computational cost and complexity of the learning model and improving the accuracy of artificial-intelligence medical image reading through such refinement.
  • The present invention was created to solve the above problems. It is an object of the present invention to provide a medical image reading system and method, based on the generation of refined artificial intelligence reinforcement learning data, that extract the reinforcement learning data for medical image reading from the readings of medical image reading experts (imaging medicine specialists), use it as artificial intelligence learning data, and thereby reduce the computational cost and complexity of reading while improving accuracy.
  • Another object of the present invention is, in order to improve the performance of the medical image reading system, to improve the learning effect obtained from the images and to identify not only the presence and location of a lesion but also the various types of conditions that can appear in the same body part, by generating a reading-record supervised learning model that normalizes the text contained in well-trained radiologists' readings to produce refined learning data.
  • A further object of the present invention is, to improve the performance of the medical image reading system, to extract refined learning data by learning readings that have been verified through the generated reading-record supervised learning model, and to provide a structure and generation method for refined learning data of a new form which, together with the medical image, can enhance the learning effect and identify types of diseases through reinforcement learning.
  • It is also an object of the present invention to build a reading-record supervised learning model that extracts, from the refined learning data of this new structure, filters capable of identifying the type of condition, and that interacts with the existing convolutional neural network to form a converged convolutional neural network of a new structure which improves reading performance and can recognize the type of disease.
  • To achieve these objects, a medical image reading system through generation of refined artificial intelligence reinforcement learning data according to the present invention comprises a reading-record supervised learning unit that generates refined learning data in a normalized form extracted from the readings of a medical image reading expert, and a learning model generating unit that performs machine learning to read the medical image with the learning data refined by the reading-record supervised learning unit as input.
  • In the machine learning, the learning data extracted from the expert's readings of the medical image is automatically received as input, so that the learning data automatically becomes reinforcement learning data.
  • The reading-record supervised learning unit may include: a medical record data loading unit that reads data from the file location address of the reading; a labeling unit that classifies the reading, by body part, into sections including findings, conclusions, and recommendations, and labels disease-related words or phrases from the plain text of each section into sets; a feature extractor that extracts the disease-related words or phrases from the labeled readings and extracts common features from them; a feature matrix generator that generates a feature matrix by normalizing the extracted features; a feature analysis unit that, given an arbitrary reading, analyzes its features by mapping the reading onto the feature matrix; and a refined data generation unit that generates refined learning data from only the analyzed features.
  • The medical record data loading unit reads the file location, the total number of files, the length of each file, or a combination thereof from the file location address of the original reading data given as an input value, and loads the files into system memory.
  • The labeling unit labels the data loaded in memory by reading section for each body part, classifies the data into respective sets, and rearranges it in system memory. The feature extractor selectively extracts only the plain text of each section by comparing it with a standard medical term data set including SNOMED-CT, ICD-11, LOINC, and KCD-10, analyzes the word type, description form, description frequency, or a combination thereof for the extracted medical terms, and extracts the features of terms related to the presence or absence of a lesion, terms related to the location of a lesion, terms related to the description of symptoms, terms representing the type of condition, or a combination thereof.
  • The feature matrix generator maps the features extracted by the feature extractor against newly input plain-text data sets to generate a feature matrix in which terms of similar or identical meaning can be compared and analyzed. When an unrefined original reading is input, the feature analysis unit maps the plain text describing the reading onto the feature matrix to extract, analyze, and classify the presence or absence of a lesion, the location of the lesion, the symptoms, and the type of disease. The refined data generation unit then generates refined learning data from the data extracted, analyzed, and classified by the feature analysis unit.
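  • For illustration, the sketch below shows one plausible in-memory form for a single unit of such refined learning data, pairing the normalized facts extracted from a reading with the medical image it describes; the field names and the Python representation are assumptions made for this sketch, not a structure defined by the invention.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RefinedLearningRecord:
    """One hypothetical unit of refined learning data: the normalized facts
    extracted from an expert reading, kept alongside a reference to the
    medical image it describes. Field names are illustrative only."""
    image_path: str                        # the medical image the reading refers to
    body_part: str                         # e.g. "CHEST"
    lesion_present: bool                   # presence or absence of a lesion
    lesion_location: Optional[str] = None  # e.g. "right upper lobe"
    symptoms: List[str] = field(default_factory=list)
    disease_type: Optional[str] = None     # e.g. "carcinoma"

# example instance built from a hypothetical refined reading
record = RefinedLearningRecord(
    image_path="chest_0001.dcm",
    body_part="CHEST",
    lesion_present=True,
    lesion_location="right upper lobe",
    symptoms=["cough"],
    disease_type="carcinoma",
)
```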
  • In the learning model generation unit, training is performed by a converged convolutional neural network, which continuously takes as input the refined learning data generated from the readings of the medical image reading expert.
  • Deep learning is performed through this converged convolutional neural network, reducing the amount of computation and improving accuracy, and thereby improving overall performance.
  • The weights of the converged convolutional neural network are updated by backpropagation.
  • In another aspect, a medical image reading method through generation of refined artificial intelligence reinforcement learning data comprises: a reading-record supervised learning step of generating refined learning data in a normalized form extracted from the readings of a medical image reading expert; and a learning model generation step of performing machine learning to read the medical image with the learning data refined in the reading-record supervised learning step as input.
  • In the machine learning, the learning data extracted from the expert's readings of the medical image is automatically received as input, so that the learning data automatically becomes reinforcement learning data.
  • The reading-record supervised learning step may include: a medical record data loading step of reading data from the file location address of the reading; a labeling step of classifying the reading, by body part, into sections including findings, conclusions, and recommendations, and labeling disease-related words or phrases from the plain text of each section into sets; a feature extraction step of extracting common features from the extracted words or phrases; a feature matrix generation step of generating a feature matrix by normalizing the extracted features; a feature analysis step of analyzing features by mapping a given reading onto the feature matrix; and a refined data generation step of generating refined learning data from only the analyzed features.
  • The medical record data loading step reads the file location, the total number of files, the length of each file, or a combination thereof from the file location address of the original reading data given as an input value, and loads the files into system memory.
  • In the labeling step, the data loaded in system memory is labeled by reading section for each body part, classified into respective sets, and rearranged in system memory. In the medical term extraction step, only the plain text of each section is selectively extracted by comparison with a standard medical term data set including SNOMED-CT, ICD-11, LOINC, and KCD-10. The feature extraction step analyzes the word type, description form, description frequency, or a combination thereof for the extracted medical terms to extract the features of terms related to the presence or absence of a lesion, terms related to the location of a lesion, terms related to the description of symptoms, terms representing the type of condition, or a combination thereof.
  • The feature matrix generation step maps the extracted features against newly input plain-text data sets to generate a feature matrix in which terms of similar or identical meaning can be compared and analyzed. When an unrefined original reading is input, the feature analysis step maps the plain text describing the reading onto the feature matrix to extract, analyze, and classify the presence or absence of a lesion, the location of the lesion, the symptoms, and the type of disease. The refined data generation step then generates refined learning data from the extracted, analyzed, and classified data.
  • In the learning model generation step, training is performed by a converged convolutional neural network, which continuously takes as input the refined learning data generated from the readings of the medical image reading expert. Deep learning is performed through this converged convolutional neural network, reducing the amount of computation and improving accuracy, and thereby improving overall performance. The weights are updated by backpropagation.
  • According to the present invention, when a user reads the presence, location, and type of a lesion in a medical image using the convolutional neural network, the output value obtained from the convolutional neural network by analyzing the pixel information of the medical image is fused with the output value obtained, as a learning result of the reading-record supervised learning model, by analyzing the reading records of the same body region. As a result, readings that are more accurate than medical image readings using a conventional convolutional neural network can be obtained, the computational complexity of the convolutional neural network can be reduced, and types of conditions that could not be identified through a conventional convolutional neural network can be predicted.
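  • As a rough illustration of this fusion of the two output streams, the following sketch blends a per-class score vector from the image CNN with a score vector derived from the reading-record supervised learning model for the same body region; the weighting rule, the class set, and the function names are assumptions for this sketch, since the patent does not fix a particular fusion formula.

```python
import numpy as np

def fuse_predictions(cnn_scores: np.ndarray,
                     report_scores: np.ndarray,
                     report_weight: float = 0.3) -> np.ndarray:
    """Blend per-class probabilities from the pixel-based CNN with per-class
    probabilities derived from the reading-record analysis of the same body
    region (a simple convex combination, chosen only for illustration)."""
    fused = (1.0 - report_weight) * cnn_scores + report_weight * report_scores
    return fused / fused.sum()  # renormalize to a probability vector

# toy usage with three hypothetical classes: no lesion, benign, malignant
cnn_out = np.array([0.20, 0.50, 0.30])
report_out = np.array([0.10, 0.30, 0.60])
print(fuse_predictions(cnn_out, report_out))
```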
  • FIG. 1 is a view showing the concept of a conventional medical image reading system.
  • FIG. 2 is a view showing the concept of a medical image reading system and method through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • FIG. 3 is a view for explaining the configuration of a medical image reading system through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the configuration of a reading-record supervised learning unit for generating refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • FIG. 5 is a view showing a process of generating refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • FIG. 6 is a conceptual diagram illustrating the configuration of a converged convolutional neural network (CCNN) based on generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • CCNN: converged convolutional neural network
  • FIG. 7 is a flowchart of generating a feature matrix in the reading-record supervised learning model according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of extracting disease-related feature values from an arbitrary reading in the reading-record supervised learning model according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a medical image reading process through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • FIG. 1 is a view showing the concept of a conventional medical image reading system.
  • FIG. 2 is a view showing the concept of a medical image reading system and method through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • The conventional medical image reading system helps a doctor determine the presence of a lesion through medical images, using the medical images stored in databases of local sites, hospitals, and other medical institutions. The input medical image is applied to a learning model, which predicts lesions as the reading result.
  • The present invention proposes a structure that performs reinforcement learning to upgrade the learning model using readings, that is, the results produced when specialists read medical images.
  • A specialist writes a reading after reading a medical image, and the generated reading is used to improve the learning result of the learning model of the medical image reading system.
  • In other words, the present invention improves learning performance and reduces complexity in the process of generating a learning model for learning medical images by using the content of the readings.
  • FIG. 3 is a view for explaining the configuration of a medical image reading system through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • The medical image reading system 10 may include a reading-record supervised learning unit 100, a medical image learning unit 200, a medical image database 300, a reading database 400, and the like.
  • the medical image database 300 and the reading database 400 may be configured in one database.
  • A radiology specialist reads a medical image from the medical image database 300 and then writes a reading of the image, which is stored in the reading database 400.
  • the medical image is input to the medical image learning unit 200 to perform machine learning.
  • The reading stored in the reading database 400 is input to the reading-record supervised learning unit 100, which extracts the characteristics of the reading and provides them to the medical image learning unit 200 to improve the learning performance (amount of computation, complexity) for the medical image.
  • In other words, the reading-record supervised learning unit 100 extracts, from the readings written by specialists for the medical images in the medical image database 300, the refined learning data required by the medical image learning unit 200, which performs learning with the medical images as input, thereby improving the learning performance for the medical images.
  • FIG. 4 is a block diagram showing the configuration of a reading-record supervised learning unit for generating refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • As shown in FIG. 4, the reading-record supervised learning unit 100 includes a reference feature matrix extractor 110 comprising: a medical record data loading unit 111 that reads data from the file location address of the reading; a labeling processor 112 that classifies the reading, for each body part, into sections including findings, conclusions, and recommendations, and labels disease-related words or phrases from the plain text of each section into sets; a feature extractor 113 that extracts the disease-related words or phrases from the labeled readings and extracts common features from them; and a feature matrix generator 114 that generates a feature matrix by normalizing the extracted features.
  • the resulting feature matrix is stored in a database and can be used to analyze features by mapping incoming readings to the feature matrix.
  • In addition, the reading-record supervised learning unit 100 further comprises a supervised learning feature analysis unit 120 that, given an arbitrary reading, analyzes its features by mapping the reading onto the feature matrix through supervised learning, and a refined data generation unit 130 that generates refined learning data from only the analyzed features.
  • The medical record data loading unit 111 reads the file location, the total number of files, the length of each file, or a combination thereof from the file location address of the original reading data given as an input value, and loads it into system memory or auxiliary memory so that it can be used for the subsequent reading-record supervised learning.
  • The labeling processor 112 labels the data loaded in system memory or auxiliary memory by reading section for each body part, classifies the data into sets, and rearranges it in system memory or auxiliary memory.
  • The feature extractor 113 selectively extracts only the plain text of each section by comparing it with a standard medical term data set including SNOMED-CT, ICD-11, LOINC, and KCD-10, analyzes the word type, description form, description frequency, or a combination thereof for the extracted medical terms, and extracts the features of terms related to the presence or absence of a lesion, terms related to the location of a lesion, terms related to the description of symptoms, terms representing the type of condition, or a combination thereof.
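  • A minimal sketch of this term-characterization step is shown below; the tiny vocabularies stand in for the standard term sets (SNOMED-CT, ICD-11, LOINC, KCD-10), which are far too large to reproduce, and the category names are assumptions made only for the example.

```python
from collections import Counter
from typing import Dict, List

# Hypothetical micro term sets; a real system would draw on SNOMED-CT,
# ICD-11, LOINC, or KCD-10 releases, which are not reproduced here.
TERM_CATEGORIES: Dict[str, set] = {
    "lesion_presence": {"nodule", "mass", "opacity", "no evidence"},
    "lesion_location": {"right upper lobe", "left lower lobe", "liver"},
    "symptom":         {"pain", "cough", "dyspnea"},
    "disease_type":    {"pneumonia", "tuberculosis", "carcinoma"},
}

def characterize_terms(plain_text: str) -> Dict[str, List[str]]:
    """Tag phrases of a report section by the feature category they belong to
    (presence of a lesion, location, symptom description, disease type)."""
    text = plain_text.lower()
    return {category: [t for t in terms if t in text]
            for category, terms in TERM_CATEGORIES.items()}

def description_frequencies(plain_text: str) -> Counter:
    """Rudimentary description-frequency analysis used to weight features."""
    return Counter(plain_text.lower().split())

example = "Irregular mass in the right upper lobe, suspicious for carcinoma."
print(characterize_terms(example))
```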
  • The feature matrix generator 114 generates a feature matrix that makes it possible to compare and analyze whether terms have similar or identical meanings, by mapping the features extracted by the feature extractor 113 against newly input plain text.
  • When an unrefined original reading is input, the supervised learning feature analysis unit 120 applies it to the supervised learning model and maps the plain text describing the reading onto the feature matrix to extract, analyze, and classify the presence or absence of a lesion, the location of the lesion, the symptoms, the type of condition, and the like.
  • the refined data generation unit 130 generates the refined learning data from the data extracted, analyzed, and classified by the supervised feature analysis unit 120.
  • the refined learning data corresponds to additional information for improving performance of reading a medical image.
  • The supervised learning feature analysis unit 120 performs supervised learning by first learning the reading of any newly input medical image on the basis of the well-defined feature matrix for that type of medical image, and then determining whether the corresponding reading has the characteristics of the feature matrix.
  • Disease-related words or phrases are extracted from the newly written readings of any medical images through natural language processing (NLP) and input into the supervised learning model, from which the features are extracted.
  • NLP: natural language processing
  • When any new reading is input, it is classified by comparison with the existing feature matrix.
  • At this point, either the process of the reference feature matrix extractor 110, comprising the medical record data loading unit 111, the labeling processor 112, the feature extractor 113, and the feature matrix generator 114, may be performed as it is, or the reading may be input into the supervised learning model and classified.
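  • The classification of a new reading against the existing feature matrix could look roughly like the sketch below, where the disease-related terms produced by the NLP step are projected onto the stored vocabulary and compared with each class row by cosine similarity; the bag-of-terms representation and the similarity measure are illustrative choices, not requirements of the invention.

```python
from typing import Dict, List
import numpy as np

def vectorize(terms: List[str], vocab: Dict[str, int]) -> np.ndarray:
    """Project the disease-related terms of a new reading onto the stored
    feature-matrix vocabulary (a simple bag-of-terms, for illustration)."""
    v = np.zeros(len(vocab))
    for term in terms:
        if term in vocab:
            v[vocab[term]] += 1.0
    return v

def classify_reading(terms: List[str],
                     vocab: Dict[str, int],
                     class_rows: Dict[str, np.ndarray]) -> str:
    """Compare the new reading with each row of the existing feature matrix
    and return the closest class by cosine similarity."""
    v = vectorize(terms, vocab)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    return max(class_rows, key=lambda c: cosine(v, class_rows[c]))

# toy example with a hypothetical two-class feature matrix
vocab = {"nodule": 0, "carcinoma": 1, "clear": 2}
rows = {"malignant": np.array([1.0, 1.0, 0.0]), "normal": np.array([0.0, 0.0, 1.0])}
print(classify_reading(["nodule", "carcinoma"], vocab, rows))
```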
  • FIG. 5 is a view showing a process of generating refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • A radiologist's reading (R_k) is input, and a set is created that extracts only the content corresponding to the findings (F), a set that extracts only the content of the conclusion (C), and a set that extracts only the content of the recommendation (R); each set is labeled. These sets must be distinguished by body part (BP).
  • FM_1, ..., FM_x are the feature metrics extracted from the F, C, and R sets of the labeled readings.
  • In the feature metric for the x-th feature, all k synonyms, expressions, and vocabulary items a_1, ..., a_k representing the same disease or condition converge on (are dominated by) a single canonical value A, and the entire feature matrix consists of such metrics indicating, for example, the disease name, the location expression, and the severity.
  • The generated result RR_x is the output obtained when the x-th reading R_x is refined through the feature matrix.
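  • The sketch below mirrors this structure on a toy scale: each feature metric FM maps its synonyms a_1, ..., a_k onto a canonical value A, and pushing a raw reading R_x through the metrics yields the refined row RR_x; the vocabularies are invented purely to show the shape of the data.

```python
from typing import Dict, List

# One feature metric FM: every synonym / expression a_1 ... a_k for the same
# disease or finding converges on a single canonical label A.
# The vocabularies below are invented solely to illustrate the structure.
FEATURE_METRICS: Dict[str, Dict[str, str]] = {
    "disease_name": {"ca": "carcinoma", "cancer": "carcinoma", "carcinoma": "carcinoma"},
    "location":     {"rul": "right upper lobe", "right upper lobe": "right upper lobe"},
    "severity":     {"mild": "low", "marked": "high", "severe": "high"},
}

def refine_reading(reading_text: str) -> Dict[str, List[str]]:
    """Produce the refined row RR_x for one reading R_x by pushing every
    recognised synonym through its feature metric to the canonical value."""
    text = reading_text.lower()
    refined: Dict[str, List[str]] = {}
    for metric_name, synonym_map in FEATURE_METRICS.items():
        canonical = sorted({canon for syn, canon in synonym_map.items() if syn in text})
        if canonical:
            refined[metric_name] = canonical
    return refined

print(refine_reading("Severe Ca in the RUL."))
# -> {'disease_name': ['carcinoma'], 'location': ['right upper lobe'], 'severity': ['high']}
```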
  • FIG. 6 is a conceptual diagram illustrating the configuration of a converged convolutional neural network (CCNN) based on generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • CCNN: converged convolutional neural network
  • In this way, a fused (converged) convolutional neural network can be constructed. For example, the presence or absence of a lesion, the location of the lesion, the symptoms, and the type of disease are extracted, analyzed, and classified from the plain text describing the reading in the refined AI data, so that the convolutional stages can concentrate their prediction on the regions that need to be predicted.
  • The remaining regions therefore add relatively little computational complexity. That is, by learning precisely about the area where the lesion exists and learning differently according to the symptoms or the type of lesion, complexity is reduced and learning performance is improved compared with applying the same learning intensity to every part of the input image.
  • The reading written for the medical image is loaded, and the medical record is then analyzed according to the supervised learning model.
  • Information on the presence or absence of a lesion, the location of the lesion, the symptoms, and the type of condition is extracted and classified, and a convolutional neural network fused with the supervised learning model is constructed.
  • The fused convolutional neural network has a plurality of convolutional layers, and at each layer a customized convolution is performed using the result of the supervised learning.
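  • One way such a fusion could be wired is sketched below in PyTorch, assuming the refined reading data has already been encoded as a fixed-length feature vector: the image branch analyses pixel information, the reading-derived vector is fused before classification, and the weights are updated by backpropagation of a supervised loss. Fusing only once before the classifier is a simplification of the per-layer customized convolution described above, and the layer sizes, tensor shapes, and class count are illustrative choices, not the patent's specification.

```python
import torch
import torch.nn as nn

class ConvergedCNN(nn.Module):
    """Toy fused network: image features from convolutional layers are
    concatenated with a feature vector derived from the refined reading
    data before classification. Sizes are illustrative only."""

    def __init__(self, num_classes: int = 3, report_dim: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 16 * 16 + report_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor, report_features: torch.Tensor) -> torch.Tensor:
        x = self.conv(image).flatten(1)             # pixel-information branch
        x = torch.cat([x, report_features], dim=1)  # fuse with refined reading features
        return self.classifier(x)

# weights of both branches are updated together by back-propagation
model = ConvergedCNN()
image = torch.randn(2, 1, 64, 64)                   # two 64x64 single-channel images
report = torch.randn(2, 16)                         # refined reading feature vectors
loss = nn.CrossEntropyLoss()(model(image, report), torch.tensor([0, 2]))
loss.backward()                                      # reverse propagation of the error
```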
  • the present invention can create a learning model with further improved performance.
  • That is, the radiologists' readings are loaded, labeled by findings, conclusions, and recommendations for each body part, the features are extracted, and the resulting feature matrix is generated and stored.
  • A feature matrix is likewise extracted from new medical image readings and mapped against the stored feature matrix to derive refined learning data. By segmenting this feature matrix, information about the presence or absence of a lesion, the location of the lesion, the symptoms, and the type of condition can be extracted, and a converged CNN can be constructed and trained on this basis.
  • The converged convolutional neural network according to the present invention is configured such that its weights are updated by backpropagation, reflecting the specialist's evaluation of the result.
  • In other words, the reading result at the output is propagated backward to the hidden layers and the convolutional layers, and the weights are corrected so that more accurate reading becomes possible.
  • FIG. 7 is a flowchart of generating a feature matrix in the reading-record supervised learning model according to an embodiment of the present invention.
  • The feature matrix generation process of the supervised learning model first loads the DICOM metadata and the reading of the medical image from the network or local storage (S110). Next, the body-part field included in the DICOM metadata of the loaded medical image is extracted (S120). Subsequently, the plain text is inserted into the corresponding sets by labeling the reading of the medical image by findings, conclusions, and recommendations (S130).
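  • Steps S110 to S130 could be prototyped roughly as follows; pydicom is assumed as the library for reading the DICOM header, BodyPartExamined (tag 0018,0015) is the standard attribute holding the body-part field, and the keyword-based section split as well as all function names are simplifications made only for this sketch.

```python
import pydicom  # assumed available; used here only to read the DICOM header

def load_case(dicom_path: str, report_path: str):
    """S110: load the DICOM metadata of the medical image and its reading
    from local storage (network retrieval would work the same way)."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    with open(report_path, encoding="utf-8") as f:
        report = f.read()
    return ds, report

def extract_body_part(ds) -> str:
    """S120: pull the body-part field out of the DICOM metadata
    (the standard BodyPartExamined attribute, tag 0018,0015)."""
    return str(getattr(ds, "BodyPartExamined", "UNKNOWN")).upper()

def label_report(report: str) -> dict:
    """S130: insert the plain text of the reading into findings /
    conclusion / recommendation sets. The keyword split is illustrative."""
    sections, current = {"findings": [], "conclusion": [], "recommendation": []}, "findings"
    for line in report.splitlines():
        head = line.strip().lower()
        if head.startswith("conclusion"):
            current = "conclusion"
        elif head.startswith("recommendation"):
            current = "recommendation"
        else:
            sections[current].append(line.strip())
    return {name: " ".join(parts) for name, parts in sections.items()}
```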
  • FIG. 8 is a flowchart of extracting disease-related feature values from an arbitrary reading in the reading-record supervised learning model according to an embodiment of the present invention.
  • The process of extracting disease-related feature values from an arbitrary reading first loads the DICOM metadata and the reading of the medical image from the network or local storage (S210). Subsequently, the body-part field included in the DICOM metadata of the medical image is extracted (S220).
  • Next, the plain text is extracted for each section by labeling the reading of the medical image by findings, conclusions, and recommendations (S230).
  • The extracted plain text is then mapped to the elements of the feature matrix for the same body part (S240).
  • From the mapping result of S240, the text data in which similar or identical terms or lexical expressions occur is extracted (S250).
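  • Steps S240 and S250 amount to matching the labeled plain text against the feature-matrix terms stored for the same body part and keeping the similar or identical hits; the sketch below uses difflib's sequence ratio as a stand-in similarity measure, with the term list and threshold invented for the example.

```python
import difflib
from typing import List, Tuple

def find_similar_terms(section_text: str,
                       feature_terms: List[str],
                       threshold: float = 0.85) -> List[Tuple[str, str, float]]:
    """S240/S250: compare the word groups of the labeled plain text with the
    terms of the feature matrix for the same body part and keep the pairs
    whose lexical similarity is identical or above a threshold."""
    words = section_text.lower().replace(",", " ").split()
    # build 1- to 3-word candidate phrases from the section text
    phrases = {" ".join(words[i:i + n]) for n in (1, 2, 3) for i in range(len(words) - n + 1)}
    matches = []
    for term in feature_terms:
        for phrase in phrases:
            score = difflib.SequenceMatcher(None, term.lower(), phrase).ratio()
            if score >= threshold:
                matches.append((term, phrase, round(score, 2)))
    return sorted(matches, key=lambda m: -m[2])

# hypothetical feature-matrix terms for the same body part
terms = ["pulmonary nodule", "right upper lobe", "consolidation"]
text = "Small pulmonary nodules in the right upper lobe without consolidation."
print(find_similar_terms(text, terms))
```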
  • The extracted data can then be analyzed to obtain information on the presence or absence of lesions, the location of the lesions, the symptoms, and the type of condition; on this basis a converged CNN can be constructed and trained.
  • FIG. 9 is a flowchart of a medical image reading process through generation of refined artificial intelligence reinforcement learning data according to an embodiment of the present invention.
  • In the medical image reading process through generation of refined artificial intelligence reinforcement learning data, reading-record supervised learning is first performed to generate refined learning data in a normalized form extracted from the readings of a medical image reading expert (S310).
  • Next, machine learning is performed to read the medical image with the learning data refined in the reading-record supervised learning step as input (S320).
  • Accordingly, when a user reads the presence or absence of a lesion, the location of the lesion, the type of disease, and the like in a medical image using the convolutional neural network, the output value of the convolutional neural network obtained by analyzing the pixel information of the medical image is fused with the output value obtained by analyzing the reading records of the same body region, so that more accurate reading results can be produced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present invention relates to a system for interpreting a medical image through the generation of refined artificial intelligence reinforcement learning data, and to an associated method, wherein the system extracts reinforcement learning data for medical image interpretation from the reading text of a medical image interpretation expert so as to use it as artificial intelligence learning data, which makes it possible to reduce the computational cost and complexity of medical image interpretation using artificial intelligence and to improve its accuracy.
PCT/KR2018/005641 2018-02-26 2018-05-17 Medical image interpretation system through generation of refined artificial intelligence reinforcement learning data, and associated method WO2019164064A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0022735 2018-02-26
KR1020180022735A KR102153920B1 (ko) 2018-02-26 2018-02-26 Medical image reading system through generation of refined artificial intelligence reinforcement learning data, and method thereof

Publications (1)

Publication Number Publication Date
WO2019164064A1 true WO2019164064A1 (fr) 2019-08-29

Family

ID=67687142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/005641 WO2019164064A1 (fr) Medical image interpretation system through generation of refined artificial intelligence reinforcement learning data, and associated method

Country Status (2)

Country Link
KR (1) KR102153920B1 (fr)
WO (1) WO2019164064A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104595A (zh) * 2019-12-16 2020-05-05 华中科技大学 Text-information-based deep reinforcement learning interactive recommendation method and system
CN112190269A (zh) * 2020-12-04 2021-01-08 兰州大学 Method for constructing an auxiliary depression recognition model based on multi-source EEG data fusion
CN116543918A (zh) * 2023-07-04 2023-08-04 武汉大学人民医院(湖北省人民医院) Method and device for extracting multimodal disease features

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102140402B1 (ko) * 2019-09-05 2020-08-03 주식회사 루닛 Method and apparatus for quality control of medical image reading using machine learning
KR102325555B1 (ko) * 2019-11-27 2021-11-12 주식회사 에프앤디파트너스 Apparatus for automatically recommending doctor's opinions on medical images
KR102183310B1 (ko) * 2020-03-02 2020-11-26 국민대학교산학협력단 Deep-learning-based expert image interpretation apparatus and method through expertise transplantation
KR102240932B1 (ko) * 2020-03-23 2021-04-15 한승호 Oral data management method, apparatus, and system
KR102365287B1 (ko) * 2020-03-31 2022-02-18 인제대학교 산학협력단 Method and system for automatically describing brain MRI acquisition imaging techniques
KR102426091B1 (ko) * 2020-06-26 2022-07-29 고려대학교 산학협력단 System for refining pathology test result reports through deep learning based on an ontology database
KR102480134B1 (ko) * 2020-07-15 2022-12-22 주식회사 루닛 Method and apparatus for quality control of medical image reading using machine learning
KR102213924B1 (ko) * 2020-07-15 2021-02-08 주식회사 루닛 Method and apparatus for quality control of medical image reading using machine learning
KR102516820B1 2020-11-19 2023-04-04 주식회사 테렌즈 3D convolutional neural network for Alzheimer's disease detection
KR102516868B1 2020-11-19 2023-04-04 주식회사 테렌즈 3D convolutional neural network for Parkinson's disease detection
KR102476957B1 (ko) * 2020-12-11 2022-12-12 가천대학교 산학협력단 Apparatus and method for providing medical-image-based holograms
KR102507315B1 (ko) * 2021-01-19 2023-03-08 주식회사 루닛 Method and apparatus for quality control of medical image reading using machine learning
KR102326740B1 (ko) * 2021-04-30 2021-11-17 (주)제이엘케이 Method and apparatus for implementing an automatically evolving platform through automated machine learning
WO2022260292A1 (fr) * 2021-06-11 2022-12-15 주식회사 라인웍스 Method for extracting cancer pathology report data, and system and program for implementing same
WO2023205177A1 (fr) * 2022-04-19 2023-10-26 Synthesis Health Inc. Combinaison de compréhension de langage naturel et de segmentation d'image pour remplir intelligemment des rapports de texte

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110075920A1 (en) * 2009-09-14 2011-03-31 Siemens Medical Solutions Usa, Inc. Multi-Level Contextual Learning of Data
KR20140042531A (ko) * 2012-09-28 2014-04-07 삼성전자주식회사 카테고리별 진단 모델을 이용한 병변 진단 장치 및 방법
KR20150098119A (ko) * 2014-02-19 2015-08-27 삼성전자주식회사 의료 영상 내 거짓양성 병변후보 제거 시스템 및 방법
KR20160066481A (ko) * 2014-11-29 2016-06-10 주식회사 인피니트헬스케어 지능형 의료 영상 및 의료 정보 검색 방법
KR20160096460A (ko) * 2015-02-05 2016-08-16 삼성전자주식회사 복수의 분류기를 포함하는 딥 러닝 기반 인식 시스템 및 그 제어 방법

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104595A (zh) * 2019-12-16 2020-05-05 华中科技大学 Text-information-based deep reinforcement learning interactive recommendation method and system
CN111104595B (zh) * 2019-12-16 2023-04-07 华中科技大学 Text-information-based deep reinforcement learning interactive recommendation method and system
CN112190269A (zh) * 2020-12-04 2021-01-08 兰州大学 Method for constructing an auxiliary depression recognition model based on multi-source EEG data fusion
CN112190269B (zh) * 2020-12-04 2024-03-12 兰州大学 Method for constructing an auxiliary depression recognition model based on multi-source EEG data fusion
CN116543918A (zh) * 2023-07-04 2023-08-04 武汉大学人民医院(湖北省人民医院) Method and device for extracting multimodal disease features
CN116543918B (zh) * 2023-07-04 2023-09-22 武汉大学人民医院(湖北省人民医院) Method and device for extracting multimodal disease features

Also Published As

Publication number Publication date
KR20190102399A (ko) 2019-09-04
KR102153920B1 (ko) 2020-09-09

Similar Documents

Publication Publication Date Title
WO2019164064A1 (fr) Medical image interpretation system through generation of refined artificial intelligence reinforcement learning data, and associated method
US10929420B2 (en) Structured report data from a medical text report
WO2022227294A1 Method and system for disease risk prediction based on multimodal fusion
US11610678B2 (en) Medical diagnostic aid and method
US11244755B1 (en) Automatic generation of medical imaging reports based on fine grained finding labels
EP3557584A1 Artificial intelligence querying for radiology reports in medical imaging
US20210375488A1 (en) System and methods for automatic medical knowledge curation
CN112151183A Entity recognition method for Chinese electronic medical records based on a Lattice LSTM model
CN113707307A Condition analysis method and apparatus, electronic device, and storage medium
CN114242194A Artificial-intelligence-based natural language processing apparatus and method for medical imaging diagnosis reports
Zhao et al. Exploiting classification correlations for the extraction of evidence-based practice information
Waheeb et al. An efficient sentiment analysis based deep learning classification model to evaluate treatment quality
WO2024005413A1 Artificial-intelligence-based method and device for extracting information from an electronic document
CN109859813B Entity modifier recognition method and apparatus
CN113658688B Clinical decision support method based on word-segmentation-free deep learning
US11809826B2 (en) Assertion detection in multi-labelled clinical text using scope localization
CN112309519B Multi-model-based system for structured processing of medication in electronic medical records
Cui et al. Intelligent recommendation for departments based on medical knowledge graph
Wu et al. Developing EMR-based algorithms to Identify hospital adverse events for health system performance evaluation and improvement: Study protocol
Fei et al. Adversarial shared-private model for cross-domain clinical text entailment recognition
Garg et al. Reliability Analysis of Psychological Concept Extraction and Classification in User-penned Text
Teja et al. Autism Spectrum Disorder Detection Techniques
WO2024043744A1 Device and method for supporting annotation generation
WO2024117528A1 Method and system for predicting diabetes mellitus using clinical data and genetic data applied at a race-specific frequency
WO2022211501A1 Apparatus and method for determining an anatomical position using a fiber-optic bronchoscopy image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18906826

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18906826

Country of ref document: EP

Kind code of ref document: A1