WO2021137454A1 - Artificial intelligence-based method and system for analyzing user medical information

Artificial intelligence-based method and system for analyzing user medical information

Info

Publication number
WO2021137454A1
Authority
WO
WIPO (PCT)
Prior art keywords
user information
artificial intelligence
user
sharer
analysis
Prior art date
Application number
PCT/KR2020/017506
Other languages
English (en)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
Original Assignee
(주)제이엘케이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)제이엘케이 filed Critical (주)제이엘케이
Publication of WO2021137454A1

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Definitions

  • The present disclosure relates to a method and apparatus for analyzing user medical information based on artificial intelligence, and more specifically to a method and apparatus for providing a user with an artificial intelligence-based analysis result of the user's medical information and allowing a sharer selected by the user to check that result.
  • Artificial intelligence (AI) refers to technology that enables machines such as computers to perform the thinking, learning, and analysis that human intelligence makes possible.
  • Recently, applications of AI in the medical industry have been increasing.
  • An artificial neural network is one technique for implementing machine learning.
  • In general, an artificial neural network consists of an input layer, hidden layers, and an output layer.
  • Each layer is composed of neurons, and the neurons in each layer are connected to the outputs of the neurons in the previous layer.
  • Each neuron takes the inner product of the previous layer's output values and the corresponding connection weights, adds a bias, applies a generally non-linear activation function to the result, and passes the output value to the neurons of the next layer.
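  • As a minimal illustration of this forward pass (an editorial sketch, not part of the original disclosure; the shapes, random seed, and tanh activation are arbitrary assumptions):

```python
import numpy as np

def dense_layer(x, weights, bias, activation=np.tanh):
    """One fully connected layer: inner product of the previous layer's
    outputs with the connection weights, plus a bias, passed through a
    non-linear activation."""
    return activation(weights @ x + bias)

# Illustrative shapes only: 4 inputs -> 3 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                                  # previous layer's outputs
hidden = dense_layer(x, rng.normal(size=(3, 4)), np.zeros(3))
output = dense_layer(hidden, rng.normal(size=(1, 3)), np.zeros(1))
print(output)
```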
  • Convolutional neural networks (CNNs) are attracting much attention in the field of image recognition, where their performance overwhelms that of existing machine learning techniques.
  • The structure of a convolutional neural network is almost the same as that of a general artificial neural network; its additional components are convolutional layers and pooling layers.
  • One technical objective of the present disclosure is to provide a method and system for analyzing user medical information based on artificial intelligence.
  • Another technical objective of the present disclosure is to provide a method and system for providing the analysis result of user medical information to a sharer based on artificial intelligence.
  • According to an embodiment of the present disclosure, a user information analysis system may include a user interface that receives user information and an analysis request for the user information from a user, an artificial intelligence model storage that stores artificial intelligence models, an artificial intelligence model interface that, in response to the analysis request, receives at least one artificial intelligence model from the storage and performs the analysis on the user information, and a data space that stores the user information and the analysis result for the user information.
  • The artificial intelligence model interface may additionally receive an artificial intelligence sharing recommendation model from the artificial intelligence model storage, and the sharing recommendation model may generate a sharer recommendation list based on the analysis result of the user information.
  • The system may further include a sharer interface; the user interface may receive an input from the user selecting a sharer from the generated sharer recommendation list, and the sharer interface may transmit the analysis result of the user information to the selected sharer.
  • The artificial intelligence model interface may use the sharing recommendation model to classify the user based on the analysis result, and the sharer recommendation list may be generated based on the classification result.
  • The artificial intelligence model storage may receive inputs related to storing an artificial intelligence model and to deleting or updating a stored model.
  • The sharer recommendation list may include a plurality of sharers.
  • The plurality of sharers included in the sharer recommendation list may be listed in order of recommendation.
  • The sharing recommendation model may generate the sharer recommendation list based on filtering information input by the user.
  • The analysis result of the user information may be provided to the user through the user interface.
  • The sharer interface may receive feedback on the analysis result of the user information from the selected sharer, and the feedback may be stored in the data space.
  • The user information may include the user's medical information.
  • The artificial intelligence model interface may extract feature points from the user information using the received artificial intelligence model(s), and the extracted feature points may be shared among the models.
  • According to another embodiment of the present disclosure, a user information analysis method performed by a user information analysis system including a user interface, an artificial intelligence model storage, an artificial intelligence model interface, and a data space may include receiving user information and an analysis request for the user information from the user through the user interface, performing the analysis on the requested user information through the artificial intelligence model interface using at least one artificial intelligence model stored in the artificial intelligence model storage, and storing the user information and the analysis result for the user information in the data space.
  • The method may further include additionally using an artificial intelligence sharing recommendation model stored in the artificial intelligence model storage through the artificial intelligence model interface; the sharing recommendation model may generate a sharer recommendation list based on the analysis result of the user information.
  • The system may further include a sharer interface, and the method may further include receiving from the user an input selecting at least one sharer from the generated sharer recommendation list, and transmitting the analysis result of the user information to the selected sharer through the sharer interface.
  • The method may further include classifying the user based on the analysis result using the sharing recommendation model through the artificial intelligence model interface; the sharer recommendation list may be generated based on the classification result.
  • The method may receive, through the artificial intelligence model storage, an input regarding at least one of storing an artificial intelligence model and deleting or updating a stored model.
  • The sharer recommendation list may include a plurality of sharers.
  • The plurality of sharers included in the sharer recommendation list may be listed in order of recommendation.
  • The sharing recommendation model may generate the sharer recommendation list based on filtering information input by the user.
  • The method may further include providing the analysis result of the user information to the user through the user interface.
  • The method may further include receiving feedback on the analysis result of the user information from the selected sharer through the sharer interface, and storing the feedback in the data space.
  • The user information may include the user's medical information.
  • Performing the analysis on the requested user information may include extracting feature points from the user information using the at least one artificial intelligence model, and the extracted feature points may be shared among the models.
  • According to another embodiment, the user information analysis method may include receiving user information and an analysis request for the user information from the user through the user interface, receiving at least one artificial intelligence model from the artificial intelligence model storage through the artificial intelligence model interface in response to the analysis request and performing the analysis on the user information, and storing the user information and the analysis result for the user information in the data space.
  • According to the present disclosure, a method and system for analyzing user medical information based on artificial intelligence may be provided.
  • According to the present disclosure, a method and system for providing the analysis result of user medical information to a sharer based on artificial intelligence may be provided.
  • FIG. 1 is a diagram illustrating the configuration of an artificial intelligence-based user medical information analysis system according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram showing the configuration of a deep learning model learning apparatus for analyzing user medical information, according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating a process of generating and analyzing context information of an image according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram for explaining an embodiment of a convolutional neural network for generating a multi-channel feature map.
  • FIG. 5 is a diagram for explaining an embodiment of a pooling technique.
  • In the present disclosure, when a component is said to be "connected", "coupled", or "linked" to another component, this includes not only a direct connection but also an indirect connection in which another component exists in between.
  • When a component is said to "include" or "have" another component, this means that still other components may also be included rather than excluded, unless otherwise stated.
  • In the present disclosure, terms such as first and second are used only to distinguish one component from another and, unless otherwise specified, do not limit the order or importance of the components. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • In the present disclosure, components that are distinguished from each other are distinguished to clearly explain their respective characteristics, which does not mean that the components are necessarily separate. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed into a plurality of hardware or software units. Accordingly, even if not specifically mentioned, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • The components described in the various embodiments are not necessarily essential, and some may be optional. Accordingly, an embodiment composed of a subset of the components described in one embodiment is also included in the scope of the present disclosure, as are embodiments that include other components in addition to those described.
  • A "unit" means a unit for processing at least one function or operation, and may be implemented in hardware, software, or a combination of the two.
  • FIG. 1 is a diagram illustrating the configuration of an artificial intelligence-based user medical information analysis system according to an embodiment of the present disclosure.
  • Referring to FIG. 1, an artificial intelligence-based user medical information analysis system 100 may include a user interface 110, a data space 111, an artificial intelligence model interface 112, an artificial intelligence model storage 113, a sharer interface 115, and a visualization module 116.
  • The user 101 may transmit and/or receive information to and from the user medical information analysis system 100 through the user interface 110.
  • Here, the user 101 may refer to a user device.
  • A user device may be a computing device that includes a memory, a processor, a display, and/or a communication module.
  • The user 101 may store user information in the data space 111 through the user interface 110.
  • The user 101 may view the stored user information through the user interface 110.
  • The visualization module 116, described later, may be used to view user information stored in the data space 111.
  • The user 101 may also modify or delete user information stored in the data space 111.
  • The user information may include the user's personal information (e.g., age, gender, occupation, address, current location, etc.) and medical information (e.g., diagnosis and treatment history, medical images, etc.).
  • However, the present disclosure is not necessarily limited thereto, and the user information may include various other information necessary for performing the medical information analysis method according to the present disclosure.
  • The user 101 may request the user medical information analysis system 100 to analyze the user information through the user interface 110.
  • The user 101 may select the visualization module 116 through the user interface 110 to check the artificial intelligence analysis result for the input user information.
  • Specifically, the user 101 may select user information stored in the data space 111 and/or the analysis result of that user information.
  • The selected user information and/or analysis result may be transmitted to the visualization module 116.
  • The visualization module 116 may process the transmitted user information and/or analysis result into a form that the user can view.
  • The visualization module may then transmit the processed user information and/or analysis result to the user 101 through the user interface 110.
  • The user 101 may view the transmitted user information and/or analysis result through, for example, a display provided in the user device.
  • Alternatively, the visualization module 116 may be provided on the user 101 side.
  • That is, the user device may include the visualization module 116.
  • In this case, the user 101 may select user information stored in the data space 111 and/or the analysis result of that user information.
  • The selected user information and/or analysis result may be processed into a viewable form by the visualization module 116 located in the user device.
  • The analysis result of the user information may also be provided to the user 101 in the form of a report or a file rather than through the visualization module 116.
  • For example, the user may download the analysis result in a file format such as PDF, JPG, or GIF.
  • However, the present disclosure is not necessarily limited thereto, and the analysis result of the user information may be provided to the user in various other forms.
  • The artificial intelligence model storage 113 may individually store a plurality of artificial intelligence models.
  • An artificial intelligence model suitable for analyzing the user information may be selected from the artificial intelligence model storage 113 and transmitted to the artificial intelligence model interface 112.
  • One or a plurality of models may be transmitted.
  • A developer 104 may modify and/or delete artificial intelligence models stored in the artificial intelligence model storage 113, or add new ones.
  • Here, the developer 104 may refer to a developer device.
  • A developer device may be a computing device that includes a memory, a processor, a display, and/or a communication module.
  • The artificial intelligence model interface 112 may perform the analysis of user information by loading the single model or plurality of models received from the artificial intelligence model storage 113.
  • The analysis may be performed by sequentially inputting the user information into the artificial intelligence model(s).
  • The user information to be analyzed may be transmitted from the data space 111. When the analysis is complete, the loaded model(s) may be deleted.
  • The artificial intelligence model interface 112 may extract feature points from a user medical image using an artificial neural network (e.g., a convolutional neural network (CNN)).
  • The extracted feature points of the user medical image may be shared among the artificial intelligence models.
  • That is, feature points extracted by one artificial intelligence model can be used by another.
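  • The following editorial sketch (not part of the original disclosure) illustrates the idea of extracting feature points once and sharing them between analysis models; the extractor, both model heads, and all thresholds are hypothetical stand-ins:

```python
import numpy as np

def extract_feature_points(medical_image):
    """Hypothetical shared feature extractor (stand-in for a CNN backbone):
    here just a few global statistics of the image."""
    img = np.asarray(medical_image, dtype=float)
    return np.array([img.mean(), img.std(), img.min(), img.max()])

def model_a(features):
    """First hypothetical analysis model consuming the shared features."""
    return {"finding_a": bool(features[3] > 0.9)}

def model_b(features):
    """Second hypothetical model reusing the same feature points."""
    return {"finding_b": bool(features[0] < 0.4)}

image = np.random.rand(64, 64)          # placeholder for a user medical image
shared = extract_feature_points(image)  # extracted once ...
result = {**model_a(shared), **model_b(shared)}  # ... used by several models
```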
  • A detailed analysis method for a user medical image will be described later with reference to FIGS. 2 to 5.
  • The artificial intelligence analysis result produced in the artificial intelligence model interface 112 may be stored in the data space 111.
  • The user 101 may check the artificial intelligence analysis result through the user interface 110.
  • When the user information includes voice information, the artificial intelligence model interface 112 may extract text from that voice information.
  • The extracted text may then be used for the user information analysis.
  • User medical information may also be digitized.
  • The artificial intelligence model interface 112 may classify users by combining user information, including digitized medical information and text extracted from the user's voice information. For example, when the input user information indicates blood pressure below the normal range, the user who input that information may be classified as a user with 'low blood pressure symptoms'. Likewise, when the input user information is an image containing a benign tumor whose shape is similar to that of prostate cancer, the user who input that information may be classified as a user at risk of 'prostate cancer'.
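  • The classification examples above can be pictured with a toy rule-based sketch (an editorial addition; the threshold values and field names are assumptions, not from the disclosure):

```python
NORMAL_SYSTOLIC_MMHG = (90, 120)  # assumed illustrative range, not from the disclosure

def classify_user(user_info):
    """Toy classification mirroring the examples in the text."""
    labels = []
    systolic = user_info.get("systolic_bp")
    if systolic is not None and systolic < NORMAL_SYSTOLIC_MMHG[0]:
        labels.append("low blood pressure symptoms")
    if user_info.get("image_finding") == "prostate-cancer-like benign tumor":
        labels.append("prostate cancer risk")
    return labels

print(classify_user({"systolic_bp": 85}))  # ['low blood pressure symptoms']
```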
  • The artificial intelligence model interface 112 may generate a sharer recommendation list 102 containing one or more sharers by using the classified user information.
  • The sharers in the sharer recommendation list 102 may be listed in order of recommendation.
  • When the user inputs a filter value, the generated sharer recommendation list 102 may be filtered accordingly.
  • The user 101 may check the generated sharer recommendation list 102 through the user interface 110.
  • For example, the artificial intelligence model interface 112 may create a hospital list with hospitals specializing in 'prostate cancer' as sharers. If the user then inputs a filter value excluding a specific hospital through the user interface, that hospital may be removed from the created list.
  • A filter value from the sharer 103 side may also be reflected in the created sharer recommendation list 102.
  • For example, a hospital may request its own removal from the generated sharer recommendation list and/or a change of its recommendation order. A sketch of such list filtering follows.
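```python
def apply_filters(recommendations, excluded_by_user=(), removal_requests=()):
    """Editorial sketch (hypothetical names): drop sharers excluded by the
    user's filter value and sharers that requested their own removal,
    keeping the original recommendation order."""
    blocked = set(excluded_by_user) | set(removal_requests)
    return [sharer for sharer in recommendations if sharer not in blocked]

ranked = ["Hospital A", "Hospital B", "Hospital C"]   # in recommendation order
print(apply_filters(ranked, excluded_by_user={"Hospital B"}))
# ['Hospital A', 'Hospital C']
```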
  • The user 101 may select a specific sharer 103 from the created sharer recommendation list 102.
  • The sharer 103 selected by the user 101 may access the visualization module 116 through the sharer interface 115 to check the user information input by the user 101 and the artificial intelligence analysis result for that information.
  • The sharer 103 may provide feedback through the sharer interface 115, such as correcting an image included in the artificial intelligence analysis result or entering a new one.
  • The sharer 103 may also provide feedback by writing a supplementary opinion on the artificial intelligence analysis result.
  • The corrected or newly entered artificial intelligence analysis result may be stored in the data space 111.
  • For example, the user may select hospital A.
  • The selected hospital A may then check the user medical image input by the user through the sharer interface.
  • Hospital A may also check the artificial intelligence analysis result for the user's medical image through the sharer interface.
  • The data space 111 and the artificial intelligence model storage 113 may be implemented using cloud technology, but are not necessarily limited thereto.
  • FIG. 2 is a diagram showing the configuration of a deep learning model learning apparatus for analyzing user medical information, according to an embodiment of the present disclosure.
  • The deep learning model learning apparatus 20 may be an embodiment of an artificial intelligence model loaded into the artificial intelligence model interface.
  • The deep learning model learning apparatus 20 may include a feature extractor 21, a context generator 22, and a feature and context analyzer 23.
  • The deep learning model learning apparatus 20 may extract features of the user medical image to be analyzed, generate context information based on the extracted features, and analyze the analysis target image based on the extracted features and the generated context information.
  • For example, the deep learning model learning apparatus 20 may classify an image or find the location of an object of interest by using the extracted features and the generated context information.
  • The input image of the deep learning model learning apparatus 20 may be at least one medical image (MRI, CT, X-ray, etc.).
  • The feature extractor 21 may analyze the input image to extract its features.
  • For example, the features may be local features for each region of the image.
  • The feature extractor 21 may extract the features of the input image using a general convolutional neural network (CNN) technique or a pooling technique.
  • The pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • However, the pooling technique referred to in the present disclosure is not limited to max pooling or average pooling, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • For example, the representative value used in the pooling technique may be, in addition to the maximum and average, at least one of a variance, a standard deviation, a mean, a mode (most frequent value), a minimum, and a weighted average.
  • The convolutional neural network of the present disclosure may be used to extract features such as edges and line colors from the input data (image), and may include a plurality of layers. Each layer may receive input data, process it, and generate output data.
  • The convolutional neural network may output, as output data, a feature map generated by convolving the input image or an input feature map with filter kernels.
  • The initial layers of a convolutional neural network may operate to extract low-level features such as edges or gradients from the input.
  • Subsequent layers can extract progressively more complex features such as eyes, noses, and so on. The detailed operation of the convolutional neural network will be described later with reference to FIG. 4.
  • A convolutional neural network may include, in addition to convolutional layers on which convolution operations are performed, pooling layers on which pooling operations are performed.
  • The pooling technique is used to reduce the spatial size of the data in a pooling layer.
  • Pooling techniques include max pooling, which selects the maximum value in the corresponding region, and average pooling, which selects the average value of the region.
  • Max pooling is generally used in the image recognition field. In pooling, the window size and the interval (stride) are generally set to the same value.
  • Here, the stride refers to the interval by which a filter is moved when it is applied to the input data, and may also be used to adjust the size of the output data. The detailed operation of the pooling technique will be described later with reference to FIG. 5, and a small sketch of a generic pooling operation follows.
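```python
import numpy as np

# Editorial sketch, not from the disclosure: generic pooling with a selectable
# representative value, covering max, average, and the other representatives
# mentioned above. 'Valid' windows only, no padding.
REPRESENTATIVES = {
    "max": np.max, "mean": np.mean, "min": np.min,
    "var": np.var, "std": np.std, "median": np.median,
}

def pool2d(image, window=2, stride=2, mode="max"):
    """Slide a window over the image and keep one representative value
    (max, mean, variance, ...) per region."""
    rep = REPRESENTATIVES[mode]
    h = (image.shape[0] - window) // stride + 1
    w = (image.shape[1] - window) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = rep(image[i * stride:i * stride + window,
                                  j * stride:j * stride + window])
    return out

print(pool2d(np.arange(16).reshape(4, 4)))  # 2x2 map of region maxima
```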
  • The feature extractor 21 may apply filtering to the analysis target image as pre-processing before extracting its features.
  • The filtering may be, for example, a Fast Fourier Transform (FFT), histogram equalization, motion artifact removal, or noise removal.
  • However, the filtering of the present disclosure is not limited to the methods listed above, and may include any type of filtering capable of improving image quality.
  • The context generator 22 may generate context information for the input image (the analysis target image) by using the features extracted by the feature extractor 21.
  • The context information may be a representative value indicating the whole or a partial region of the analysis target image.
  • For example, the context information may be global context information of the input image.
  • The context generator 22 may generate the context information by applying a convolutional neural network technique or a pooling technique to the features extracted by the feature extractor 21.
  • The pooling technique may be, for example, average pooling.
  • The feature and context analyzer 23 may analyze the image based on the features extracted by the feature extractor 21 and the context information generated by the context generator 22.
  • The feature and context analyzer 23 according to an embodiment may concatenate the local features for each region of the image, extracted by the feature extractor 21, with the global context reconstructed by the context generator 22, and use them together to classify the input image or to find the location of an object of interest in it. Since the information at a specific two-dimensional position in the input image then includes not only local feature information but also global context information, the feature and context analyzer 23 can more accurately recognize or classify input images whose actual content differs even though their local feature information is similar.
  • In this way, the invention according to an embodiment of the present disclosure enables more accurate and efficient learning and image analysis by using global context information in addition to the local features used by general convolutional neural network techniques. From this point of view, the invention according to the present disclosure may be referred to as a 'deep neural network through context analysis'.
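  • A minimal editorial sketch of this concatenation of local features with global context (the channel-first layout and the use of average pooling are assumptions for illustration):

```python
import numpy as np

def concat_local_and_global(feature_map):
    """Broadcast a global context vector (average-pooled features) to every
    spatial position and concatenate it with the local features there."""
    c, h, w = feature_map.shape
    context = feature_map.mean(axis=(1, 2))                  # global average pooling
    context_map = np.broadcast_to(context[:, None, None], (c, h, w))
    return np.concatenate([feature_map, context_map], axis=0)  # (2c, h, w)

fmap = np.random.rand(8, 16, 16)
print(concat_local_and_global(fmap).shape)  # (16, 16, 16)
```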
  • FIG. 3 is a diagram illustrating a process of generating and analyzing context information of an image according to an embodiment of the present disclosure.
  • The feature extractor 21 may extract features from the input image 301 and generate a feature image 302 containing the extracted feature information.
  • The extracted features may be features of local regions of the input image.
  • The input image 301 may be the input image of the image analysis apparatus or a feature map of any layer in a convolutional neural network model.
  • The feature image 302 may include a feature map and/or a feature vector obtained by applying a convolutional neural network technique and/or a pooling technique to the input image 301.
  • The context generator 22 may generate context information by applying a convolutional neural network technique and/or a pooling technique to the feature image 302 extracted by the feature extractor 21.
  • The context generator 22 may generate context information at various scales, such as the entire image, quadrant regions, or ninth regions, by adjusting the stride of the pooling in various ways.
  • Referring to FIG. 3, a full context information image 311 containing context information for the full image, a quadrant context information image 312 containing context information for each quarter-size region of the image, and a ninth-segment context information image 313 containing context information for each ninth-size region of the image may be obtained.
  • The feature and context analyzer 23 may use the feature image 302 together with the context information images 311, 312, and 313 to analyze a specific region of the analysis target image more accurately.
  • The feature extractor 21 may recognize the shape of an object based on its local features, but may not be able to accurately identify and classify the object from its shape alone.
  • The context generator 22 according to an embodiment of the present disclosure generates the context information 311, 312, and 313 based on the analysis target image or the feature image 302 so that objects can be identified and classified more accurately.
  • For example, by using the context information, the feature and context analyzer 23 may identify an object whose shape could equally be prostate cancer or a benign tumor as 'prostate cancer'.
  • In the above description, context information for the entire image, the quadrant regions, and the ninth regions is generated and utilized, but the size of the region from which context information is extracted is not limited thereto.
  • That is, context information for regions of sizes other than those described above may be generated and utilized. A sketch of such multi-scale context generation follows.
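```python
import numpy as np

# Editorial sketch, not from the disclosure: average-pool an (h, w) feature map
# into a grid x grid summary. grid=1 gives full-image context, grid=2 quadrants,
# grid=3 ninths, matching the scales of FIG. 3.
def grid_context(feature_map, grid):
    h, w = feature_map.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    return np.array([[feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                      for j in range(grid)] for i in range(grid)])

fmap = np.random.rand(18, 18)
contexts = {g: grid_context(fmap, g) for g in (1, 2, 3)}
```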
  • FIG. 4 is a diagram for explaining an embodiment of a convolutional neural network for generating a multi-channel feature map.
  • Image processing based on convolutional neural networks can be used in various fields.
  • For example, it can be used in image processing apparatuses for object recognition, image reconstruction, semantic segmentation, scene recognition, and the like.
  • The input image 410 may be processed through the convolutional neural network 400 to output feature map images.
  • The output feature map images may be utilized in the various fields described above.
  • The convolutional neural network 400 may process the input through a plurality of layers 420, 430, and 440, and each layer may output multi-channel feature map images 425 and 435.
  • The plurality of layers 420, 430, and 440 may extract features of the image by applying a filter of a predetermined size from the upper left to the lower right of the received data.
  • For example, the layers map the upper-left N×M pixels of the input data to one neuron in the upper-left corner of the feature map by multiplying them by weights.
  • The multiplied weights are likewise N×M.
  • N×M may be, for example, 3×3, but is not limited thereto.
  • The layers 420, 430, and 440 then scan the input data from left to right and from top to bottom in steps of k cells, multiplying by the weights and mapping the results to neurons of the feature map.
  • Here, k denotes the stride by which the filter is moved during convolution, and may be set appropriately to adjust the size of the output data.
  • For example, k may be 1.
  • The N×M weights are referred to as a filter or filter kernel. That is, the process of applying a filter in the layers 420, 430, and 440 is the process of performing a convolution operation with the filter kernel, and the extracted result is referred to as a "feature map" or a "feature map image". A layer on which a convolution operation is performed may be referred to as a convolutional layer.
  • The term multi-channel feature map means a set of feature maps corresponding to a plurality of channels and may be, for example, a plurality of image data.
  • The multi-channel feature maps may be the input of an arbitrary layer of a convolutional neural network, and may be the output of a feature map operation such as a convolution operation.
  • The multi-channel feature maps 425 and 435 are generated by the plurality of layers 420, 430, and 440, also referred to as "feature extraction layers" or "convolutional layers", of the convolutional neural network. Each layer may sequentially receive the multi-channel feature maps generated by the previous layer and generate the subsequent multi-channel feature maps as output.
  • Finally, the L-th layer 440 (L is an integer) may receive the multi-channel feature maps generated by the (L-1)-th layer (not shown) and generate multi-channel feature maps (not shown).
  • For example, the feature maps 425 with K1 channels are the output of the feature map operation 420 in layer 1 applied to the input image 410, and are also the input of the feature map operation 430 in layer 2.
  • Likewise, the feature maps 435 with K2 channels are the output of the feature map operation 430 in layer 2 applied to the input feature maps 425, and are also the input of the feature map operation in layer 3 (not shown).
  • The multi-channel feature maps 425 generated in the first layer 420 include feature maps corresponding to K1 channels (K1 is an integer).
  • The multi-channel feature maps 435 generated in the second layer 430 include feature maps corresponding to K2 channels (K2 is an integer).
  • K1 and K2, the numbers of channels, may correspond to the numbers of filter kernels used in the first layer 420 and the second layer 430, respectively. That is, the number of multi-channel feature maps generated in the M-th layer (M is an integer with 1 ≤ M ≤ L-1) is equal to the number of filter kernels used in the M-th layer. A sketch of this convolution operation follows.
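```python
import numpy as np

# Editorial sketch, not from the disclosure: naive 'valid' convolution showing
# that the number of output channels equals the number of filter kernels.
def conv2d(inputs, kernels, stride=1):
    """Convolve (c_in, h, w) inputs with (c_out, c_in, n, m) filter kernels."""
    c_out, c_in, n, m = kernels.shape
    _, h, w = inputs.shape
    oh, ow = (h - n) // stride + 1, (w - m) // stride + 1
    out = np.zeros((c_out, oh, ow))
    for k in range(c_out):
        for i in range(oh):
            for j in range(ow):
                patch = inputs[:, i * stride:i * stride + n, j * stride:j * stride + m]
                out[k, i, j] = np.sum(patch * kernels[k])
    return out

x = np.random.rand(3, 32, 32)    # a 3-channel input
k1 = np.random.rand(8, 3, 3, 3)  # K1 = 8 kernels -> 8-channel feature maps
print(conv2d(x, k1).shape)       # (8, 30, 30)
```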
  • FIG. 5 is a diagram for explaining an embodiment of a pooling technique.
  • In the example of FIG. 5, the pooling window size is 2×2, the stride is 2, and max pooling may be applied to the input image 510 to generate the output image 590.
  • As shown in FIG. 5(a), a 2×2 window is applied to the upper-left corner of the input image 510, a representative value (here, the maximum value, 4) is computed from the values in the window area, and that value is entered at the corresponding position 520 of the output image 590.
  • As shown in FIG. 5(b), the window is then moved by the stride, that is, by two, and the maximum value, 3, among the values in the window 530 area is entered at the corresponding position 540 of the output image 590.
  • This process is repeated starting one stride down from the left edge of the input image. That is, as shown in FIG. 5(c), the maximum value, 5, among the values in the window 550 area is entered at the corresponding position 560 of the output image 590.
  • As shown in FIG. 5(d), the window is again moved by the stride, and the maximum value, 2, among the values in the window 570 area is entered at the corresponding position 580 of the output image 590.
  • The above process is repeated until the window reaches the lower-right area of the input image 510, thereby generating the output image 590 to which pooling has been applied.
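  • The walkthrough above can be reproduced in a few lines (an editorial sketch; the input values are made up so that the quadrant maxima come out as 4, 3, 5, and 2, matching the figure):

```python
import numpy as np

x = np.array([[1, 4, 3, 1],
              [2, 0, 1, 2],
              [5, 1, 0, 2],
              [3, 2, 1, 0]])
# 2x2 windows with stride 2: split into blocks, then take each block's maximum.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[4 3]
               #  [5 2]]
```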
  • The deep learning-based model of the present disclosure may include at least one of a fully convolutional neural network, a convolutional neural network, a recurrent neural network, a restricted Boltzmann machine (RBM), and a deep belief network (DBN), but is not limited thereto.
  • Machine learning methods other than deep learning may also be included.
  • A hybrid model combining deep learning and machine learning may also be included. For example, features of an image may be extracted by applying a deep learning-based model, and a machine learning-based model may then be applied to classify or recognize the image based on the extracted features.
  • The machine learning-based model may include, but is not limited to, a support vector machine (SVM), AdaBoost, and the like.
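  • An editorial sketch of such a hybrid pipeline (the feature extractor is a stand-in for a trained CNN backbone; the data and labels are synthetic assumptions):

```python
import numpy as np
from sklearn.svm import SVC

def deep_features(images):
    """Stand-in for a deep learning feature extractor; in a real hybrid
    pipeline this would be the output of a trained CNN backbone."""
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

rng = np.random.default_rng(0)
images = rng.random((20, 8, 8))                      # synthetic image batch
feats = deep_features(images)
labels = (feats[:, 0] > np.median(feats[:, 0])).astype(int)  # toy labels

clf = SVC(kernel="rbf").fit(feats, labels)           # ML classifier on DL features
print(clf.predict(feats[:3]))
```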
  • The exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which the steps are performed; if necessary, steps may be performed simultaneously or in a different order.
  • To implement a method according to the present disclosure, other steps may be included in addition to the illustrated steps, some steps may be omitted, or additional steps may be added while others are omitted.
  • The various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • For implementation in hardware, one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like may be used.
  • The scope of the present disclosure also includes software or machine-executable instructions (e.g., an operating system, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium on which such software or instructions are stored and from which they can be executed on a device or computer.

Abstract

The present disclosure relates to an artificial intelligence-based method and system for analyzing user medical information. A user information analysis system according to an embodiment of the present disclosure may comprise: a user interface that receives, from a user, user information and a request to analyze the user information; an artificial intelligence model storage that stores artificial intelligence models; an artificial intelligence model interface that receives at least one artificial intelligence model from the artificial intelligence model storage in response to the analysis request and analyzes the user information; and a data space for storing the user information and the analysis result of the user information.
PCT/KR2020/017506 2019-12-31 2020-12-03 Artificial intelligence-based method and system for analyzing user medical information WO2021137454A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190179064A KR102160390B1 (ko) 2019-12-31 2019-12-31 Artificial intelligence-based method and system for analyzing user medical information
KR10-2019-0179064 2019-12-31

Publications (1)

Publication Number Publication Date
WO2021137454A1 (fr)

Family

Family ID: 72661095

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/017506 WO2021137454A1 (fr) 2019-12-31 2020-12-03 Artificial intelligence-based method and system for analyzing user medical information

Country Status (2)

Country Link
KR (1) KR102160390B1 (fr)
WO (1) WO2021137454A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102160390B1 (ko) * 2019-12-31 2020-09-29 (주)제이엘케이 Artificial intelligence-based method and system for analyzing user medical information
KR102410415B1 (ko) * 2021-06-23 2022-06-22 주식회사 셀타스퀘어 Method and apparatus for providing an intelligent pharmacovigilance platform
KR102589834B1 (ko) * 2021-12-28 2023-10-16 동의과학대학교 산학협력단 Dementia screening diffuser device
KR102656090B1 (ko) * 2021-12-29 2024-04-11 주식회사 커넥트시스템 System and method for integrated management and deployment of artificial intelligence models

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140126830A (ko) * 2013-04-23 2014-11-03 한국 한의학 연구원 Method and apparatus for discriminating cold syndrome and heat syndrome using body shape information
KR20160099804A (ko) * 2015-02-13 2016-08-23 주식회사 티플러스 Method and apparatus for providing a shared remote reading service
KR20170062839A (ko) * 2015-11-30 2017-06-08 임욱빈 System for diagnosing cell abnormality based on DNN learning, diagnosis management method thereof, and recording medium therefor
KR20180110310A (ko) * 2017-03-28 2018-10-10 한국전자통신연구원 Stroke prediction and analysis system and method
JP2019504402A (ja) * 2015-12-18 2019-02-14 コグノア, インコーポレイテッド Platform and system for digital personalized medicine
KR102160390B1 (ko) * 2019-12-31 2020-09-29 (주)제이엘케이 Artificial intelligence-based method and system for analyzing user medical information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102056847B1 (ko) * 2018-10-16 2019-12-17 김태희 Remote cervical cancer screening system based on automatic cervix reading and a clinical decision support system


Also Published As

Publication number Publication date
KR102160390B1 (ko) 2020-09-29

Similar Documents

Publication Publication Date Title
WO2021137454A1 Artificial intelligence-based method and system for analyzing user medical information
US10810735B2 (en) Method and apparatus for analyzing medical image
US10902588B2 (en) Anatomical segmentation identifying modes and viewpoints with deep learning across modalities
US11900647B2 (en) Image classification method, apparatus, and device, storage medium, and medical electronic device
US11276164B2 (en) Classifier trained with data of different granularity
WO2019143177A1 Method for reconstructing a series of slice images and apparatus using same
WO2022149894A1 Method for training an artificial neural network providing a determination result of a pathological specimen, and computing system for performing same
WO2019132590A1 Image transformation method and device
WO2020016736A1 Knockout autoencoder for detecting anomalies in biomedical images
Sivakumar et al. A novel method to detect bleeding frame and region in wireless capsule endoscopy video
CN110390674A Image processing method and apparatus, storage medium, device, and system
WO2019132588A1 Image analysis device and method based on image feature and context
WO2019189972A1 Method for analyzing iris images using artificial intelligence so as to diagnose dementia
WO2021049784A2 Method for generalizing the light intensity distribution of a medical image using a GAN
WO2019124836A1 Method for mapping a region of interest of a first medical image onto a second medical image, and device using same
WO2022197044A1 Method for diagnosing bladder lesions using a neural network, and system therefor
EP4049239A1 Multi-variable heat maps for computer-aided diagnostic models
WO2023095989A1 Method and device for analyzing multimodality medical images for brain disease diagnosis
Zhai et al. Coronary artery vascular segmentation on limited data via pseudo-precise label
WO2021002669A1 Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using the integrated lesion learning model
Pal et al. A fully connected reproducible SE-UResNet for multiorgan chest radiographs segmentation
Sham et al. Automatic reaction emotion estimation in a human–human dyadic setting using Deep Neural Networks
WO2020230972A1 Method for improving the reproduction performance of a trained deep neural network model and device using same
Li et al. Feature pyramid based attention for cervical image classification
Pushpa An efficient internet of things (iot)-enabled skin lesion detection model using hybrid feature extraction with extreme machine learning model

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20910823; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/12/2022))
122 Ep: PCT application non-entry in European phase (Ref document number: 20910823; Country of ref document: EP; Kind code of ref document: A1)