WO2024014702A1 - Method and device for diagnosing otitis media (Procédé et dispositif de diagnostic de l'otite moyenne) - Google Patents


Info

Publication number
WO2024014702A1
WO2024014702A1 (PCT/KR2023/007254)
Authority
WO
WIPO (PCT)
Prior art keywords
disease
otitis media
classifier
feature data
layers
Prior art date
Application number
PCT/KR2023/007254
Other languages
English (en)
Korean (ko)
Inventor
권지훈
안중호
채지혜
박근우
최연주
Original Assignee
재단법인 아산사회복지재단
울산대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020230002549A external-priority patent/KR20240009328A/ko
Application filed by 재단법인 아산사회복지재단, 울산대학교 산학협력단
Publication of WO2024014702A1 publication Critical patent/WO2024014702A1/fr

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
            • A61B 1/227 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection for ears, i.e. otoscopes
    • G PHYSICS
      • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H 30/00 ICT specially adapted for the handling or processing of medical images
            • G16H 30/40 ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing
          • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
            • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
            • G16H 50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for simulation or modelling of medical disorders

Definitions

  • Acute otitis media is so common that 80% of children experience it before the age of 3; it recurs frequently and often requires extensive antibiotic treatment.
  • Otitis media with effusion is a disease in which effusion accumulates in the middle ear behind the eardrum due to sequelae of acute otitis media or poor function of the middle ear ventilation (Eustachian) tube, and is known to be the most common cause of hearing loss in children. It is known that 80% of children suffer from otitis media with effusion at least once before the age of 10.
  • To diagnose otitis media in hospitals, an endoscope is generally used to obtain images of the eardrum through the external auditory canal. Such endoscopes are used in various departments, such as pediatrics and family medicine, and are often available in private clinics. Recently, endoscopes in the form of portable devices that connect to personal communication devices (handsets or tablets) have been developed, increasing opportunities to obtain images of the eardrum.
  • In order to accurately diagnose middle ear diseases, a method is needed to classify middle ear diseases into diseases that can co-exist and diseases that cannot co-exist.
  • An electronic device according to an embodiment may include: a memory that stores a learned otitis media diagnosis model, which includes a shared layer including at least one convolution operation and a plurality of classifier layers connected to the shared layer, as well as computer-executable instructions; a processor that accesses the memory and executes the instructions; a display electrically connected to the processor; and an image acquisition unit for receiving an otoendoscopic image of a patient. The instructions are configured to receive the otoendoscopic image of the patient and select a region of interest extracted from the received otoendoscopic image.
  • the processor may receive a video sequence of the patient's tympanic membrane and acquire, from the video sequence, a plurality of otoscope images corresponding to a number of frames determined by the user.
  • the processor may remove the patient's personally identifiable information from the otoscope image, extract a region of interest having a predetermined shape from the otoscope image, and place the extracted region of interest at the center of a two-dimensional otoscope image.
  • the processor may obtain probabilities for primary diseases belonging to the primary class from the extracted feature data based on the first classifier layer, and may output the disease with the highest probability among the probabilities for the primary diseases to the user as the disease prediction result.
  • the processor may obtain a probability for each secondary disease belonging to the secondary class from the extracted feature data based on each of the second classifier layers, and may output the disease occurrence result based on that probability to the user as a single disease prediction result for each of the second classifier layers.
  • the processor may set, as the target result, the probability of the disease with the highest probability among the primary-class diseases excluding the disease corresponding to the disease prediction result. When the difference between the probability of the disease prediction result and the probability of the target result is less than a threshold determined by the user, the processor may provide the user with an output suggesting that otitis media diagnosis be retried using an otoscope image different from the current one.
  • the processor may select the second classifier layers related to the secondary-class diseases reported as having occurred among the disease occurrence results, and, when at least one of the secondary-disease probabilities for the selected layers is less than a threshold determined by the user, provide the user with an output suggesting that otitis media diagnosis be retried using a different otoscope image.
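The retry-suggestion logic above amounts to a top-1 versus runner-up margin check. A minimal sketch follows; the function name and the default threshold are illustrative, not taken from the source.

```python
def suggest_retry(probs, threshold=0.2):
    """Suggest retrying diagnosis with a different otoscope image when the
    margin between the predicted disease and the runner-up (the "target
    result") falls below a user-determined threshold.

    probs: mapping of primary-class disease name -> predicted probability.
    threshold: user-chosen margin (0.2 is a hypothetical default).
    """
    ranked = sorted(probs.values(), reverse=True)
    margin = ranked[0] - ranked[1]  # disease prediction result vs. target result
    return margin < threshold
```

For example, probabilities of 0.48 for OME and 0.45 for COM leave a margin of only 0.03, so a retry with a new image would be suggested.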
  • the processor may extract feature data based on skipping at least some of the connections between nodes of the shared layers of the learned otitis media diagnosis model.
  • the processor may extract first feature data based on skipping a selected first connection among the connections between the nodes of the shared layers, extract second feature data based on skipping a second connection different from the first connection, and, by repeatedly changing which connections between the nodes are skipped, extract a plurality of feature data including the first feature data and the second feature data.
  • the processor may obtain, for each of the plurality of feature data, probabilities for diseases belonging to the primary class based on the first classifier layer; convert the probabilities obtained from the first classifier layer into binary disease results based on a predetermined threshold; obtain, for each disease belonging to the primary class, a first statistical result representing the average of the plurality of binary disease results; and output the disease with the highest first statistical result among the diseases belonging to the primary class to the user as the disease prediction result.
  • the processor may apply the plurality of feature data to each of the second classifier layers to obtain, for each feature data, probabilities for diseases belonging to the secondary class; for each of the second classifier layers, the probability corresponding to each of the plurality of feature data may be converted into a single disease binary result based on a predetermined threshold, and a second statistical result representing the average of those binary results may be obtained for each of the second classifier layers.
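The binarize-then-average procedure in the preceding bullets can be sketched as follows. Each probability vector stands in for one forward pass over a different connection-skip variant of the feature data; all names are illustrative.

```python
def statistical_result(prob_runs, diseases, threshold=0.5):
    """Average binarized per-run probabilities into a per-disease statistic.

    prob_runs: one probability vector per connection-skip pattern, i.e. one
    per forward pass over a different feature-data variant.
    Returns the disease with the highest statistical result and the
    per-disease averages.
    """
    stats = {}
    for i, disease in enumerate(diseases):
        # 1 if the run's probability clears the threshold, else 0
        binaries = [1 if run[i] >= threshold else 0 for run in prob_runs]
        stats[disease] = sum(binaries) / len(binaries)
    predicted = max(stats, key=stats.get)
    return predicted, stats
```

With three runs giving OME probabilities 0.7, 0.6, and 0.4, the OME statistic is 2/3, which here outranks the other diseases.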
  • FIG. 1 is a diagram illustrating an electronic device for diagnosing otitis media according to an embodiment.
  • Figure 2 is a flow chart illustrating a method for diagnosing otitis media according to an embodiment.
  • FIG. 3 is a diagram illustrating a disease belonging to a primary class and a disease belonging to a secondary class according to an embodiment.
  • Figure 4 is a diagram illustrating an otitis media diagnosis model for diagnosing otitis media from an otoscope image according to an embodiment.
  • Figure 5 is a diagram illustrating prediction results obtained from an otitis media diagnosis model according to an embodiment.
  • Figures 6A to 6C are diagrams showing prediction results obtained from classifier layers of an otitis media diagnosis model according to an embodiment.
  • FIG. 7 is a diagram illustrating a method of obtaining a prediction result from a plurality of feature data according to an embodiment.
  • Figure 8 is a diagram showing McNemar test results of an otitis media diagnosis model according to an embodiment.
  • Figure 9 is a diagram showing a confusion matrix of an otitis media diagnosis model according to an embodiment.
  • Figure 10 is a diagram showing ROC curves for the primary class and secondary class according to one embodiment.
  • first or second may be used to describe various components, but these terms should be interpreted only for the purpose of distinguishing one component from another component.
  • a first component may be named a second component, and similarly, the second component may also be named a first component.
  • Each of the phrases "A or B", "at least one of A and B", "at least one of A or B", "A, B, or C", "at least one of A, B, and C", and "at least one of A, B, or C" may include any one of the items listed together in the corresponding phrase, or any possible combination thereof.
  • FIG. 1 is a diagram illustrating an electronic device for diagnosing otitis media according to an embodiment.
  • the electronic device 100 may apply an otoendoscopic image of the patient 170 to the otitis media diagnosis model 130 and output a plurality of disease prediction results for otitis media diseases.
  • the electronic device 100 may include a processor 110, a memory 120, an image acquisition unit 140, and a display 150.
  • the processor 110 may receive an otoscope image of the patient 170 from the otoscope device 180.
  • the processor 110 may generate an input otoscope image based on a region of interest extracted from the received otoscope image.
  • the otoscope image may be an image of the tympanic membrane of the patient 170 captured by the endoscope camera 160 of the otoscope device 180.
  • the processor 110 may apply the input otoscope image to the otitis media diagnosis model 130 to obtain a plurality of disease prediction results for otitis media diseases.
  • Processor 110 may execute software and control at least one other component (e.g., hardware or software component) connected to processor 110.
  • the processor 110 may also perform various data processing or operations.
  • the processor 110 may store the otoscope image received from the otoscope device 180 by the image acquisition unit 140 in the memory 120.
  • the processor 110 may output a plurality of disease prediction results for an otoscope image through the display 150 as result data using an otitis media diagnosis method described later.
  • Memory 120 may temporarily and/or permanently store various data and/or information required to perform otitis media diagnosis.
  • the memory 120 may store at least one of an otoscope image, computer-executable instructions, or an otitis media diagnosis model 130.
  • the otitis media diagnosis model 130 may be a learned machine learning model that outputs prediction results regarding otitis media disease from images or videos. A description of the otitis media diagnosis model will be provided later in Figure 4.
  • the image acquisition unit 140 may receive an otoscope image of the patient 170 from the otoscope device 180.
  • herein, an example in which the otoscope image corresponds to at least one frame of a video sequence in which the eardrum of the patient 170 is photographed will mainly be described.
  • the electronic device 100 may receive a video sequence.
  • the electronic device 100 may acquire a plurality of otoscope images from the received video sequence, corresponding to the number of frames determined by the user.
  • the electronic device 100 may apply each of the plurality of acquired otoscope images to the otitis media diagnosis model 130.
  • the display 150 may visually provide a user (eg, a medical professional) with a plurality of disease prediction results for otitis media diseases of the patient 170.
  • the display 150 may visually output at least one of an otoscope image, an input otoscope image, a disease prediction result, a single disease prediction result, or a notification suggesting retry of otitis media diagnosis.
  • the display 150 may include a touch sensor configured to detect a touch, or a pressure sensor configured to measure the intensity of force generated by the touch.
  • the display 150 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling the device.
  • Figure 2 is a flow chart illustrating a method for diagnosing otitis media according to an embodiment.
  • an electronic device (e.g., the electronic device 100 of FIG. 1) may receive an otoendoscopic image of a patient (e.g., the patient 170 of FIG. 1) from an otoscope device (e.g., the otoscope device 180 of FIG. 1).
  • the otoscope device may include a CCD camera, a CMOS camera, etc. used in general endoscopes, but is not particularly limited thereto.
  • Alternatively, the otoscope image may be a medical image of the patient collected by a capsule endoscope, ultrasound, or any other medical imaging system known in the art, converted into a form similar to an otoscope image.
  • the electronic device may generate an input otoscope image based on the region of interest extracted from the received otoscope image.
  • a method of generating an input otoscope image will be described later with reference to Figure 4.
  • the electronic device may extract feature data based on the shared layer from the input otoscope image.
  • the feature data may include abstracted values extracted by applying the input otoscope image to the shared layer of the otitis media diagnosis model.
  • a shared layer may include multiple convolutional layers.
  • the convolutional layer may be used to extract a plurality of feature maps from input data (eg, an input otoscope image) using a plurality of convolutional filters.
  • a plurality of feature maps extracted from the shared layer may be feature data.
  • the electronic device may apply feature data to a plurality of classifier layers and output a plurality of disease prediction results.
  • the electronic device may output disease prediction results for diseases belonging to the primary class based on the first classifier layer from the extracted feature data.
  • the first classifier layer may include layers that output prediction results for diseases belonging to the primary class from feature data extracted from the shared layer. A detailed description of the first classifier layer is described later in FIG. 4.
  • Diseases belonging to the primary class may include at least one of otitis media with effusion (OME), chronic otitis media (COM), congenital cholesteatoma, or absence of disease. A description of the diseases belonging to the primary class will be provided later in FIG. 3.
  • the disease prediction result may be a disease with the highest probability among diseases belonging to the primary class.
  • the electronic device may obtain probabilities regarding diseases belonging to the primary class from extracted feature data based on the first classifier layer.
  • the electronic device may obtain at least one of a probability for otitis media with effusion, a probability for chronic otitis media, or a probability for the absence of a disease based on the first classifier layer from the feature data.
  • the electronic device may output to the user the disease with the highest probability among the probabilities of primary diseases as a disease prediction result.
  • the electronic device may output to the user the disease with the highest probability among the probability of otitis media with effusion, the probability of chronic otitis media, or the probability of the absence of the disease as a disease prediction result.
  • the electronic device may output to the user a set of probabilities including at least one of the probability of otitis media with effusion, the probability of chronic otitis media, or the probability of the absence of the disease as a disease prediction result.
  • the disease prediction result is not limited to the probability of diseases belonging to the primary class, but may be a statistical result for each disease belonging to the primary class. A method of obtaining statistical results for diseases belonging to the primary class is described later in FIG. 7.
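Because the primary-class diseases are treated as mutually exclusive, the first classifier layer can plausibly end in a softmax whose highest-probability class becomes the disease prediction result. A minimal sketch under that assumption:

```python
import math

def primary_prediction(logits, diseases):
    """Softmax over mutually exclusive primary-class logits, then argmax.

    logits: raw scores from the first classifier layer, one per primary-class
    disease (e.g., OME, COM, congenital cholesteatoma, no disease).
    """
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = {d: e / total for d, e in zip(diseases, exps)}
    predicted = max(probs, key=probs.get)  # highest probability wins
    return predicted, probs
```

The returned probability set can also be shown to the user alongside the predicted disease, as the bullets above describe.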
  • the electronic device may individually output a single disease prediction result for each disease of diseases belonging to the secondary class, based on the first classifier layer and the plurality of second classifier layers separated.
  • the second classifier layers may include layers that individually output prediction results for each disease belonging to the secondary class from feature data extracted from the shared layer.
  • Each of the plurality of second classifier layers may output a single disease prediction result for at least one disease among diseases belonging to the secondary class. A detailed description of the second classifier layers is described later in FIG. 4.
  • Diseases belonging to the secondary class may include at least one of the following: attic cholesteatoma, myringitis, otomycosis, tympanosclerotic plaque, or ventilating tube. A description of diseases belonging to the secondary class will be provided later in FIG. 3.
  • the single disease prediction result may be the disease occurrence result for each of the diseases belonging to the secondary class.
  • a disease occurrence result may be a result indicating whether a disease has occurred.
  • the electronic device may obtain a probability regarding a secondary disease belonging to the secondary class from feature data based on each of the plurality of second classifier layers.
  • the electronic device may separately obtain the probability for attic cholesteatoma, the probability for myringitis, the probability for otomycosis, the probability for tympanosclerotic plaque, or the probability for a ventilating tube, each from a corresponding one of the second classifier layers.
  • the electronic device may determine the disease occurrence result for each of the diseases belonging to the secondary class.
  • the electronic device may output the disease occurrence results for each of the diseases belonging to the secondary class to the user based on the determined disease occurrence results. That is, the electronic device may output a disease occurrence result based on the probability of a secondary disease for each of the second classifier layers to the user as a single disease prediction result for each of the second classifier layers.
  • the single disease prediction result is not limited to the disease occurrence result based on the probability for each disease belonging to the secondary class, but may be a statistical result for each disease belonging to the secondary class. A method of obtaining statistical results for diseases belonging to the secondary class will be described later with reference to FIG. 7.
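Since secondary-class diseases can co-exist, each second classifier layer can be treated as an independent binary head. A sigmoid-and-threshold sketch (disease names and the 0.5 threshold are illustrative):

```python
import math

def secondary_predictions(head_logits, threshold=0.5):
    """One independent binary decision per secondary-class disease.

    head_logits: mapping of disease name -> logit from its own second
    classifier layer. Returns a per-disease occurrence result, so several
    co-existing diseases can be reported from a single image.
    """
    results = {}
    for disease, z in head_logits.items():
        p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
        results[disease] = p >= threshold  # single disease prediction result
    return results
```

Unlike the primary class, no argmax is taken here: every head reports independently, which is what allows simultaneous diseases to appear in the output.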
  • FIG. 3 is a diagram illustrating a disease belonging to a primary class and a disease belonging to a secondary class according to an embodiment.
  • the primary class 310 refers to diseases that are unlikely to exist and/or develop at the same time among otitis media-related diseases.
  • the primary class 310 may include diseases that cannot occur together during a certain time period.
  • the possibility of simultaneous existence may be determined by at least one of the user's judgment or statistical results of past medical records.
  • the user's judgment may be an example of a specific disease being determined as a disease belonging to the primary class by a medical professional or medical personnel.
  • the statistical result may be a result indicating the possibility that certain diseases may exist simultaneously in multiple otoscope images or multiple medical records.
  • an otoscope image taken of a patient's eardrum may have a low possibility of simultaneously containing otitis media with effusion and chronic otitis media.
  • the secondary class 320 may include diseases related to otitis media that can occur together over a certain period of time.
  • diseases belonging to the secondary class 320 may be diseases that are likely to co-exist in the patient's eardrum.
  • an otoscope image taken of a patient's eardrum may have a high possibility of simultaneously containing epitympanic cholesteatoma and myringitis.
  • Figure 4 is a diagram illustrating an otitis media diagnosis model for diagnosing otitis media from an otoscope image according to an embodiment.
  • An electronic device may apply the otoscope image 410 to the learned otitis media diagnosis model 440. Specifically, the electronic device may remove the patient's personally identifiable information from the otoscope image 410. The electronic device may extract a region of interest 415 having a predetermined shape from the otoscope image 410. The electronic device may place the extracted region of interest 415 at the center of the two-dimensional otoscope image. The electronic device may generate the input otoscope image 430 by placing the region of interest 415 at the center of the otoscope image.
  • the electronic device may apply the input otoscope image 430 generated based on the region of interest 415 extracted from the otoscope image 410 to the learned otitis media diagnosis model 440.
  • the electronic device may obtain a plurality of disease prediction results by applying the input otoscope image 430 to the learned otitis media diagnosis model 440.
  • the electronic device may obtain a plurality of disease prediction results based on feeding forward the input otoscope image 430 to the learned otitis media diagnosis model 440.
  • the otitis media diagnosis model 440 may include a neural network.
  • a neural network includes layers, and each layer can include nodes.
  • a node may have a node value determined based on an activation function.
  • a node in an arbitrary layer may be connected to a node in another layer (e.g., another node) through a link (e.g., a connection edge) with a connection weight.
  • a node's node value can be propagated to other nodes through links.
  • node values may be forward propagated from the previous layer to the next layer.
  • the forward propagation operation may represent an operation that propagates node values based on input data in the direction from the input side of the shared layer 442 toward the classifier layer.
  • the node value of that node can be propagated (e.g., forward propagation) to the node of the next layer (e.g., next node) connected to the node through a connection line.
  • a node may receive a value weighted by a connection weight from a previous node (eg, multiple nodes) connected through a connection line.
  • the node value of a node may be determined based on applying an activation function to the sum of weighted values received from previous nodes (e.g., a weighted sum).
  • Parameters of the neural network may exemplarily include the connection weights described above.
  • the parameters of the neural network may be updated so that the objective function value, which will be described later, changes in the targeted direction (e.g., the direction in which loss is minimized).
  • the electronic device may extract feature data based on the shared layer 442 by applying the input otoscope image 430 to the learned otitis media diagnosis model 440.
  • the electronic device may output a plurality of disease prediction results based on the classifier layers by applying the extracted feature data to the classifier layers.
  • the input otoscope image 430 has the region of interest placed at its center and may be an RGB image reformatted to 256 × 256 × 3.
  • the area of interest 415 may be an area where only a certain area (eg, a circle) including the patient's eardrum is selected in the otoscope image.
  • the electronic device may estimate the area corresponding to the patient's eardrum and extract it as the region of interest 415.
  • the electronic device may place the region of interest 415 at the center of the image.
  • the electronic device may generate an input otoscope image 430, which is an image in which the region of interest is located at the center.
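The ROI-centering step can be sketched as pasting the extracted region onto the center of a fixed-size canvas. For brevity the sketch works on a single channel with nested lists; the patent's actual input is a 256 × 256 × 3 RGB image, and the helper name is an assumption.

```python
def center_roi(image, roi, size=256):
    """Place an extracted region of interest at the center of a blank canvas.

    image: H x W pixel grid (list of lists).
    roi: (top, left, height, width) of the region containing the eardrum.
    """
    top, left, h, w = roi
    canvas = [[0] * size for _ in range(size)]
    off_r, off_c = (size - h) // 2, (size - w) // 2
    for r in range(h):
        for c in range(w):
            canvas[off_r + r][off_c + c] = image[top + r][left + c]
    return canvas
```

The resulting canvas corresponds to the input otoscope image 430 that is fed to the diagnosis model.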
  • the learned otitis media diagnosis model 440 may represent a model learned through machine learning, and in detail, may be a learned machine learning model that outputs a prediction result regarding otitis media disease from an image or video.
  • the learned otitis media diagnosis model 440 may output a plurality of disease prediction results from the input otoscope image 430.
  • a machine learning model (e.g., the learned otitis media diagnosis model 440) may be created through machine learning. Learning algorithms may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited thereto.
  • a machine learning model may include multiple artificial neural network layers.
  • the learned otitis media diagnosis model 440 may include a shared layer 442 including at least one convolution operation and a plurality of classifier layers (e.g., task-specific layers) connected to the shared layer 442.
  • Artificial neural networks may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or a deep Q-network, or a combination of two or more of the above, but are not limited to the examples described above.
  • this specification mainly describes an example in which the shared layer 442 of the otitis media diagnosis model 440 is a convolutional neural network (CNN) including at least one convolution operation.
  • the otitis media diagnosis model 440 may be an EfficientNet B-4 model.
  • the above-described machine learning model may be trained based on training data including pairs of a training input (e.g., an otoscope image of a patient for learning, such as the patient 170 in FIG. 1) and a training output mapped to the training input (e.g., a plurality of disease prediction results for learning). For example, the machine learning model can be trained to output the training output from the training input. While being trained, the machine learning model may produce temporary outputs in response to training inputs and may be trained so that the loss between the temporary outputs and the training outputs (e.g., the targets of training) is minimized.
  • parameters of the machine learning model may be updated according to the loss.
  • This learning may be performed, for example, in the electronic device itself on which the machine learning model is performed, or may be performed through a separate server.
  • the machine learning model on which training has been completed (eg, the learned otitis media diagnosis model 440) may be stored in a memory (eg, the memory 120 of FIG. 1).
  • the electronic device may use Equation 1 as an objective function for learning the otitis media diagnosis model.
  • in Equation 1, y_c represents the ground truth of class c of the training input, ŷ_c represents the temporary output (e.g., output probability) for class c, and L = -Σ_c y_c log(ŷ_c) represents the categorical cross-entropy loss.
  • the electronic device may set the above-described Equation 1 as an objective function for learning the machine learning model, and use the machine learning model learned through the above-described objective function as the learned otitis media diagnosis model 440.
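The exact notation of Equation 1 is not legible in the source, so the standard categorical cross-entropy definition is assumed here; a direct implementation of that form:

```python
import math

def categorical_cross_entropy(y_true, y_pred):
    """L = -sum_c y_c * log(p_c) over the primary-class diseases.

    y_true: one-hot ground truth for the training input.
    y_pred: the model's temporary output probabilities.
    """
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)
```

During training, model parameters would be updated in the direction that minimizes this loss, as the surrounding text describes.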
  • the shared layer 442 may include a plurality of convolutional layers serving as an input layer and hidden layers.
  • in the convolution layers, the electronic device may use at least one convolution filter among a first convolutional filter (e.g., 'Conv 3×3' in FIG. 4) and a second convolutional filter (e.g., 'MBConv 3×3' in FIG. 4).
  • the convolution layer may be a layer that performs a convolution operation between input data (eg, input otoscope image 430) and a convolution filter.
  • the first convolutional filter and the second convolutional filter may represent learned connection weights.
  • the electronic device may extract feature data based on a convolution operation between the input otoscope image 430 and the learned connection weights in the shared layer 442.
  • the classifier layers may include a first classifier layer 444 that outputs disease prediction results for diseases belonging to the primary class, and second classifier layers 446 that individually output single disease prediction results for diseases belonging to the secondary class. Each of the classifier layers may include separately learned connection weights (e.g., connection weights between nodes/layers within each classifier layer). The electronic device may obtain a plurality of disease prediction results based on operations between the extracted feature data and the learned connection weights of the classifier layers.
  • the first classifier layer 444 may include separate layers from the second classifier layers 446.
  • the electronic device may output a disease prediction result from the first classifier layer 444 through calculation between the feature data and the learned connection weight of the first classifier layer 444.
  • the second classifier layers 446 may include at least one layer separate from the first classifier layer.
  • the electronic device may output a single disease prediction result through calculation between the feature data and the learned connection weights of each layer of the second classifier layers 446.
  • the electronic device may obtain a plurality of disease prediction results by applying one input otoscope image 430 to one learned otitis media diagnosis model 440.
  • the electronic device may apply one feature data to a plurality of classifier layers and output a plurality of otitis media-related disease prediction results.
  • the electronic device can provide high accuracy and convenience to the user compared to a comparison target model (eg, an independent otitis media diagnosis model).
  • the electronic device may simultaneously provide a plurality of disease prediction results to the user by applying one input otoscope image 430 to one learned otitis media diagnosis model 440.
  • a user can simultaneously obtain a plurality of disease prediction results by applying one input otoscope image 430 to the learned otitis media diagnosis model 440 once.
  • the learned otitis media diagnosis model 440 can simultaneously output a plurality of disease prediction results through a plurality of classifier layers.
  • the learned otitis media diagnosis model 440 may have superior performance compared to the model being compared. For example, the learned otitis media diagnosis model 440 may output a result closer to the correct answer than the comparison target model for the same input otoscope image.
  • the accuracy of the learned otitis media diagnosis model 440 will be described later with reference to FIGS. 8 to 10.
  • Figure 5 is a diagram illustrating prediction results obtained from an otitis media diagnosis model according to an embodiment.
  • An electronic device (e.g., the electronic device 100 of FIG. 1) according to an embodiment may output a plurality of disease prediction results through a learned otitis media diagnosis model (e.g., the learned otitis media diagnosis model 440 of FIG. 4).
  • the electronic device may output prediction results for one otitis media disease or multiple otitis media diseases.
  • the electronic device may output first results 510 to third results 530 for one otitis media disease. For example, if the correct answer for the first result 510 is normal, the electronic device may output, from the first classifier layer of the otitis media diagnosis model, a 'None' result (e.g., the disease prediction result of the first result 510) indicating the absence of disease among the diseases belonging to the primary class. Additionally, the electronic device may output a 'False' result (e.g., a single disease prediction result of the first result 510) for each disease belonging to the secondary class in the second classifier layers of the otitis media diagnosis model.
  • For example, if the correct answer for the second result 520 is otomycosis, the electronic device may output a 'None' result indicating the absence of disease among the diseases belonging to the primary class from the first classifier layer of the otitis media diagnosis model. Additionally, the electronic device may output a 'True' result from the second classifier layer corresponding to otomycosis among the second classifier layers of the otitis media diagnosis model. For example, if the correct answer for the third result 530 is chronic otitis media (COM), the electronic device may output a 'COM' result indicating chronic otitis media among the diseases belonging to the primary class from the first classifier layer of the otitis media diagnosis model. Additionally, the electronic device may output a 'False' result for each of the diseases belonging to the secondary class in the second classifier layers of the otitis media diagnosis model.
  • the electronic device may output fourth results 540 and fifth results 550 for a plurality of otitis media diseases. For example, if the correct answer for the fourth result 540 is otitis media with effusion (OME) and myringitis, the electronic device may output, from the first classifier layer of the otitis media diagnosis model, the 'OME' result indicating otitis media with effusion among the diseases belonging to the primary class.
  • the electronic device may output a 'True' result from the second classifier layer corresponding to myringitis among the second classifier layers of the otitis media diagnosis model.
  • the electronic device may output a 'None' result indicating the absence of disease among the diseases belonging to the primary class from the first classifier layer of the otitis media diagnosis model.
  • the electronic device may output a 'True' result from the second classifier layer corresponding to myringitis and a 'True' result from the second classifier layer corresponding to the ventilation tube.
  • Figures 6A to 6C are diagrams showing prediction results obtained from classifier layers of an otitis media diagnosis model according to an embodiment.
  • An electronic device may output a disease prediction result 650a for the input otoscope image 610a of the patient 664a to the user 662a.
  • the electronic device may apply the input otoscope image 610a to the otitis media diagnosis model to obtain a first probability result 640a for diseases belonging to the primary class.
  • the electronic device may extract feature data by applying the input otoscope image 610a to the shared layer 620a, and may obtain the first probability result 640a by applying the extracted feature data to the first classifier layer 630a among the classifier layers.
  • the electronic device may output the disease with the highest probability among the first probability results 640a (for example, chronic otitis media (COM) in FIG. 6A) as the disease prediction result 650a.
  • the electronic device may output the disease prediction result 650a to the user 662a through the display 660a.
  • However, output is not limited to a single image; the electronic device may output a disease prediction result to the user 662a for a plurality of input otoscope images of the patient 664a.
  • the electronic device may apply each of a plurality of input otoscope images to an otitis media diagnosis model to obtain probabilities for each of the diseases belonging to the primary class.
  • the electronic device may obtain a first probability result for a plurality of input otoscope images based on a plurality of probabilities obtained for each of the diseases belonging to the primary class. Specifically, the electronic device may calculate at least one of the average, median, or mode of the plurality of probabilities obtained for each of the diseases belonging to the primary class as the probability for each of the diseases belonging to the primary class. The electronic device may output a disease with the highest probability among first probability results for a plurality of input otoscope images as a disease prediction result.
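The multi-image aggregation just described (combine the per-image primary-class probabilities with a statistic, then take the highest-probability disease) might look like the sketch below. The class labels and the `aggregate_primary` helper are hypothetical, and the mode statistic mentioned in the text is omitted for brevity.

```python
import numpy as np

CLASSES = ["COM", "OME", "None"]  # illustrative primary-class labels

def aggregate_primary(probs_per_image, how="mean"):
    """Combine per-image primary-class probabilities (n_images x n_classes)
    and return the highest-probability disease plus the combined vector."""
    probs = np.asarray(probs_per_image, dtype=float)
    combined = probs.mean(axis=0) if how == "mean" else np.median(probs, axis=0)
    return CLASSES[int(np.argmax(combined))], combined

probs = [[0.7, 0.2, 0.1],   # one row of class probabilities per otoscope image
         [0.5, 0.4, 0.1],
         [0.6, 0.1, 0.3]]
label, combined = aggregate_primary(probs)
```

With either the mean or the median, the combined evidence here points to the first class, even though no single image is decisive on its own.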
  • the electronic device may output single disease prediction results 650b for the input otoscope image 610b of the patient 664b to the user 662b.
  • the electronic device may apply the input otoscope image 610b to the otitis media diagnosis model to obtain second probability results 640b for diseases belonging to the secondary class.
  • the electronic device may extract feature data by applying the input otoscope image 610b to the shared layer 620b, and may obtain the second probability results 640b by applying the extracted feature data to the second classifier layers 630b among the classifier layers.
  • If the probability for a disease belonging to the secondary class is greater than or equal to a predetermined threshold (for example, in FIG. 6B, a probability of 50% is set as the threshold), the electronic device may determine that the disease occurred, and may output the single disease prediction results 650b based on whether each of the second probability results 640b meets the threshold.
  • the electronic device may output a 'True' result for Attic Cholesteatoma because the probability of Attic Cholesteatoma is 80%.
  • a 'True' result for epitympanic cholesteatoma may indicate that the patient 664b has developed epitympanic cholesteatoma disease.
  • the electronic device may output a 'False' result for otomycosis because the probability of otomycosis is 45%.
  • a 'False' result for otomycosis may indicate that the patient 664b has not developed otomycosis.
  • the electronic device may output single disease prediction results 650b to the user 662b through the display 660b.
  • However, output is not limited to a single image; the electronic device may output single disease prediction results to the user 662b for a plurality of input otoscope images of the patient 664b.
  • the electronic device may apply each of a plurality of input otoscope images to an otitis media diagnosis model to obtain probabilities for each of the diseases belonging to the secondary class.
  • the electronic device may obtain second probability results for a plurality of input otoscope images based on the plurality of probabilities obtained for each of the diseases belonging to the secondary class.
  • the electronic device may calculate at least one of the average, median, or mode of the plurality of probabilities obtained for each of the diseases belonging to the secondary class as the probability for each of the diseases belonging to the secondary class.
  • the electronic device may output single disease prediction results based on whether the second probability results for the plurality of input otoscope images are greater than or equal to a predetermined threshold.
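The thresholding step for the secondary class can be sketched as follows, using the 50% threshold and the 80%/45% probabilities from the FIG. 6B discussion; the helper name is illustrative.

```python
def single_disease_results(probabilities, threshold=0.5):
    """Map each secondary-class probability to a True/False occurrence result."""
    return {disease: prob >= threshold for disease, prob in probabilities.items()}

# Probabilities from the FIG. 6B discussion: 80% -> 'True', 45% -> 'False'.
results = single_disease_results({"Attic Cholesteatoma": 0.80, "Otomycosis": 0.45})
```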
  • the electronic device may output a disease prediction result 650c and single disease prediction results 652c for the input otoscope image 610c.
  • the electronic device may apply the input otoscope image 610c to the otitis media diagnosis model 620c and output a disease prediction result 650c and single disease prediction results 652c.
  • the electronic device may extract feature data by applying the input otoscope image 610c to the shared layer 625c.
  • the electronic device may simultaneously apply the extracted feature data to the first classifier layer 630c and the second classifier layers 632c.
  • the electronic device may obtain a first probability result 640c by applying the extracted feature data to the first classifier layer 630c.
  • the electronic device may acquire the first probability result 640c and simultaneously obtain second probability results 642c by applying the extracted feature data to the second classifier layers 632c.
  • the electronic device may output the disease with the highest probability among the first probability results 640c as the disease prediction result 650c. If the probability for each of the diseases belonging to the secondary class is greater than or equal to a threshold predetermined by the user, the electronic device may determine the disease occurrence result for each of the diseases belonging to the secondary class. The electronic device may output the single disease prediction results 652c based on whether the second probability results 642c are equal to or greater than the predetermined threshold.
  • the electronic device may apply one input otoscope image 610c to the otitis media diagnosis model 620c and output the disease prediction result 650c and the single disease prediction results 652c to a user (e.g., a medical professional).
  • FIG. 7 is a diagram illustrating a method of obtaining a prediction result from a plurality of feature data according to an embodiment.
  • An electronic device may extract a plurality of feature data from the input otoscope image 710 .
  • the electronic device may extract feature data by performing a forward propagation operation of the otitis media diagnosis model while skipping at least some of the connections between nodes of some of the shared layers.
  • the electronic device may extract the first feature data 712 by skipping the first connection among the shared layers of the otitis media diagnosis model 720.
  • the first connection may be a connection excluded from the first forward propagation operation among connections between nodes of the shared layer.
  • the first feature data 712 may include abstracted values extracted by applying the input otoscope image 710 to the shared layer in which the first connection was skipped.
  • the electronic device may extract the second feature data 714 from the shared layer of the otitis media diagnosis model 730 by skipping the second connection that is different from the first connection.
  • the second connection may be a connection excluded from the second forward propagation operation among connections between nodes of the shared layer. Connections between nodes that are skipped or excluded in the first forward propagation operation and the second forward propagation operation may vary.
  • the second feature data 714 may include abstracted values extracted by applying the input otoscope image 710 to the shared layer in which the second connection is skipped. In other words, the electronic device can extract a plurality of different feature data from one input otoscope image 710 by skipping any connection in the shared layer.
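Skipping different random subsets of shared-layer connections on successive forward passes (a dropout-style scheme), so that one image yields distinct feature data, could be sketched like this. The single linear layer and its sizes are stand-ins for the learned shared layers, not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(16, 8))  # linear stand-in for the learned shared-layer weights

def extract_features(x, skip_rate=0.5, seed=None):
    """One forward propagation in which a random subset of node-to-node
    connections is skipped, so repeated passes over the same image
    yield different feature data."""
    local = np.random.default_rng(seed)
    mask = local.random(W.shape) >= skip_rate  # True = connection kept
    return x @ (W * mask)

x = rng.normal(size=(1, 16))          # stands in for one input otoscope image
feat1 = extract_features(x, seed=1)   # first pass: first set of skipped connections
feat2 = extract_features(x, seed=2)   # second pass: a different skipped set
```

Because the skipped connections differ between the two passes, the two feature vectors differ, which is exactly what makes the downstream binary results worth averaging.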
  • the electronic device may apply the first feature data 712 to the first classifier layer to obtain the first disease prediction probability 722.
  • the first disease prediction probability 722 may represent occurrence probabilities for diseases belonging to the primary class.
  • the electronic device may convert the first disease prediction probability 722 into a first disease binary result 726 based on a predetermined threshold.
  • the electronic device may apply the first feature data 712 to the first classifier layer and simultaneously apply it to the second classifier layers to obtain first single disease prediction probabilities 724.
  • the first single disease prediction probabilities 724 may represent occurrence probabilities for diseases belonging to the secondary class.
  • the electronic device may convert the first single disease prediction probabilities 724 into a first single disease binary result 728 based on a predetermined threshold.
  • the electronic device may apply the second feature data 714 to the first classifier layer to obtain the second disease prediction probability 732.
  • the second disease prediction probability 732 may represent occurrence probabilities for diseases belonging to the primary class.
  • the electronic device may convert the second disease prediction probability 732 into a second disease binary result 736 based on a predetermined threshold.
  • the electronic device may apply the second feature data 714 to the first classifier layer and simultaneously apply it to the second classifier layers to obtain second single disease prediction probabilities 734.
  • the second single disease prediction probabilities 734 may represent occurrence probabilities for diseases belonging to a secondary class.
  • the electronic device may convert the second single disease prediction probabilities 734 into a second single disease binary result 738 based on a predetermined threshold.
  • the electronic device may obtain a first statistical result representing an average for the first disease binary outcome 726 and the second disease binary outcome 736.
  • the first statistical result may include statistical results for diseases belonging to the primary class.
  • the electronic device may take the average of the chronic otitis media (COM) binary result of the first disease binary outcome 726 and the chronic otitis media binary result of the second disease binary outcome 736 as the statistical result for chronic otitis media.
  • the electronic device may average the otitis media with effusion (OME) binary result of the first disease binary outcome 726 and the otitis media with effusion binary result of the second disease binary outcome 736 into a statistical result for otitis media with effusion.
  • the electronic device may take the average of the absence-of-disease binary result of the first disease binary result 726 and that of the second disease binary result 736 as a statistical result for the absence of disease.
  • the first statistical result may include statistical results that are 1 for chronic otitis media, 0 for otitis media with effusion, and 0 for absence of disease.
  • the electronic device may output the above-described first statistical result to the user as a disease prediction result for the input otoscope image 710.
  • the electronic device may obtain a second statistical result representing an average for the first single disease binary outcome 728 and the second single disease binary outcome 738.
  • the second statistical result may include statistical results for diseases belonging to a secondary class.
  • the electronic device may average the Attic Cholesteatoma binary outcome of the first single disease binary outcome 728 and the Attic Cholesteatoma binary outcome of the second single disease binary outcome 738 into a statistical result for Attic Cholesteatoma.
  • the electronic device may average the binary outcomes of Myringitis of the first single disease binary outcome 728 and Myringitis of the second single disease binary outcome 738 into a statistical result of Myringitis.
  • the electronic device may average the Otomycosis binary outcome of the first single disease binary outcome 728 and the Otomycosis binary outcome of the second single disease binary outcome 738 into a statistical result for Otomycosis.
  • the electronic device may average the ventilation tube binary result of the first single disease binary result 728 and the ventilation tube binary result of the second single disease binary result 738 into a statistical result for the ventilation tube.
  • the second statistical result may include statistical results of 1 for Attic Cholesteatoma, 0.5 for Myringitis, 0.5 for Otomycosis, and 0 for the ventilation tube.
  • the electronic device may output the above-described second statistical result to the user as a single disease prediction result for the input otoscope image 710.
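Averaging the two per-disease binary results into the statistical result described above (reproducing the 1 / 0.5 / 0.5 / 0 example) might look like the following; the helper name is illustrative.

```python
def average_binary_results(first, second):
    """Average two {disease: 0-or-1} binary results into a statistical result."""
    return {disease: (first[disease] + second[disease]) / 2 for disease in first}

# Binary outcomes consistent with the FIG. 7 discussion.
second_stat = average_binary_results(
    {"Attic Cholesteatoma": 1, "Myringitis": 1, "Otomycosis": 0, "Ventilation tube": 0},
    {"Attic Cholesteatoma": 1, "Myringitis": 0, "Otomycosis": 1, "Ventilation tube": 0},
)
```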
  • However, the disease prediction result and the single disease prediction result for the input otoscope image 710 are not limited to the first statistical result and the second statistical result, respectively.
  • the electronic device may output the first disease prediction probability 722 and the second disease prediction probability 732 to the user as a disease prediction result for the input otoscope image 710.
  • the electronic device may output first single disease prediction probabilities 724 and second single disease prediction probabilities 734 to the user as a single disease prediction result for the input otoscope image 710.
  • the electronic device may provide the user with an output suggesting retry of otitis media diagnosis based on at least one of the disease prediction probability or the single disease prediction probability.
  • the disease prediction probability may include at least one of the first disease prediction probability 722 or the second disease prediction probability 732.
  • the single disease prediction probability may include at least one of the first single disease prediction probabilities 724 or the second single disease prediction probabilities 734.
  • the electronic device may provide an output suggesting that the user retry diagnosing otitis media.
  • the electronic device may provide an output suggesting retry of otitis media diagnosis based on the difference between the probability of the disease prediction result and the probability of the target result.
  • the probability of the disease prediction result may be the probability of the disease with the highest probability among the first disease prediction probabilities 722, and the probability of the target result may be the probability of the disease with the second-highest probability among the first disease prediction probabilities 722.
  • For example, the disease with the highest probability may be chronic otitis media (COM), and the disease with the second-highest probability may be otitis media with effusion (OME).
  • the electronic device may provide an output to the user suggesting retrying the otitis media diagnosis through another otoscope image if the calculated difference is below a threshold.
  • the threshold may be a threshold for retrying otitis media diagnosis based on the first disease prediction probability 722.
  • the electronic device may provide an output suggesting that the user retry diagnosing otitis media.
  • the electronic device may select second classifier layers related to the secondary class disease in which the disease occurred among the first single disease prediction probabilities 724.
  • the electronic device may provide an output suggesting retry of otitis media diagnosis based on a case where at least one of the probabilities related to the secondary disease for the selected layers is less than a threshold value.
  • the threshold may be a threshold for retrying otitis media diagnosis based on the first single disease prediction probabilities 724.
  • For example, the electronic device may select, based on the first single disease prediction probabilities 724, the second classifier layers for Attic Cholesteatoma and Otomycosis as the second classifier layers in which the disease occurred.
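The retry suggestion logic above — a small gap between the two highest primary-class probabilities, or an "occurred" secondary disease whose probability is still below a threshold — could be sketched as follows. Both threshold values and the helper name are illustrative assumptions.

```python
def suggest_retry(primary_probs, secondary_probs, occurred,
                  margin_threshold=0.1, secondary_threshold=0.6):
    """Suggest re-taking the otoscope image when the gap between the two
    highest primary-class probabilities is small, or when a disease judged
    to have occurred has a probability below a threshold."""
    ranked = sorted(primary_probs.values(), reverse=True)
    if ranked[0] - ranked[1] <= margin_threshold:
        return True
    return any(secondary_probs[d] < secondary_threshold for d in occurred)

# Top-2 margin of 0.03 is below the 0.1 threshold, so a retry is suggested.
retry = suggest_retry({"COM": 0.48, "OME": 0.45, "None": 0.07},
                      {"Attic Cholesteatoma": 0.8},
                      occurred=["Attic Cholesteatoma"])
```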
  • Figure 8 is a diagram showing McNemar test results of an otitis media diagnosis model according to an embodiment.
  • An electronic device may acquire the performance of an otitis media diagnosis model (eg, the otitis media diagnosis model 130 of FIG. 1 ).
  • the electronic device may obtain performance difference results between the otitis media diagnosis model and the comparison target model.
  • the electronic device can obtain results of the performance difference between the otitis media diagnosis model and the comparative model through the McNemar test results.
  • the model to be compared may represent a model that outputs one disease prediction result (e.g., a prediction result for otitis media with effusion disease) for one input data (e.g., an input otoscope image).
  • the electronic device can obtain the performance difference results between the otitis media diagnosis model and the comparison target model through the DSC (Dice similarity coefficient) results among the McNemar test results.
  • DSC results may primarily indicate differences between the correct answers and the predicted results for images (e.g., input otoscope images). That is, a higher DSC result for at least one of the otitis media diagnosis model or the comparison target model may indicate a smaller difference between the correct answer and that model's predicted results.
  • the otitis media diagnosis model may have a smaller difference between the correct answer and the predicted result for the remaining diseases except for myringitis, compared to the comparison target model.
  • the result 830 may include differences between the DSC result 810 of the otitis media diagnosis model and the DSC result 820 of the comparison target model.
  • the otitis media diagnosis model may be a model that has superior prediction performance for all diseases except myringitis compared to the comparison model.
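For reference, the McNemar statistic (computed from the discordant prediction counts of the two models) and the Dice similarity coefficient can be written out directly; the counts below are made-up examples, not the patent's measured results.

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square (with continuity correction) on the discordant counts:
    b = cases one model got right and the other wrong, c = the reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)

def dice_similarity(tp, fp, fn):
    """DSC = 2*TP / (2*TP + FP + FN); higher means predictions closer to the answer."""
    return 2 * tp / (2 * tp + fp + fn)

stat = mcnemar_statistic(b=30, c=10)       # made-up discordant counts
dsc = dice_similarity(tp=90, fp=10, fn=10)
```

A large McNemar statistic indicates the two models' errors differ more than chance would explain, which is how a per-disease performance difference like FIG. 8's is established.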
  • Figure 9 is a diagram showing a confusion matrix of an otitis media diagnosis model according to an embodiment.
  • An electronic device may acquire a confusion matrix of an otitis media diagnosis model (e.g., the otitis media diagnosis model 130 of FIG. 1).
  • a confusion matrix can represent a matrix containing elements representing the ratio between the prediction results of a machine learning model and the correct answer.
  • each matrix may include a row regarding the ground truth (GT) class and a column regarding the prediction result class.
  • According to an embodiment, the electronic device may output 1,463 chronic otitis media disease prediction results for 1,534 images containing chronic otitis media (COM) disease (e.g., eardrum images showing chronic otitis media).
  • Figure 10 is a diagram showing ROC curves for the primary class and secondary class according to one embodiment.
  • An electronic device according to an embodiment may acquire receiver operating characteristic (ROC) curves of an otitis media diagnosis model (e.g., the otitis media diagnosis model 130 of FIG. 1) and a comparison target model.
  • the otitis media diagnosis model may be shown as a Combined model, and the model to be compared may be shown as a Separate model.
  • the primary class result 1010 may include an ROC curve regarding the primary class of the otitis media diagnosis model and the comparison target model.
  • the primary class result 1010 may include AUC (area under the ROC curve) values for diseases belonging to the primary class of the otitis media diagnosis model and the comparison target model.
  • the secondary class result 1020 may include an ROC curve regarding the secondary class of the otitis media diagnosis model and the comparison target model.
  • the secondary class result 1020 may include AUC (area under the ROC curve) values for diseases belonging to the secondary class of the otitis media diagnosis model and the model to be compared. Referring to FIG. 10, it can be seen that in the results of diseases belonging to the secondary class excluding myringitis disease, the otitis media diagnosis model can have a superior AUC value than the comparison model.
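The AUC values referenced above can be computed without drawing the ROC curve, via the rank (Mann-Whitney) formulation; the labels and scores below are illustrative, not the patent's data.

```python
def auc_score(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    positive/negative score pairs ranked correctly, ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_score([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # perfectly separated -> 1.0
```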
  • the embodiments described above may be implemented with hardware components, software components, and/or a combination of hardware components and software components.
  • the devices, methods, and components described in the embodiments may be implemented using a general-purpose computer or a special-purpose computer, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and software applications running on the operating system. Additionally, the processing device may access, store, manipulate, process, and generate data in response to the execution of software.
  • For convenience of understanding, a single processing device is sometimes described as being used; however, those skilled in the art will appreciate that a processing device may include multiple processing elements and/or multiple types of processing elements.
  • a processing device may include multiple processors or one processor and one controller. Additionally, other processing configurations, such as parallel processors, are possible.
  • Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively.
  • Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device.
  • Software may be distributed over networked computer systems and stored or executed in a distributed manner.
  • Software and data may be stored on a computer-readable recording medium.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
  • a computer-readable medium may include program instructions, data files, data structures, etc., singly or in combination, and the program instructions recorded on the medium may be specially designed and constructed for the embodiment or may be known and available to those skilled in the art of computer software. It may be possible.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; and magneto-optical media such as floptical disks.
  • Examples of program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter, etc.
  • the hardware devices described above may be configured to operate as one or multiple software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Optics & Photonics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

An electronic device for diagnosing otitis media, according to an embodiment, comprises: a memory storing computer-executable instructions and a trained otitis media diagnosis model including a shared layer, which includes at least one convolution operation, and a plurality of classifier layers connected to the shared layer; a processor that accesses the memory to execute the instructions; a display electrically connected to the processor; and an image acquisition unit for receiving an otoendoscopic image of a patient, wherein the instructions may receive the patient's otoendoscopic image, generate an input otoendoscopic image based on a region of interest extracted from the received otoendoscopic image, extract feature data from the input otoendoscopic image based on the shared layer, output, based on a first classifier layer among the plurality of classifier layers, disease prediction results for diseases belonging to a primary class from the extracted feature data, and individually output, based on a plurality of second classifier layers separate from the first classifier layer among the plurality of classifier layers, a single disease prediction result for each of the diseases belonging to a secondary class from the corresponding second classifier layer among the plurality of second classifier layers.
PCT/KR2023/007254 2022-07-13 2023-05-26 Method and device for diagnosing otitis media WO2024014702A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20220086105 2022-07-13
KR10-2022-0086105 2022-07-13
KR1020230002549A KR20240009328A (ko) 2022-07-13 2023-01-06 Method and apparatus for diagnosing otitis media
KR10-2023-0002549 2023-01-06

Publications (1)

Publication Number Publication Date
WO2024014702A1 (fr) 2024-01-18

Family

ID=89536878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/007254 WO2024014702A1 (fr) 2022-07-13 2023-05-26 Method and device for diagnosing otitis media

Country Status (1)

Country Link
WO (1) WO2024014702A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210121119A (ko) * 2019-01-25 2021-10-07 오토넥서스 메디컬 테크놀러지 인코퍼레이티드 중이염 진단을 위한 기계 학습

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHA DONGCHUL; PAE CHONGWON; SEONG SI-BAEK; CHOI JAE YOUNG; PARK HAE-JEONG: "Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database", EBioMedicine, vol. 45, 1 July 2019, pages 606-614, ISSN: 2352-3964, DOI: 10.1016/j.ebiom.2019.06.050 *
CHEN YEN-CHI; CHU YUAN-CHIA; HUANG CHII-YUAN; LEE YEN-TING; LEE WEN-YA; HSU CHIEN-YEH; YANG ALBERT C.; LIAO WEN-HUEI; CHENG YEN-FU: "Smartphone-based artificial intelligence using a transfer learning algorithm for the detection and diagnosis of middle ear diseases: A retrospective deep learning study", eClinicalMedicine, vol. 51, 1 September 2022, page 101543, ISSN: 2589-5370, DOI: 10.1016/j.eclinm.2022.101543 *
CHOI YEONJOO; CHAE JIHYE; PARK KEUNWOO; HUR JAEHEE; KWEON JIHOON; AHN JOONG HO: "Automated multi-class classification for prediction of tympanic membrane changes with deep learning models", PLOS ONE, vol. 17, no. 10, 10 October 2022, page e0275846, ISSN: 1932-6203, DOI: 10.1371/journal.pone.0275846 *
KHAN MOHAMMAD AZAM; KWON SOONWOOK; CHOO JAEGUL; HONG SEOK MIN; KANG SUNG HUN; PARK IL-HO; KIM SUNG KYUN: "Automatic detection of tympanic membrane and middle ear infection from oto-endoscopic images via convolutional neural networks", Neural Networks, vol. 126, 1 June 2020, pages 384-394, ISSN: 0893-6080, DOI: 10.1016/j.neunet.2020.03.023 *
WU ZEBIN; LIN ZHEQI; LI LAN; PAN HONGGUANG; CHEN GUOWEI; FU YUQING; QIU QIANHUI: "Deep Learning for Classification of Pediatric Otitis Media", The Laryngoscope, vol. 131, no. 7, 1 July 2021, pages E2344-E2351, ISSN: 0023-852X, DOI: 10.1002/lary.29302 *
ZENG XINYU; JIANG ZIFAN; LUO WEN; LI HONGGUI; LI HONGYE; LI GUO; SHI JINGYONG; WU KANGJIE; LIU TONG; LIN XING; WANG FUSEN; LI ZHEN: "Efficient and accurate identification of ear diseases using an ensemble deep learning model", Scientific Reports, vol. 11, no. 1, 25 May 2021, ISSN: 2045-2322, DOI: 10.1038/s41598-021-90345-w *

Similar Documents

Publication Publication Date Title
US11900647B2 (en) Image classification method, apparatus, and device, storage medium, and medical electronic device
WO2018106005A1 Disease diagnosis system using neural network, and method therefor
WO2017022908A1 Method and program for computing bone age using deep neural networks
Zafer Fusing fine-tuned deep features for recognizing different tympanic membranes
WO2020242239A1 Artificial intelligence-based diagnosis support system using ensemble learning algorithm
WO2021071288A1 Method and device for training fracture diagnosis model
WO2017051943A1 Image generation method and apparatus, and image analysis method
WO2005092176A1 Real-time remote diagnosis of in vivo images
KR20190115713A Apparatus and method for blood vessel detection and retinal edema diagnosis using a multi-functional neural network
WO2021071286A1 Generative adversarial network-based medical image training method and device
WO2022147885A1 Image-guided brain atlas construction method and apparatus, device, and storage medium
WO2022131642A1 Apparatus and method for determining disease severity on the basis of medical images
WO2021137454A1 Artificial intelligence-based method and system for analyzing user medical information
WO2019098415A1 Method for determining whether subject has developed cervical cancer, and device using same
WO2019143021A1 Method for supporting image visualization, and apparatus using same
US20220130544A1 (en) Machine learning techniques to assist diagnosis of ear diseases
WO2022193973A1 Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
WO2021034138A1 Dementia evaluation method and apparatus using same
WO2024014702A1 Method and device for diagnosing otitis media
CN113222957A Multi-category lesion high-speed detection method and system based on capsule endoscope images
WO2022265197A1 Method and device for analyzing endoscopic image on the basis of artificial intelligence
WO2021002669A1 Apparatus and method for constructing integrated lesion learning model, and apparatus and method for diagnosing lesion using integrated lesion learning model
WO2021201582A1 Method and device for analyzing causes of skin lesion
Schneider et al. Classification of Viral Pneumonia X-ray Images with the Aucmedi Framework
WO2020246676A1 Automatic cervical cancer diagnosis system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23839796

Country of ref document: EP

Kind code of ref document: A1