WO2021060700A1 - Apparatus and method for confirming videofluoroscopic swallowing study - Google Patents
- Publication number: WO2021060700A1
- Application: PCT/KR2020/010735
- Authority
- WO
- WIPO (PCT)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/42—Detecting, measuring or recording for evaluating the gastrointestinal, the endocrine or the exocrine systems
- A61B5/4205—Evaluating swallowing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/481—Diagnostic techniques involving the use of contrast agents
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/486—Diagnostic techniques involving generating temporal series of image data
- A61B6/487—Diagnostic techniques involving generating temporal series of image data involving fluoroscopy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- The present invention relates to an apparatus and method for reading a videofluoroscopic swallowing study and, more particularly, to an artificial-intelligence-algorithm-based apparatus and method for reading a videofluoroscopic swallowing study.
- Swallowing is achieved by the harmonious movement of tissues beginning in the oral cavity.
- A swallowing disorder (dysphagia) refers to difficulty in swallowing caused by abnormalities of the neuromuscular system or by structural abnormalities in the section from the oral cavity to the upper esophagus.
- Swallowing disorders can occur at any age, from newborns to the elderly, and may result from various congenital anomalies, structural damage, or medical conditions.
- The causes of swallowing disorders are varied: mild causes such as simple tooth abnormalities or implants; paralysis of the muscles of the mouth, pharynx, and esophagus due to nerve damage from neurological diseases such as stroke; narrowing of the pharynx or esophagus; compression of the esophagus by malformations of surrounding organs; and convulsions due to esophageal spasm, which impair the propulsion of food from the mouth to the esophagus.
- Symptoms may include drooling or the sensation that food has lodged in the esophagus.
- A videofluoroscopic swallowing study (VFSS) is performed to determine whether a swallowing disorder is present.
- Conventionally, medical personnel with specialist knowledge read for airway aspiration or nasal reflux by directly observing a plurality of VFS files, each composed of many frames.
- An embodiment of the present invention provides an apparatus and method for reading a videofluoroscopic swallowing study that can determine by itself, from fluoroscopic images, whether a swallowing disorder exists.
- An embodiment of the present invention provides an apparatus and method for reading a videofluoroscopic swallowing study with high objectivity and accuracy.
- According to an aspect, there is provided an apparatus for reading a videofluoroscopic swallowing study, comprising: an image input unit into which a plurality of videofluoroscopic images are input, the images sequentially capturing the movements of anatomical positions while a test subject eats food; an anatomical position designating unit for designating, in the plurality of videofluoroscopic images, anatomical positions corresponding to preset standard anatomical positions and designating the position of the food relative to those anatomical positions; a coordinate setting unit for setting coordinates of the anatomical positions and of the food position in the plurality of videofluoroscopic images; a position tracking unit for tracking the degree of movement of the anatomical positions and the movement path of the food using those coordinates; and a reading unit for analyzing the degree of movement of the anatomical positions and the movement path of the food to read whether a swallowing disorder exists.
- At least one of the image input unit, the anatomical position designating unit, the coordinate setting unit, the position tracking unit, and the reading unit may use an algorithm trained through machine learning.
- The algorithm may perform the machine learning by constructing an artificial neural network.
- The standard anatomical positions may be at least one of the tongue, hard palate, soft palate, mandibular angle, epiglottis, valleculae, laryngeal vestibule, fourth cervical vertebra (C4), pyriform sinus, upper esophageal sphincter (UES), vocal cords, esophagus, or trachea.
- The position tracking unit may accumulate the anatomical positions and food positions contained in the frames into which the videofluoroscopic image is divided.
- The result read by the reading unit may be any one of airway aspiration, nasal reflux, or normal.
- The reading unit may read whether a swallowing disorder exists by comparing the degree of movement of the coordinates against predetermined clinical parameters.
- The reading unit may read whether a swallowing disorder exists by comparing the movement path of the food with a standard path model.
- According to another aspect, there is provided a method of reading a videofluoroscopic swallowing study using a machine-learning-based algorithm, comprising: inputting a plurality of videofluoroscopic images to train the algorithm; inputting a videofluoroscopic image capturing the movements of anatomical positions while a test subject eats food; designating, in the videofluoroscopic image, anatomical positions corresponding to preset standard anatomical positions and designating the position of the food relative to those anatomical positions; setting coordinates of the anatomical positions and of the food position; tracking the degree of movement of the anatomical positions and the position of the food; and reading whether a swallowing disorder exists by analyzing the degree of movement of the anatomical positions and the movement path of the food.
- According to the present invention, whether a swallowing disorder exists can be read from a videofluoroscopic image by an algorithm trained through machine learning, saving medical personnel the cost and time of image reading.
- FIG. 1 is a block diagram of a videofluoroscopic swallowing study reading apparatus according to an embodiment of the present invention.
- FIG. 2 is a diagram showing the standard anatomical positions designated by the coordinate setting unit of the apparatus for reading videofluoroscopic swallowing studies according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating an example in which the apparatus accumulates the food positions of segmented frames into a single image.
- FIG. 4 is a diagram showing a VFS file read as normal by the apparatus according to an embodiment of the present invention.
- FIG. 5 is a diagram showing a VFS file read as airway aspiration by the apparatus according to an embodiment of the present invention.
- FIG. 6 is a diagram showing a VFS file read as nasal reflux by the apparatus according to an embodiment of the present invention.
- FIG. 7 is a diagram showing the standard path model prepared for reading whether a swallowing disorder exists, according to an embodiment of the present invention.
- FIG. 8 is a flowchart of a method of performing a videofluoroscopic swallowing study by the reading apparatus according to an embodiment of the present invention.
- Some embodiments of the present disclosure may be represented by functional block configurations and various processing steps. Some or all of these functional blocks may be implemented with various numbers of hardware and/or software components that perform specific functions.
- the functional blocks of the present disclosure may be implemented by one or more microprocessors, or may be implemented by circuit configurations for a predetermined function.
- the functional blocks of the present disclosure may be implemented in various programming or scripting languages. Functional blocks may be implemented as an algorithm executed on one or more processors.
- the present disclosure may employ conventional techniques for electronic environment setting, signal processing, and/or data processing. Terms such as “mechanism”, “element”, “means” and “composition” can be used widely, and are not limited to mechanical and physical configurations.
- The videofluoroscopic swallowing study reading apparatus 1 analyzes a videofluoroscopic image and reads whether a swallowing disorder exists. In particular, by using an algorithm trained through machine learning, it can perform the swallowing study without reading by medical personnel.
- A videofluoroscopic swallowing study reading apparatus 1 (hereinafter, swallowing study apparatus) according to an embodiment of the present invention includes an image input unit 110, an anatomical position designating unit 120, a coordinate setting unit 130, a position tracking unit 140, and a reading unit 150.
- The image input unit 110 receives a videofluoroscopic image (V) captured while a test subject eats food.
- The videofluoroscopic image V is an image from which the movement of the test subject's body tissues can be confirmed; the body tissues may be at least one of the oral cavity, pharynx, esophagus, larynx, or trachea.
- The videofluoroscopic image V may include not only an X-ray image but also a magnetic resonance imaging (MRI) image.
- The test subject may consume food containing a contrast agent, so that the position of the food is indicated more clearly in the videofluoroscopic image (V).
- The anatomical position designating unit 120 of the swallowing study apparatus 1 may designate anatomical positions (Sp) corresponding to the standard anatomical positions (St) in the videofluoroscopic image V input through the image input unit 110.
- Here, designation may mean mapping.
- A standard anatomical position (St) is a major body tissue considered important when the reading unit 150, described later, reads whether a swallowing disorder exists; it is a virtual anatomical position that serves as the reference when designating an anatomical position (Sp), described later.
- The standard anatomical positions (St) include the tongue, hard palate, soft palate, mandibular angle, epiglottis, valleculae, laryngeal vestibule, fourth cervical vertebra (C4), pyriform sinus, upper esophageal sphincter (UES), vocal cords, esophagus, and trachea.
- These are only examples; the standard anatomical positions (St) of the swallowing study apparatus 1 according to an embodiment of the present invention are not limited thereto.
- An anatomical position (Sp) is a body tissue designated to correspond to a standard anatomical position (St) in a specific videofluoroscopic image (V); reference numerals 10, 20, 30, 40, and 50 shown in the figures are examples of such positions.
- The anatomical position (Sp) may be designated as a single point within the body tissue region corresponding to the standard anatomical position (St), such as the most easily observed point, the center point of the tissue, a point representative of the tissue, or the most sensitively responding point.
- The designated anatomical positions (Sp) may be used as a spatial index indicating where the food is located among the body tissues when the position tracking unit 140, described later, tracks the food movement path.
- The anatomical position designating unit 120 designates the position of the food (Sf) together with the anatomical positions (Sp). Unlike an anatomical position (Sp), the position (Sf) of ingested food moves along the body tissues, so the food positions in a plurality of frames divided in chronological order may differ from one another.
- The anatomical position designating unit 120 may designate the anatomical positions (Sp) and the food position (Sf) using an algorithm trained through machine learning.
- Machine learning refers to an algorithm that repeatedly learns from sample data, finds latent features in that data, and applies the learning results to new data to make predictions according to the discovered features.
- The machine learning may be based on a learning model using one or more artificial neural networks.
- An artificial neural network is an algorithm modeled after the structure of the human neural network; it may include one or more layers of nodes corresponding to neurons, and the nodes may be connected through synapses.
- The artificial neural network may be trained by supervised learning, using as input videofluoroscopic images including anatomical positions (Sp) designated by medical experts.
- Alternatively, the artificial neural network may learn to designate anatomical positions (Sp) or food positions (Sf) by unsupervised learning, discovering features or patterns from a plurality of videofluoroscopic images without guidance.
- The coordinate setting unit 130 sets relative coordinates, based on a specific reference point, for the designated anatomical positions (Sp) and the food position (Sf). Quantifying the positions in this way enables more varied and precise data processing; for example, when the position tracking unit 140, described later, analyzes the movement path of the food, accumulating the path in relative coordinates yields a more accurate result.
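The relative-coordinate scheme described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the choice of the C4 vertebra as origin, and the use of an inter-landmark distance as the normalizing scale are all assumptions made for the example.

```python
from math import hypot

def to_relative(points, origin, scale_a, scale_b):
    """Convert absolute pixel coordinates to coordinates relative to
    `origin`, normalized by the distance between two reference
    landmarks, so values are comparable across subjects and zoom
    levels."""
    scale = hypot(scale_b[0] - scale_a[0], scale_b[1] - scale_a[1])
    if scale == 0:
        raise ValueError("reference landmarks coincide")
    return {name: ((x - origin[0]) / scale, (y - origin[1]) / scale)
            for name, (x, y) in points.items()}

# Hypothetical frame: hyoid and bolus in pixels, C4 vertebra as origin,
# an 80-px vertebral distance as the normalizing scale.
rel = to_relative({"hyoid": (120.0, 200.0), "bolus": (150.0, 260.0)},
                  origin=(100.0, 240.0),
                  scale_a=(100.0, 160.0), scale_b=(100.0, 240.0))
# rel["hyoid"] is (0.25, -0.5) in vertebra-distance units
```

Normalizing by an anatomical distance rather than raw pixels is one way the quantified coordinates could be made comparable between subjects of different sizes.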
- the location tracking unit 140 may track an anatomical location Sp and a location Sf of food.
- the video perspective image V may be divided into a plurality of frames.
- The swallowing study apparatus 1 according to an embodiment of the present invention may use only some of the frames for analysis, reducing data throughput and thus the energy and time consumed by the swallowing study.
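The frame-selection idea can be sketched as below; the sampling rule (every k-th frame, always keeping the last frame) is a hypothetical choice, since the patent does not specify how the subset of frames is selected.

```python
def select_frames(num_frames, step):
    """Return indices of every `step`-th frame, always including the
    final frame so the end of the swallow is not dropped."""
    if step < 1:
        raise ValueError("step must be >= 1")
    idx = list(range(0, num_frames, step))
    if idx and idx[-1] != num_frames - 1:
        idx.append(num_frames - 1)
    return idx
```

For a 10-frame clip with `step=3`, this analyzes frames 0, 3, 6, and 9, cutting data throughput while preserving the temporal endpoints.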
- The position tracking unit 140 checks and analyzes, over time, the spatial movements of the anatomical positions (Sp) and the food position (Sf) between the frames selected as described above.
- Tracking an anatomical position (Sp) means tracking the movement of the body tissue that the anatomical position (Sp) represents.
- The position tracking unit 140 may track the movement of the coordinates associated with the anatomical position (Sp).
- The degree of movement of such a body tissue is called a clinical parameter.
- The clinical parameters may be used when the reading unit 150, described later, determines whether a swallowing disorder exists.
- Tracking the anatomical positions may also help in treating swallowing disorders; for example, analyzing the degree of movement of the anatomical positions (Sp) may help determine which body tissue is causing the disorder.
- Tracking the position (Sf) of the food means checking the relative positional movement of the food, that is, tracking which body tissue the food is adjacent to over time.
- The position tracking unit 140 individually checks the food positions in the frames selected for analysis and then accumulates them comprehensively, thereby providing the overall food movement path.
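The accumulation of per-frame food positions into an overall movement path might look like the following sketch. Labeling the bolus with its nearest landmark is an assumption made for illustration; the patent's trained algorithm could use any other association rule.

```python
from math import hypot

def nearest_region(bolus, landmarks):
    """Label a bolus position with the closest anatomical landmark."""
    return min(landmarks, key=lambda n: hypot(bolus[0] - landmarks[n][0],
                                              bolus[1] - landmarks[n][1]))

def movement_path(bolus_per_frame, landmarks):
    """Accumulate per-frame bolus positions into a region sequence,
    collapsing consecutive repeats (e.g. oral cavity -> pharynx ->
    esophagus)."""
    path = []
    for bolus in bolus_per_frame:
        region = nearest_region(bolus, landmarks)
        if not path or path[-1] != region:
            path.append(region)
    return path
```

The collapsed region sequence is the "overall food movement path" in a form that can later be compared against a standard path model.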
- The movement of the food between the selected frames may be estimated by an algorithm.
- The estimation may be performed by an algorithm trained through machine learning, and the algorithm can of course be designed by constructing an artificial neural network for more sophisticated estimation.
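As a stand-in for the learned estimator described above, the bolus position between two analyzed frames can be approximated by simple linear interpolation. The function below is a hypothetical sketch, not the patent's method, which contemplates a machine-learned estimator.

```python
def interpolate_bolus(t, t0, p0, t1, p1):
    """Estimate the bolus (x, y) position at frame index t, with
    t0 <= t <= t1, from its positions p0 and p1 at the two nearest
    analyzed frames t0 and t1, by linear interpolation."""
    if not (t0 <= t <= t1) or t0 == t1:
        raise ValueError("t must lie between two distinct analyzed frames")
    a = (t - t0) / (t1 - t0)
    return (p0[0] + a * (p1[0] - p0[0]), p0[1] + a * (p1[1] - p0[1]))
```

A trained network could replace this with a motion model that accounts for the non-linear dynamics of a swallow, but the interface (two known frames in, one estimated position out) would be similar.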
- The reading unit 150 may determine whether a swallowing disorder exists based on the degree of movement of the anatomical positions (Sp) and the movement path of the food obtained through the position tracking unit 140.
- The anatomical positions (Sp) and the movement path of the food are only examples of the basis for the reading; of course, information other than these can also be taken into account in determining whether a swallowing disorder exists.
- Regarding the movement of the anatomical positions (Sp), the reading unit 150 may determine whether a swallowing disorder exists by comparing the clinical parameter values obtained from the position tracking unit 140 with reference values of the clinical parameters. In addition, the clinical parameter values of the individual body tissues may be compared with one another.
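A minimal sketch of the clinical-parameter comparison follows. The parameter name and reference range are invented for illustration; the patent does not specify the concrete parameters or their reference values here.

```python
def check_parameter(name, value, reference_ranges):
    """Compare a measured clinical parameter against its reference
    range (low, high); values outside the range are flagged."""
    low, high = reference_ranges[name]
    if value < low:
        return "reduced"
    if value > high:
        return "excessive"
    return "within range"

# Hypothetical reference range for hyoid excursion, in the normalized
# units produced by the coordinate setting step.
refs = {"hyoid_excursion": (0.8, 1.6)}
```

Per-tissue results like `"reduced"` could then feed the reading unit's overall judgment, alongside the food movement path.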
- Regarding the movement path of the food, the reading unit 150 may determine whether a swallowing disorder exists by comparing the overall food movement path obtained through the position tracking unit 140 with the standard path model of FIG. 7. Referring to FIGS. 4 to 7: as shown in FIGS. 4 and 7(a), when the food is confirmed to have moved in the order oral cavity 10 - pharynx 20 - esophagus 30, the reading unit 150 can read the study as normal. As shown in FIGS. 5 and 7(b), when food is confirmed to have moved into the larynx 40 and trachea 50 rather than only the esophagus 30, the reading unit 150 can read it as 'airway aspiration'. And as shown in FIGS. 6 and 7(c), when a path through which food flows into the nasal cavity 60 is confirmed, it may be read as 'nasal reflux'.
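The comparison with the standard path model can be illustrated as a sketch that maps a bolus region sequence to one of the three readings named above. The region labels and the exact matching rule are assumptions for the example; the patent's reading unit may use a learned comparison instead.

```python
def read_path(path):
    """Classify a bolus region sequence against the standard path
    model: oral cavity -> pharynx -> esophagus is normal; entry into
    the larynx or trachea is read as airway aspiration; entry into
    the nasal cavity as nasal reflux."""
    if "nasal cavity" in path:
        return "nasal reflux"
    if "larynx" in path or "trachea" in path:
        return "airway aspiration"
    if path == ["oral cavity", "pharynx", "esophagus"]:
        return "normal"
    return "indeterminate"
```

The `"indeterminate"` fallback is a design choice for the sketch: a sequence matching none of the three model paths is deferred rather than forced into a category.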
- The reading by the reading unit 150 of whether a swallowing disorder exists may also be performed through an algorithm trained through machine learning.
- The algorithm used by the reading unit 150 can read whether a swallowing disorder exists by comprehensively weighing a variety of evidence, including the above-described anatomical positions (Sp), clinical parameters, and food movement paths.
- The algorithm of the reading unit 150 may be trained in advance on a plurality of training data and may update itself through this learning process.
- Such learning can use both supervised and unsupervised learning and can of course be achieved by constructing an artificial neural network.
- The algorithms applied to the anatomical position designating unit 120, the position tracking unit 140, and the reading unit 150 may all be machine-learning-based artificial intelligence algorithms.
- A plurality of algorithms may exist individually to perform the respective tasks of the anatomical position designating unit 120, the position tracking unit 140, and the reading unit 150, or a single integrated algorithm may encompass all of them.
- Because the videofluoroscopic swallowing study is read automatically by an artificial intelligence algorithm as described above, the labor of medical personnel can be reduced and the objectivity of the reading secured.
- FIG. 8 is a flowchart of a method of performing a videofluoroscopic swallowing study using the swallowing study apparatus 1 according to an embodiment of the present invention.
- The method of performing a swallowing study may include: training an algorithm with a plurality of images (S10); inputting a videofluoroscopic image capturing the movements of body tissues during food intake (S20); designating, in the videofluoroscopic image, anatomical positions (Sp) corresponding to the standard anatomical positions (St) and the position of the food (S30); setting the coordinates of the anatomical positions (Sp) and of the food (S40); tracking the degree of movement of the anatomical positions (Sp) and the position of the food (S50); and reading whether a swallowing disorder exists by analyzing the degree of movement of the anatomical positions (Sp) and the movement path of the food (S60).
- In the training step (S10), a plurality of videofluoroscopic images (V) captured for swallowing studies are provided to the algorithm as training data.
- The algorithm may be trained by constructing an artificial neural network and may update itself by reflecting the results it experiences during training.
- The algorithm can perform its task by extracting features from the plurality of videofluoroscopic images (V) by itself.
- In the input step (S20), a videofluoroscopic image (V) may be input in real time from a separate videofluoroscopic imaging device, or an already captured videofluoroscopic image may be input.
- In the designation step (S30), positions corresponding to the preselected standard anatomical positions (St) are found and matched in the videofluoroscopic image, which can differ from subject to subject according to individual body structure.
- The designation task can be performed through an algorithm trained through machine learning, securing both speed and objectivity.
- In the coordinate setting step (S40), relative coordinates of the anatomical positions (Sp) designated in step S30 are set based on a specific reference point.
- The coordinates set in this way may be used when the position tracking unit 140 tracks the anatomical positions (Sp) and the position of the food (Sf).
- In the tracking step (S50), the movement of the anatomical positions (Sp) and the movement path of the food over time are tracked between selected frames of the videofluoroscopic image.
- By tracking the movement of the anatomical positions (Sp), the clinical parameters to be used in the analysis by the reading unit 150, described later, can be derived.
- In the reading step (S60), the artificial intelligence algorithm trained through machine learning analyzes the videofluoroscopic image and can read whether a swallowing disorder exists.
- The reading unit 150 can read whether a swallowing disorder exists by comparing the clinical parameters obtained from the position tracking unit 140 with the clinical parameter reference values, or by comparing the entire movement path of the food with the standard path model of FIG. 7.
- The method by which the reading unit 150 determines whether a swallowing disorder exists is not limited thereto; other data values extracted from the videofluoroscopic image may be considered together.
Abstract
An apparatus for confirming a videofluoroscopic swallowing study is disclosed. The apparatus for confirming a videofluoroscopic swallowing study comprises: an image input unit into which a plurality of videofluoroscopic images are input, the images being obtained by sequentially photographing motions in an anatomical position while an examinee is eating food; an anatomical position designating unit for designating the anatomical position in the plurality of videofluoroscopic images such that same corresponds to a predetermined anatomical standard position, and designating a food position with regard to the anatomical position; a coordinate setting unit for setting coordinates of the anatomical position in the plurality of videofluoroscopic images and the food position; a position tracking unit for tracking a degree of motion of the anatomical position and a movement path of food by using the coordinates of the anatomical position in the plurality of videofluoroscopic images and the food position; and a confirming unit for analyzing the degree of motion of the anatomical position and the movement path of food to confirm whether or not a swallowing disorder exists.
Description
The present invention relates to an apparatus and method for reading a videofluoroscopic swallowing study, and more particularly, to an apparatus and method for reading a videofluoroscopic swallowing study based on an artificial intelligence algorithm.
Swallowing (deglutition) is accomplished by the coordinated movement of the tissues of the oral cavity. In this context, a swallowing disorder (dysphagia) refers to difficulty in swallowing caused by abnormalities of the neuromuscular system or by structural abnormalities in the region from the oral cavity to the upper esophagus.
Such swallowing disorders can occur at any age, from newborns to the elderly, and may arise from various congenital anomalies, structural damage, or medical conditions. The causes are diverse: mild causes such as simple dental abnormalities or prostheses; paralysis of the muscles of the mouth, pharynx, and esophagus due to nerve palsy from stroke or other neurological diseases; stenosis narrowing the pharynx or esophagus; compression of the esophagus by malformations of neighboring organs; spasm of the esophagus; and impaired propulsion of food from the mouth into the esophagus. Symptoms may include drooling or the sensation of food stopping in the esophagus.
In general, a videofluoroscopic swallowing study (VFS) is performed to determine whether a swallowing disorder exists. In particular, medically trained personnel determine whether airway aspiration or nasal regurgitation has occurred by directly observing a plurality of VFS files, each composed of multiple frames.
However, the accuracy of a swallowing-disorder reading by medical personnel varies widely with the practitioner's experience and competence, and such readings have the further drawback of requiring additional time.
An embodiment of the present invention seeks to provide an apparatus and method for reading a videofluoroscopic swallowing study that can determine, by itself, whether a swallowing disorder exists from a videofluoroscopic image.
An embodiment of the present invention seeks to provide an apparatus and method for reading a videofluoroscopic swallowing study with high objectivity and accuracy.
According to one aspect of the present invention, there is provided an apparatus for reading a videofluoroscopic swallowing study, comprising: an image input unit into which a plurality of videofluoroscopic images are input, the images sequentially capturing the movement of anatomical positions while a test subject eats food; an anatomical position designating unit that designates the anatomical positions in the plurality of videofluoroscopic images so as to correspond to preset standard anatomical positions and designates the position of the food relative to the anatomical positions; a coordinate setting unit that sets the coordinates of the anatomical positions and the food position in the plurality of videofluoroscopic images; a position tracking unit that tracks the degree of movement of the anatomical positions and the movement path of the food using those coordinates; and a reading unit that analyzes the degree of movement of the anatomical positions and the movement path of the food to determine whether a swallowing disorder exists.
In this case, at least one of the image input unit, the position designating unit, the coordinate setting unit, the position tracking unit, and the reading unit may use an algorithm trained through machine learning.
In this case, the algorithm may perform the machine learning by constructing an artificial neural network.
In this case, the standard anatomical position may be at least one of the tongue, hard palate, soft palate, mandibular angle, epiglottis, valleculae, laryngeal vestibule, fourth cervical vertebra (C4), pyriform sinus, upper esophageal sphincter (UES), vocal cords, esophagus, or trachea.
In this case, the position tracking unit may accumulate the anatomical positions and the food positions contained in the frames divided from the videofluoroscopic image.
In this case, the reading unit's determination of a swallowing disorder may be any one of airway aspiration, nasal regurgitation, or normal.
In this case, the reading unit may determine whether a swallowing disorder exists by comparing the degree of movement of the coordinates against predetermined clinical parameters.
In this case, the reading unit may determine whether a swallowing disorder exists by comparing the movement path of the food with a standard path model.
According to another aspect of the present invention, there is provided a method of reading a videofluoroscopic swallowing study using a machine-learning-based algorithm, comprising: training the algorithm by inputting a plurality of videofluoroscopic images; inputting a videofluoroscopic image capturing the movement of anatomical positions while a test subject eats food; designating the anatomical positions in the videofluoroscopic image so as to correspond to preset standard anatomical positions and designating the position of the food relative to the anatomical positions; setting the coordinates of the anatomical positions and the food position; tracking the degree of movement of the anatomical positions and the position of the food; and determining whether a swallowing disorder exists by analyzing the degree of movement of the anatomical positions and the movement path of the food.
According to an embodiment of the present invention, whether a swallowing disorder exists can be determined from a videofluoroscopic image by an algorithm trained through machine learning, saving the cost and time that medical personnel would otherwise spend reading the images.
According to an embodiment of the present invention, whether a swallowing disorder exists can be determined from a videofluoroscopic image by an algorithm trained through machine learning, thereby improving the objectivity and accuracy of the videofluoroscopic swallowing study.
FIG. 1 is a block diagram of an apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention.
FIG. 2 is a diagram showing the standard anatomical positions designated by the coordinate setting unit of the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example in which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention accumulates the food positions of the divided frames into a single image.
FIG. 4 is a diagram showing a VFS file for which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention produces a normal reading.
FIG. 5 is a diagram showing a VFS file for which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention produces an airway-aspiration reading.
FIG. 6 is a diagram showing a VFS file for which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention produces a nasal-regurgitation reading.
FIG. 7 is a diagram showing the standard path model against which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention compares when determining whether a swallowing disorder exists.
FIG. 8 is a flowchart of a method by which the apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention performs the videofluoroscopic swallowing study.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that a person of ordinary skill in the art to which the present invention pertains can easily practice them. The present invention may be implemented in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted for clarity, and the same reference numerals denote the same or similar components throughout the specification.
Some embodiments of the present disclosure may be represented as functional blocks and various processing steps. Some or all of these functional blocks may be implemented by any number of hardware and/or software components that perform the specified functions. For example, the functional blocks of the present disclosure may be implemented by one or more microprocessors, or by circuit components for predetermined functions. The functional blocks may also be implemented in various programming or scripting languages, or as algorithms executed on one or more processors. In addition, the present disclosure may employ conventional techniques for electronic configuration, signal processing, and/or data processing. Terms such as "mechanism", "element", "means", and "configuration" are used broadly and are not limited to mechanical or physical components.
The apparatus 1 for reading a videofluoroscopic swallowing study according to an embodiment of the present invention analyzes a videofluoroscopic image to determine whether a swallowing disorder exists; in particular, it performs the videofluoroscopic swallowing study without a reading by medical personnel, using an algorithm trained through machine learning.
FIG. 1 is a block diagram of an apparatus for reading a videofluoroscopic swallowing study according to an embodiment of the present invention. FIG. 2 is a diagram showing the standard anatomical positions designated by the coordinate setting unit of the apparatus. FIG. 3 is a diagram illustrating an example in which the apparatus accumulates the food positions of the divided frames into a single image. FIGS. 4 to 6 are diagrams showing VFS files for which the apparatus produces normal, airway-aspiration, and nasal-regurgitation readings, respectively. FIG. 7 is a diagram showing the standard path model against which the apparatus compares when determining whether a swallowing disorder exists.
Referring to FIG. 1, the apparatus 1 for reading a videofluoroscopic swallowing study (hereinafter, the swallowing study apparatus) according to an embodiment of the present invention includes an image input unit 110, an anatomical position designating unit 120, a coordinate setting unit 130, a position tracking unit 140, and a reading unit 150.
In an embodiment of the present invention, the image input unit 110 receives a videofluoroscopic image (V) captured while the test subject eats food. The videofluoroscopic image (V) is an image from which the movement of the subject's body tissues can be observed; here, the body tissue may be at least one of the oral cavity, pharynx, esophagus, larynx, or trachea. The videofluoroscopic image (V) may include not only X-ray images but also magnetic resonance imaging (MRI) images.
When the videofluoroscopic image (V) is captured, the test subject may consume food containing a contrast agent, so that the position of the food is displayed more clearly within the image.
The anatomical position designating unit 120 of the swallowing study apparatus 1 according to an embodiment of the present invention may designate anatomical positions (Sp) in the videofluoroscopic image (V) input through the image input unit 110 so that they correspond to the standard anatomical positions (St). Here, "designating" may be understood as mapping.
Referring to FIG. 2, the standard anatomical positions (St) are the principal body tissues considered important when the reading unit 150, described later, determines whether a swallowing disorder exists, and they serve as the virtual reference positions for designating the anatomical positions (Sp) described later. As shown in FIG. 2, the standard anatomical position (St) may be, for example, at least one of the tongue, hard palate, soft palate, mandibular angle, epiglottis, valleculae, laryngeal vestibule, fourth cervical vertebra (C4), pyriform sinus, upper esophageal sphincter (UES), vocal cords, esophagus, or trachea. These are merely examples, however, and the standard anatomical positions (St) of the swallowing study apparatus 1 according to an embodiment of the present invention are not limited thereto.
An anatomical position (Sp) is a body tissue designated in a specific videofluoroscopic image (V) so as to correspond to a standard anatomical position (St); reference numerals 10, 20, 30, 40, and 50 in FIGS. 4 to 6 are examples.
The designated anatomical position (Sp) may be specified as a single point, such as the most easily observed point within the body tissue region corresponding to the standard anatomical position (St), the most central point of the tissue, a point representative of the tissue, or the point that responds most sensitively.
The designated anatomical positions (Sp) may serve as spatial indices indicating where, among the body tissues, the food is located when the position tracking unit 140, described later, tracks the food's movement path.
In an embodiment of the present invention, the position designating unit 120 designates the position of the food (Sf) together with the anatomical positions (Sp). Unlike the anatomical positions (Sp), the position of ingested food (Sf) moves along the body organs, so the food position may differ across the plurality of frames divided in chronological order.
As shown in FIG. 3, when the differing food positions of the individual divided frames are accumulated into a single image, the movement path of the food can be identified. As noted above, a contrast agent is used to designate the food position (Sf) more clearly.
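The accumulation of per-frame food positions into one trajectory can be sketched as follows. This is a minimal illustration under assumed data shapes, not the patented implementation; the frame dictionaries and the `accumulate_path` helper are hypothetical.

```python
# Hypothetical sketch: accumulate the food position (Sf) detected in each
# divided frame into a single ordered trajectory, as in FIG. 3.
def accumulate_path(frames):
    """frames: list of dicts, each with a 'food' (x, y) pixel coordinate."""
    path = []
    for frame in frames:
        path.append(frame["food"])  # one Sf per analyzed frame
    return path

# Assumed example frames, ordered in time.
frames = [
    {"t": 0.0, "food": (120, 40)},   # oral cavity
    {"t": 0.2, "food": (118, 85)},   # pharynx
    {"t": 0.4, "food": (115, 130)},  # esophagus
]
trajectory = accumulate_path(frames)
```

Overlaying `trajectory` on a single still frame would reproduce the kind of cumulative image FIG. 3 describes.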
In an embodiment of the present invention, the position designating unit 120 may designate the anatomical positions (Sp) and the food position (Sf) using an algorithm trained through machine learning; that is, the algorithm can automatically designate anatomical positions (Sp) in an arbitrary videofluoroscopic image without the assistance of medical personnel. Here, machine learning refers to an algorithm that learns repeatedly from sample data, discovers features latent in that data, and applies the learned result to new data so as to make predictions based on the discovered features.
In an embodiment of the present invention, the machine learning may be based on a learning model using one or more artificial neural networks. An artificial neural network is a class of algorithm modeled on the structure of the human nervous system; it may comprise one or more layers containing one or more nodes, or neurons, and the nodes may be connected through synapses.
The artificial neural network may be based on supervised learning, taking as input videofluoroscopic image data whose anatomical positions (Sp) have been designated by medical experts. Alternatively, based on unsupervised learning, the network may learn to designate anatomical positions (Sp) or food positions (Sf) by discovering features or patterns in a plurality of videofluoroscopic images without explicit supervision.
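The supervised setup can be sketched in miniature: expert-annotated samples pair an image-derived feature with a landmark coordinate, and a model is fitted to reproduce the annotation. Here a single linear neuron trained by stochastic gradient descent stands in for the neural network, and the feature/coordinate data are synthetic assumptions.

```python
# Hypothetical sketch of supervised training: fit one linear neuron
# (pred = w*x + b) to expert-labeled (feature, landmark-coordinate) pairs.
def train(samples, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:          # one SGD step per labeled sample
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Synthetic annotations: feature x maps to landmark y-coordinate y = 2x + 1.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(samples)
```

Since the synthetic data is exactly linear, the fitted parameters converge to w = 2, b = 1; a real landmark detector would use a deep network over image pixels, but the training loop has the same shape.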
In an embodiment of the present invention, the coordinate setting unit 130 sets, for the designated anatomical positions (Sp) and food position (Sf), relative coordinates referenced to a specific point. Quantifying the positions in this way enables more varied and precise data processing; for example, when the position tracking unit 140, described later, analyzes the food's movement path, the relative coordinates allow a more accurate accumulation of that path.
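A relative-coordinate conversion of this kind can be sketched as follows. The choice of the C4 vertebra as the reference point and the pixel values are illustrative assumptions; the patent only specifies that coordinates are referenced to "a specific point".

```python
# Hypothetical sketch: express absolute pixel positions relative to a fixed
# reference landmark, so positions from different frames are comparable.
def to_relative(point, reference):
    return (point[0] - reference[0], point[1] - reference[1])

c4 = (100, 200)                       # assumed reference landmark (e.g. C4)
food_abs = (130, 150)                 # absolute food position Sf in pixels
food_rel = to_relative(food_abs, c4)  # position relative to the reference
```

Because every frame is re-expressed against the same landmark, small shifts of the patient between frames no longer distort the accumulated path.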
In an embodiment of the present invention, the position tracking unit 140 may track the anatomical positions (Sp) and the food position (Sf).
In an embodiment of the present invention, the videofluoroscopic image (V) may be divided into a plurality of frames, and the swallowing study apparatus 1 may use only a subset of those frames for analysis, reducing the volume of data to be processed and thereby saving the energy and time consumed by the study. The position tracking unit 140 examines and analyzes, over time, the spatial movement of the anatomical positions (Sp) and the food position (Sf) between the selected frames.
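Frame subsampling of this kind can be sketched in one line; the fixed-stride strategy and the frame count are assumptions, since the patent does not specify how the subset is chosen.

```python
# Hypothetical sketch: keep every k-th frame of the videofluoroscopic image
# for analysis, reducing the data volume as described above.
def select_frames(frames, step):
    return frames[::step]

all_frames = list(range(30))            # stand-in for 30 video frames
analyzed = select_frames(all_frames, 5) # analyze 6 of the 30 frames
```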
First, tracking an anatomical position (Sp) means tracking the movement of the body tissue that the position represents; to this end, the position tracking unit 140 may track the movement of the coordinates associated with the anatomical position (Sp). The degree of movement of a body tissue is called a clinical parameter, and the clinical parameters may be used when the reading unit 150, described later, determines whether a swallowing disorder exists.
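One simple clinical parameter of the kind described, the total displacement of a landmark across the analyzed frames, can be sketched as follows. Treating total path length as the parameter, and the sample track values, are assumptions for illustration.

```python
# Hypothetical sketch: derive a clinical parameter as the total displacement
# of one anatomical landmark (Sp) across the analyzed frames.
import math

def total_displacement(positions):
    """Sum of Euclidean distances between consecutive landmark positions."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

track = [(0, 0), (3, 4), (3, 4)]      # assumed landmark positions over time
param = total_displacement(track)     # moves 5 px, then stays still
```

A value like `param` could then be compared against a reference value, as the reading unit does below.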
Tracking the anatomical positions (Sp) may also assist in treating the swallowing disorder; for example, analyzing how much each anatomical position (Sp) moves can help determine which body tissue is causing the disorder.
Next, tracking the food position (Sf) means following the relative movement of the food, that is, tracking over time which body tissue the food is adjacent to. Referring again to FIG. 3, the position tracking unit 140 individually identifies the food position in each frame selected for analysis and then accumulates these positions to provide the overall movement path of the food.
The movement of the food between selected frames may be estimated by the algorithm. By filling the gaps between frames through this estimation, a continuous overall movement path of the food can be presented, as shown in FIGS. 4 to 6. The estimation may be performed by an algorithm trained through machine learning, and the algorithm may of course be designed with an artificial neural network for more refined estimation.
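The gap-filling step can be sketched with plain linear interpolation; the patent leaves the estimation method open (it may be a learned model), so this linear form is an assumption chosen only to show where the estimated points fit in.

```python
# Hypothetical sketch: estimate the food position (Sf) between two analyzed
# frames by linear interpolation, filling the gap left by frame sampling.
def interpolate(p0, p1, alpha):
    """alpha in [0, 1]: fraction of the way from position p0 to p1."""
    return (p0[0] + alpha * (p1[0] - p0[0]),
            p0[1] + alpha * (p1[1] - p0[1]))

# Food position halfway between two analyzed frames.
midpoint = interpolate((100, 40), (110, 80), 0.5)
```

Inserting such estimated points between the analyzed frames yields the visually continuous path shown in FIGS. 4 to 6.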
In an embodiment of the present invention, the reading unit 150 may determine whether a swallowing disorder exists based on the degree of movement of the anatomical positions (Sp) and the food movement path obtained by the position tracking unit 140. These are, however, only examples of the grounds for the reading; information beyond the anatomical positions (Sp) and the food movement path may of course also be considered.
With respect to the movement of the anatomical positions (Sp), the reading unit 150 may determine whether a swallowing disorder exists by comparing the clinical parameter values obtained from the position tracking unit 140 with reference values, or by comparing the clinical parameter values of the individual body tissues with one another.
With respect to the food movement path, the reading unit 150 may determine whether a swallowing disorder exists by comparing the overall food movement path obtained by the position tracking unit 140 with the standard path model of FIG. 7. Referring to FIGS. 4 to 6: when the food is found to have moved in the order oral cavity 10, pharynx 20, esophagus 30, as in FIGS. 4 and 7(a), the reading unit 150 may read the study as normal; when the food is found to have moved to the larynx 40 and trachea 50 in addition to the esophagus 30, as in FIGS. 5 and 7(b), it may read "airway aspiration"; and when a path by which food enters the nasal cavity 60 is identified, as in FIGS. 6 and 7(c), it may read "nasal regurgitation".
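The route comparison can be sketched as a rule over the sequence of regions the accumulated path visits, mirroring FIG. 7. The region labels and the rule order are illustrative assumptions; the actual comparison against the standard path model may be learned rather than hand-written.

```python
# Hypothetical sketch: classify a swallow by the regions its accumulated
# food path visits (cf. FIG. 7), yielding one of the three reading results.
def classify(path_regions):
    if "nasal_cavity" in path_regions:
        return "nasal regurgitation"           # food entered the nasal cavity
    if "larynx" in path_regions or "trachea" in path_regions:
        return "airway aspiration"             # food deviated into the airway
    if path_regions and path_regions[-1] == "esophagus":
        return "normal"                        # oral cavity -> pharynx -> esophagus
    return "indeterminate"

normal = classify(["oral_cavity", "pharynx", "esophagus"])
aspirated = classify(["oral_cavity", "pharynx", "larynx", "trachea"])
```

The rule order matters: nasal entry is checked before airway entry so that a path touching both regions is not silently reduced to a single airway finding.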
In an embodiment of the present invention, the reading unit 150's determination of a swallowing disorder may likewise be made by an algorithm trained through machine learning. More specifically, the algorithm used by the reading unit 150 may weigh a variety of grounds comprehensively, including the anatomical positions (Sp), the clinical parameters, and the food movement path described above, to determine whether a swallowing disorder exists.
The algorithm associated with the reading unit 150 may be machine-trained in advance on a large body of training data and may update itself through this learning process. Both supervised and unsupervised learning are applicable, and the learning may of course be carried out by constructing an artificial neural network.
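As a stand-in for the neural-network training described above, the following pure-Python sketch trains a tiny logistic classifier on labelled feature pairs. The two input features (for example, one clinical parameter score and one path score) and the model choice are assumptions made for brevity, not the disclosure's actual architecture.

```python
import math

def train_logistic(samples, labels, epochs=2000, lr=0.5):
    """Stochastic gradient descent on log-loss for a 2-feature logistic model."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
            g = p - y  # derivative of log-loss w.r.t. the pre-activation
            w[0] -= lr * g * x0
            w[1] -= lr * g * x1
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = disorder suspected, 0 = normal, by the sign of the decision value."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

On linearly separable training data this converges to a separating boundary; a real system would replace this with a deeper network trained on image features.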
As described above, the algorithms applied to the position designation unit 120, the position tracking unit 140, and the reading unit 150 may all be artificial-intelligence algorithms based on machine learning. A plurality of algorithms may exist individually, one for each of the tasks of the position designation unit 120, the position tracking unit 140, and the reading unit 150, or a single integrated algorithm encompassing all three units may be used.
Because the swallowing study apparatus 1 according to an embodiment of the present invention performs the videofluoroscopic swallowing study automatically by means of such artificial-intelligence algorithms, it can reduce the workload of medical personnel and secure objectivity in the reading.
Hereinafter, with reference to a different drawing, a method of performing a videofluoroscopic swallowing study using the swallowing study apparatus 1 according to an embodiment of the present invention (hereinafter, the swallowing study method) will be described.
FIG. 8 is a flowchart of a method of performing a videofluoroscopic swallowing study using the swallowing study apparatus 1 according to an embodiment of the present invention.
Referring to FIG. 8, the swallowing study method according to an embodiment of the present invention may include: training an algorithm on a plurality of images (S10); inputting a videofluoroscopic image in which the movement of body tissues during food intake has been captured (S20); designating, in the videofluoroscopic image, anatomical positions Sp corresponding to standard anatomical positions St, together with the position of the food (S30); setting the coordinates of the anatomical positions Sp and of the food (S40); tracking the extent of movement of the anatomical positions Sp and the position of the food (S50); and reading whether a swallowing disorder is present by analyzing the extent of movement of the anatomical positions Sp and the movement path of the food (S60).
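The data flow of steps S30 to S60 can be sketched schematically. Each stage function below is a placeholder standing in for the corresponding unit; only the way the outputs chain together is meant to be illustrative.

```python
# Each stage function is a placeholder for the corresponding unit; only the
# chaining of outputs (S30 -> S40 -> S50 -> S60) is illustrative.
def run_swallowing_study(frames, model):
    positions = [model["locate"](f) for f in frames]      # S30: designate positions
    coords = [model["to_coords"](p) for p in positions]   # S40: set coordinates
    track = model["track"](coords)                        # S50: track movement
    return model["read"](track)                           # S60: read the result
```

Supplying trivial stand-ins for each stage is enough to exercise the skeleton end to end.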
In the step of training the algorithm on a plurality of images (S10), a plurality of videofluoroscopic images V captured for swallowing studies are provided to the algorithm as training data. The algorithm may be trained by constructing an artificial neural network and may update itself by reflecting the results experienced during the training process. The algorithm can also perform its task by extracting features from the plurality of videofluoroscopic images V on its own.
In the step of inputting a videofluoroscopic image V in which the movement of body tissues during food intake has been captured (S20), the image V may be input in real time from a separate videofluoroscopic imaging device, or a previously recorded image may be input.
In the step of designating, in the videofluoroscopic image V, the anatomical positions Sp corresponding to the standard anatomical positions St (S30), positions corresponding to the preselected standard anatomical positions St are found and matched in the videofluoroscopic image, which may differ from person to person according to individual body structure. As described above, this designation is performed by an algorithm trained through machine learning, which secures both speed and objectivity.
In the step of setting the coordinates of the anatomical positions Sp (S40), the relative coordinates of the anatomical positions Sp designated in step S30 are set with respect to a specific reference point. The coordinates set in this way may be used by the position tracking unit 140 when tracking the anatomical positions Sp and the position Sf of the food.
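A sketch of this coordinate-setting step, assuming 2D pixel positions and using one designated landmark as the origin (the disclosure names landmarks such as the C4 vertebra; choosing it as the origin here is our assumption):

```python
def to_relative(points: dict, origin_key: str) -> dict:
    """Re-express every (x, y) position relative to the chosen landmark."""
    ox, oy = points[origin_key]
    return {name: (x - ox, y - oy) for name, (x, y) in points.items()}
```

For example, with the reference landmark at pixel (100, 200) and the hyoid at (110, 180), the hyoid's relative position becomes (10, -20), independent of where the patient sits in the frame.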
In the step of tracking the extent of movement of the anatomical positions Sp and the position of the food (S50), the movement of the anatomical positions Sp and the movement path of the food over time are tracked between selected frames segmented from the videofluoroscopic image. In particular, by tracking the movement of the anatomical positions Sp, the clinical parameters to be used in the analysis by the reading unit 150, described below, can be derived.
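One clinical parameter of the kind mentioned here might be derived as in the following sketch, which takes the per-frame coordinates of a single anatomical position and reports its maximum displacement from the first frame (treating the first frame as the resting position is an assumption for illustration):

```python
import math

def max_displacement(track: list) -> float:
    """Maximum Euclidean distance of a tracked position from its first frame."""
    x0, y0 = track[0]
    return max(math.hypot(x - x0, y - y0) for x, y in track)
```

A value like this, computed for the hyoid over the swallow, is the sort of scalar the reading unit could compare against a reference range.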
In the step of reading whether a swallowing disorder is present by analyzing the extent of movement of the anatomical positions Sp and the movement path of the food (S60), the artificial-intelligence algorithm trained through machine learning analyzes the videofluoroscopic image to read for a swallowing disorder. Here, the reading unit 150 may read the result by comparing the clinical parameters obtained from the position tracking unit 140 with clinical parameter reference values, or by checking the overall movement path of the food against the standard path model of FIG. 7. In an embodiment of the present invention, however, the method by which the reading unit 150 determines a swallowing disorder is not limited thereto, and other data values extracted from the videofluoroscopic image may also be considered.
While an embodiment of the present invention has been described above, the spirit of the present invention is not limited to the embodiments presented herein, and those skilled in the art who understand this spirit will readily be able to propose other embodiments within the same scope by adding, changing, or deleting components; such embodiments also fall within the scope of the present invention.
Claims (9)
- 1. A videofluoroscopic swallowing study reading apparatus, comprising: an image input unit to which a plurality of videofluoroscopic images, in which movements of anatomical positions during a test subject's food intake are sequentially captured, are input; an anatomical position designation unit configured to designate, in the plurality of videofluoroscopic images, the anatomical positions so as to correspond to preset standard anatomical positions, and to designate a position of food relative to the anatomical positions; a coordinate setting unit configured to set coordinates of the anatomical positions and of the food position in the plurality of videofluoroscopic images; a position tracking unit configured to track an extent to which the anatomical positions move and a movement path of the food using the coordinates of the anatomical positions and of the food position; and a reading unit configured to read whether a swallowing disorder is present by analyzing the extent to which the anatomical positions move and the movement path of the food.
- 2. The apparatus of claim 1, wherein at least one of the image input unit, the position designation unit, the coordinate setting unit, the position tracking unit, and the reading unit uses an algorithm trained through machine learning.
- 3. The apparatus of claim 2, wherein the algorithm performs the machine learning by constructing an artificial neural network.
- 4. The apparatus of claim 1, wherein the standard anatomical positions comprise at least one of the tongue, hard palate, soft palate, mandibular angle, epiglottis, valleculae, laryngeal vestibule, C4 vertebra, pyriform sinus, upper esophageal sphincter (UES), vocal cords, esophagus, and trachea.
- 5. The apparatus of claim 1, wherein the position tracking unit accumulates the anatomical positions and the food position contained in frames segmented from the videofluoroscopic images.
- 6. The apparatus of claim 1, wherein the reading unit's swallowing-disorder reading result is one of airway aspiration, nasal regurgitation, and normal.
- 7. The apparatus of claim 1, wherein the reading unit reads whether the swallowing disorder is present by comparing the extent of movement of the coordinates against predetermined clinical parameters.
- 8. The apparatus of claim 1, wherein the reading unit reads whether the swallowing disorder is present by comparing the movement path of the food with a standard path model.
- 9. A method of reading a videofluoroscopic swallowing study using a machine-learning-based algorithm, comprising: training the algorithm by inputting a plurality of videofluoroscopic images; inputting a videofluoroscopic image in which movement of anatomical positions during a test subject's food intake is captured; designating, in the videofluoroscopic image, the anatomical positions so as to correspond to preset standard anatomical positions, and designating a position of food relative to the anatomical positions; setting coordinates of the anatomical positions and of the food position; tracking an extent to which the anatomical positions move and the position of the food; and reading whether a swallowing disorder is present by analyzing the extent to which the anatomical positions move and a movement path of the food.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020190117391A KR102094828B1 (en) | 2019-09-24 | 2019-09-24 | Apparatus and Method for Videofluoroscopic Swallowing Study |
KR10-2019-0117391 | 2019-09-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021060700A1 true WO2021060700A1 (en) | 2021-04-01 |
Family
ID=70467593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/010735 WO2021060700A1 (en) | 2019-09-24 | 2020-08-13 | Apparatus and method for confirming videofluoroscopic swallowing study |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102094828B1 (en) |
WO (1) | WO2021060700A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102418073B1 (en) * | 2020-06-08 | 2022-07-06 | 고려대학교 산학협력단 | Apparatus and method for artificial intelligence based automatic analysis of video fluoroscopic swallowing study |
KR102430526B1 (en) | 2020-07-13 | 2022-08-05 | 단국대학교 산학협력단 | Method and apparatus for detecting dysphagia automatically based on machine learning |
KR102595012B1 (en) * | 2021-02-10 | 2023-10-26 | 서울대학교산학협력단 | Apparatus and method for tracking hyoid bone |
KR102631213B1 (en) | 2021-02-26 | 2024-02-06 | 동서대학교 산학협력단 | Diagnostic device for dysphagia using near infrared technology |
KR102650787B1 (en) * | 2021-05-17 | 2024-03-25 | 단국대학교 산학협력단 | Method to track trajectory of hyoid bone |
KR102551723B1 (en) * | 2021-06-10 | 2023-07-06 | 영남대학교 산학협력단 | Apparatus for determining presence of penetration or aspiration of dysphagia patient using VFSS and method thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6060076B2 (en) * | 1977-12-28 | 1985-12-27 | 日本電気株式会社 | voice recognition device |
KR20150118484A (en) * | 2014-04-14 | 2015-10-22 | 삼성전자주식회사 | Method and Apparatus for medical image registration |
US9782118B2 (en) * | 2013-05-17 | 2017-10-10 | Wisconsin Alumni Research Foundation | Diagnosis of swallowing disorders using high resolution manometry |
KR101912569B1 (en) * | 2018-07-11 | 2018-10-26 | 전북대학교산학협력단 | The object tracking system of video images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MX2011007578A (en) * | 2009-01-15 | 2011-08-04 | Nestec Sa | Methods of diagnosing and treating dysphagia. |
IN2014CN00645A (en) * | 2011-06-30 | 2015-04-03 | Meiji Co Ltd |
- 2019-09-24: Application KR1020190117391A filed in KR; granted as patent KR102094828B1 (active, IP right grant)
- 2020-08-13: PCT application PCT/KR2020/010735 filed as WO2021060700A1 (active, application filing)
Non-Patent Citations (1)
Title |
---|
ZHANG ZHENWEI, COYLE JAMES L., SEJDIĆ ERVIN: "Automatic hyoid bone detection in fluoroscopic images using deep learning", SCIENTIFIC REPORTS, vol. 8, no. 1, 1 December 2018 (2018-12-01), XP055793816, DOI: 10.1038/s41598-018-30182-6 * |
Also Published As
Publication number | Publication date |
---|---|
KR102094828B1 (en) | 2020-04-27 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20867308; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 20867308; Country of ref document: EP; Kind code of ref document: A1 |