WO2022158689A1 - System for analyzing clinical connection of subjective pain through voice analysis for preliminary questionnaire - Google Patents


Info

Publication number
WO2022158689A1
WO2022158689A1 PCT/KR2021/016726 KR2021016726W
Authority
WO
WIPO (PCT)
Prior art keywords
analysis
unit
information
voice
pain
Prior art date
Application number
PCT/KR2021/016726
Other languages
French (fr)
Korean (ko)
Inventor
김진성
Original Assignee
가톨릭대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 가톨릭대학교 산학협력단
Publication of WO2022158689A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages

Definitions

  • The present invention relates to a system for analyzing the clinical linkage of subjective pain through voice analysis for preliminary questionnaires, and more particularly, to such a system for improving the accuracy of preliminary questionnaires for spinal pain.
  • During the interview, the doctor examines the patient's X-ray, MRI, and CT images and determines the patient's condition and treatment plan with reference to the patient interview.
  • However, even if the doctor identifies several lesions in the images, some may be asymptomatic, some mildly symptomatic, and others the main symptomatic lesions.
  • Because symptoms appear differently depending on the location and nature of each lesion, the doctor must identify through the interview which part the patient finds most uncomfortable, and finding which of the lesions requires the most urgent treatment has also taken considerable time.
  • In addition, the doctor must decide whether the main symptomatic lesion requires surgery or whether non-surgical treatment is sufficient, and making such complex decisions accurately within a short outpatient visit of three to five minutes is practically difficult.
  • An object of the present invention, devised to solve the above problems, is to provide a system for analyzing the clinical linkage of subjective pain through voice analysis for preliminary questionnaires in order to improve the diagnostic accuracy of artificial intelligence.
  • To achieve the above object, the present invention comprises: a chatbot module provided to acquire the analysis data necessary to determine the condition of the examinee; a server module provided to analyze the analysis data acquired by the chatbot module and derive diagnostic data; and a terminal module provided to display the diagnostic data derived by the server module to the examiner.
  • the analysis data may include questionnaire information, audio information, and image information.
  • the chatbot module includes: a chatbot communication unit provided to receive the questionnaire information from the server module and provide the analysis data to the server module; a display unit provided to display the questionnaire information provided from the chatbot communication unit to the examinee; a speaker unit provided to read the questionnaire information to the subject; and a voice recognition unit configured to acquire questionnaire information by recognizing the examinee's answer, and to obtain voice information recorded by the examinee's voice.
  • the chatbot module may further include a photographing unit provided to acquire image information by recording the behavior and facial expressions of the examinee during the interview.
  • the server module includes: a server communication unit provided to receive analysis data from the chatbot module; a questionnaire analysis unit provided to analyze the questionnaire information; a voice analyzer provided to analyze the voice information; a behavior analysis unit provided to analyze the behavior of the examinee from the image information; an expression analysis unit provided to analyze the expression of the subject in the image information; and a diagnosing unit configured to generate expected disease information by synthesizing the analysis results of the questionnaire analysis unit, the voice analysis unit, the behavior analysis unit, and the expression analysis unit to diagnose the condition of the subject.
  • the questionnaire analysis unit may be configured to calculate a pain score for each body part according to a pain-related questionnaire, a visual analogue scale (VAS) score, and an Oswestry Disability Index (ODI) score.
  • the voice analysis unit may be configured to analyze the tone of the examinee's voice from the voice information and additionally calculate a pain weight for each body part from the pain score.
  • the behavior analysis unit may be configured to analyze the behavior of the examinee from the image information and additionally calculate a pain weight for each body part from the pain score.
  • the expression analysis unit may be configured to analyze the facial expression of the examinee from the image information and additionally calculate a pain weight for each body part from the pain score.
  • The diagnosis unit may be provided to generate the predicted disease information by adding, to the pain score for each body part calculated by the questionnaire analysis unit, the pain weights for each body part additionally calculated according to the analysis results of the voice analysis unit, the behavior analysis unit, and the expression analysis unit.
  • The terminal module includes: a terminal communication unit that receives the diagnosed predicted disease information from the server module; and an information display unit for displaying the predicted disease information provided to the terminal communication unit to the examiner.
  • The terminal module may further include an input unit provided for the examiner to input diagnostic condition information on the actual condition of the examinee as diagnosed.
  • The server module may further include a learning unit configured to perform machine learning by comparing the diagnostic condition information input to the input unit with the predicted disease information derived by the diagnosis unit.
  • the input unit may be provided to further input personality information of the examinee, and the diagnosis unit may be provided to adjust a pain weight according to the personality information.
  • Since the diagnosis unit calculates the predicted disease information by incorporating the examinee's voice, facial expression, and behavior, the condition of the examinee can be predicted more accurately.
  • FIG. 1 is an exemplary configuration diagram of a system for analyzing clinical linkage of subjective pain through voice analysis for pre-questionnaires according to an embodiment of the present invention.
  • the clinical linkage analysis system 100 of subjective pain through voice analysis for pre-interview may include a chatbot module 110 , a server module 120 , and a terminal module 130 .
  • The chatbot module 110 is provided to acquire the analysis data necessary to determine the condition of the examinee, and may include a chatbot communication unit 111, a display unit 112, a speaker unit 113, a voice recognition unit 114, and a photographing unit 115.
  • the analysis data may include questionnaire information, audio information, and image information.
  • the chatbot communication unit 111 may be provided to receive the questionnaire information from the server module 120 and provide the analysis data to the server module 120 .
  • The chatbot communication unit 111 may be provided to receive, from the server module 120, a visual analogue scale (VAS) questionnaire, an Oswestry Disability Index (ODI) questionnaire, and other pain-related questionnaires containing the questions necessary to understand the condition of the examinee.
  • The chatbot communication unit 111 may provide the received questionnaires to the display unit 112 and the speaker unit 113, and may be provided to transmit the information received from the voice recognition unit 114 and the photographing unit 115 to the server module 120.
  • the display unit 112 may be provided to display the questionnaire information provided from the chatbot communication unit 111 to the examinee.
  • The display unit 112 may be provided as a touch screen so that, in addition to showing the questionnaire to the examinee, the examinee can directly select an answer.
  • the speaker unit 113 may be provided to read the questionnaire information to the examinee.
  • the speaker unit 113 may read the questionnaire displayed on the display unit 112 to the examinee and guide the subject to answer.
  • The speaker unit 113 thus provided can elicit a range of facial-expression and behavior changes by leading the examinee into a natural conversation, as in an actual face-to-face interview.
  • the voice recognition unit 114 may be provided to obtain the questionnaire information by recognizing the answer of the examinee, and to obtain the voice information recorded by the examinee's voice.
  • the photographing unit 115 may be provided to obtain image information by recording the behavior and facial expressions of the examinee during the interview.
  • the obtained questionnaire information, voice information, and image information may be transmitted to the server module 120 by the chatbot communication unit 111 .
  • The server module 120 is provided to analyze the analysis data obtained by the chatbot module 110 to derive diagnostic data, and may include a server communication unit 121, a questionnaire analysis unit 122, a voice analysis unit 123, a behavior analysis unit 124, an expression analysis unit 125, a diagnosis unit 126, and a learning unit 127.
  • the server communication unit 121 may be provided to receive analysis data from the chatbot module 110 .
  • the server communication unit 121 may receive questionnaire information, voice information, and image information from the chatbot communication unit 111 .
  • The server communication unit 121 may provide the questionnaire information to the questionnaire analysis unit 122, and may provide the voice information to the voice analysis unit 123.
  • the server communication unit 121 may provide the image information to the behavior analysis unit 124 and the expression analysis unit 125 .
  • the questionnaire analysis unit 122 may be provided to analyze the questionnaire information.
  • the questionnaire analysis unit 122 may be provided to provide the contents of a visual analogue scale (VAS) questionnaire, an ODI (Oswestry Disability Index) questionnaire, and other questionnaires to the chatbot module 110 through the server communication unit 121 .
  • the questionnaire analysis unit 122 may be provided to receive the examinee's answer to the questionnaire analyzed by the voice analysis unit 123 .
  • When the examinee directly selects an answer to the questionnaire on the display unit 112, the questionnaire analysis unit 122 may be provided to receive the answer directly from the server communication unit 121 without going through the voice analysis unit 123.
  • The questionnaire analysis unit 122 thus provided may be provided to calculate a pain score for each body part according to the visual analogue scale (VAS) score and the Oswestry Disability Index (ODI) score answered by the examinee.
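As an illustration of this scoring step, the following sketch converts a VAS answer and ODI item answers into per-body-part pain scores. All field names, scales, and the rule combining the two scores are assumptions for illustration; the patent does not specify concrete formulas.

```python
# Illustrative sketch of per-body-part pain scoring from VAS and ODI answers.
# The body parts, scales, and the combination rule are assumptions; the
# patent does not specify exact formulas.

def pain_scores(vas, odi_answers, part_flags):
    """vas: VAS score on a 0-10 scale; odi_answers: ODI item scores (0-5 each);
    part_flags: dict mapping body part -> True if the part was marked painful."""
    odi_pct = 100.0 * sum(odi_answers) / (5 * len(odi_answers))  # standard ODI percentage
    base = vas + odi_pct / 10.0      # assumed way of combining the two scales
    return {part: (base if flagged else 0.0)
            for part, flagged in part_flags.items()}

scores = pain_scores(
    vas=7,
    odi_answers=[3, 4, 2, 3, 5, 4, 3, 2, 4, 3],   # 10 ODI sections
    part_flags={"waist": True, "back_calf": True, "neck": False},
)
# scores["waist"] and scores["back_calf"] are 13.6; scores["neck"] is 0.0
```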
  • the voice analyzer 123 may be provided to analyze the voice information.
  • the voice analysis unit 123 may be provided to analyze the answer to the examinee's questionnaire, derive questionnaire information, and then provide it to the questionnaire analysis unit.
  • the voice analysis unit 123 may be provided to analyze the tone of the examinee's voice from the voice information and additionally calculate a pain weight for each body part from the pain score.
  • The voice analysis unit 123 may be provided to additionally calculate a pain weight when, in a specific answer of the questionnaire, the examinee's voice tone abruptly rises or falls compared to the overall average.
  • The voice analysis unit 123 may also be provided to calculate a higher weight for a body part that is repeatedly mentioned in the examinee's answers.
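The two voice heuristics described above (tone deviation from the overall average, and repeated mention of a body part) can be sketched as follows. The pitch values, thresholds, and weight amounts are illustrative assumptions, not values from the patent.

```python
# Sketch of the voice-based weighting: answers whose pitch deviates sharply
# from the examinee's overall average, and body parts mentioned repeatedly,
# receive extra pain weight. All numeric parameters are assumptions.

def voice_weights(answers, deviation_ratio=0.15, spike_weight=1.0, mention_weight=0.5):
    """answers: list of (body_part, mean_pitch_hz, mention_count), one per answer."""
    overall = sum(pitch for _, pitch, _ in answers) / len(answers)
    weights = {}
    for part, pitch, mentions in answers:
        w = 0.0
        if abs(pitch - overall) > deviation_ratio * overall:  # abrupt rise or fall
            w += spike_weight
        if mentions > 1:                                      # repeatedly mentioned part
            w += mention_weight * (mentions - 1)
        weights[part] = weights.get(part, 0.0) + w
    return weights

w = voice_weights([("waist", 180.0, 3), ("sole", 120.0, 1), ("neck", 150.0, 1)])
# w == {"waist": 2.0, "sole": 1.0, "neck": 0.0}
```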
  • the behavior analysis unit 124 may be provided to analyze the behavior of the examinee in the image information.
  • the behavior analysis unit 124 may be provided to analyze the behavior of the examinee in the image information and additionally calculate a pain weight for each body part from the pain score.
  • The behavior analysis unit 124 may be provided to additionally calculate the pain weight for the body part corresponding to a question when the image information shows a large change in posture during a specific answer, such as frequent hand movements.
  • the expression analysis unit 125 may be provided to analyze the expression of the subject in the image information.
  • the facial expression analysis unit 125 may be provided to analyze the facial expression of the examinee from the image information and additionally calculate a pain weight for each body part from the pain score.
  • the facial expression analysis unit 125 may be provided to additionally calculate a pain weight for a body part corresponding to a specific question when a facial expression change, such as a frown, occurs in a specific question during the questionnaire.
  • The diagnosis unit 126 may be provided to generate expected disease information by synthesizing the analysis results of the questionnaire analysis unit 122, the voice analysis unit 123, the behavior analysis unit 124, and the expression analysis unit 125 to diagnose the condition of the examinee.
  • The diagnosis unit 126 may be provided to generate the predicted disease information by adding, to the pain score for each body part calculated by the questionnaire analysis unit 122, the pain weights for each body part additionally calculated according to the analysis results of the voice analysis unit 123, the behavior analysis unit 124, and the expression analysis unit 125.
  • For example, a high pain score may be assigned to the buttocks, pelvis, side of the thigh, back of the thigh, side of the calf, waist, back of the calf, instep, sole of the foot, big toe, and so on.
  • For example, when pain weights are added to the pain scores of the waist, sacrum, pelvis, back of the thigh, back of the calf, and sole of the foot, the diagnosis unit 126 may generate expected disease information indicating that the most urgent region corresponds to the fifth lumbar vertebra and the first sacral vertebra, and that a herniated lumbar disc is suspected.
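The summation step can be sketched as follows: per-body-part weights from the voice, behavior, and expression analyses are added to the base questionnaire scores, and the part with the highest total is flagged as most urgent. The numbers and the simple max rule are illustrative assumptions; the patent does not state a concrete aggregation algorithm.

```python
# Sketch of the diagnosis unit's aggregation: base questionnaire scores plus
# weights from the voice, behavior, and expression analyses. Values below are
# illustrative assumptions.

def diagnose(base_scores, *weight_dicts):
    totals = dict(base_scores)
    for weights in weight_dicts:
        for part, w in weights.items():
            totals[part] = totals.get(part, 0.0) + w
    most_urgent = max(totals, key=totals.get)   # highest combined score
    return totals, most_urgent

totals, urgent = diagnose(
    {"waist": 13.6, "back_calf": 13.6, "neck": 0.0},  # questionnaire pain scores
    {"waist": 2.0},                                   # voice weights
    {"waist": 1.0, "back_calf": 0.5},                 # behavior weights
    {"waist": 0.5},                                   # expression weights
)
# urgent == "waist"; totals["waist"] == 17.1
```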
  • In this way, the server module 120 is provided to diagnose the examinee's condition, and to identify the part requiring the most urgent treatment, based not only on the questionnaire results but also on changes in the examinee's voice, behavior, and facial expression.
  • The diagnosis unit 126 may be provided to recommend possible treatment methods according to the pain level of each body part in the predicted disease information.
  • The terminal module 130 is provided to display the diagnostic data derived by the server module 120 to the examiner, and may include a terminal communication unit 131, an information display unit 132, and an input unit 133.
  • the terminal communication unit 131 may be provided to receive the diagnosed predicted disease information from the server module 120 .
  • The terminal communication unit 131 may be provided to receive the predicted disease information, which is the diagnostic data, from the server module 120 and provide it to the information display unit 132, and to transmit the information input to the input unit 133 to the server module 120.
  • The information display unit 132 may be provided to display the predicted disease information provided to the terminal communication unit 131 to the examiner.
  • The information display unit may be any electronic device with a screen, such as a computer, mobile phone, notebook computer, or monitor.
  • Since the examiner can anticipate the patient's condition and painful region from the predicted disease information displayed on the information display unit before starting the interview, the examinee's condition can be diagnosed and a treatment plan determined more quickly.
  • The input unit 133 may be provided for the examiner to input diagnostic condition information on the actual condition of the examinee as diagnosed.
  • the input unit 133 may be provided to further input the treatment method, the personality of the examinee, and the like.
  • Here, the personality refers to the examinee's degree of reaction to pain; for example, how severely the examinee exaggerates relative to the expected pain may be entered as a score or the like.
  • the terminal communication unit 131 may be provided to provide the diagnostic condition information, the personality information of the examinee, a treatment method, and the like from the input unit 133 to the server communication unit 121 .
  • the server communication unit 121 may be provided to provide the diagnostic condition information and treatment method to the learning unit 127 .
  • the learning unit 127 may be provided to perform machine learning by comparing the diagnostic condition information input to the input unit 133 provided in the terminal module 130 with the expected condition information derived by the diagnosis unit 126 .
  • The learning unit 127 is provided to perform machine learning by comparing the recommended treatment method with the treatment method determined by the examiner, and by updating the diagnosis unit 126 according to the learning result, it can help the diagnosis unit make more accurate diagnoses.
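The comparison that feeds this learning can be sketched minimally as follows; the patent does not specify the learning algorithm, so the data shape and the accuracy metric below are illustrative assumptions.

```python
# Minimal sketch of the learning unit's comparison step: predicted disease
# information is paired with the doctor's actual diagnosis, and the
# accumulated pairs form training data for updating the diagnosis unit.
# The data shape and metric are illustrative assumptions.

class LearningUnit:
    def __init__(self):
        self.samples = []          # (predicted, actual) training pairs

    def record(self, predicted, actual):
        self.samples.append((predicted, actual))

    def accuracy(self):
        """Fraction of cases where prediction matched the actual diagnosis."""
        if not self.samples:
            return 0.0
        hits = sum(1 for p, a in self.samples if p == a)
        return hits / len(self.samples)

lu = LearningUnit()
lu.record("lumbar_disc_herniation_L5_S1", "lumbar_disc_herniation_L5_S1")
lu.record("spinal_stenosis", "lumbar_disc_herniation_L4_L5")
# lu.accuracy() == 0.5
```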
  • The server communication unit 121 provides the personality information of the examinee to the diagnosis unit 126, so that the diagnosis unit 126 may be provided to adjust the pain weight according to the personality information; for example, if the examinee tends to exaggerate pain, the pain weight may be lowered.
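This personality-based adjustment can be sketched as follows; the 1-to-5 exaggeration scale and the linear scaling rule are illustrative assumptions, as the patent only states that the weight may be lowered.

```python
# Sketch of the personality-based adjustment: an examinee who tends to
# exaggerate pain has their pain weights scaled down. The scale and the
# scaling factor are illustrative assumptions.

def adjust_weights(weights, exaggeration, neutral=3):
    """exaggeration: 1 (tends to understate pain) .. 5 (strongly exaggerates)."""
    factor = 1.0 - 0.15 * (exaggeration - neutral)  # lower weights for exaggerators
    return {part: w * factor for part, w in weights.items()}

adjusted = adjust_weights({"waist": 2.0, "sole": 1.0}, exaggeration=5)
# adjusted == {"waist": 1.4, "sole": 0.7} (weights scaled by 0.7)
```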
  • The present invention thus provided may be arranged so that artificial intelligence diagnoses the examinee's condition by comprehensively judging not only the questionnaire results but also changes in the examinee's voice, behavior, and facial expression, and identifies the part that most urgently needs treatment.
  • Accordingly, the examiner can anticipate the patient's condition and painful region before starting the interview, accurately diagnose the examinee's condition within a shorter outpatient visit, and determine the most appropriate treatment plan.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention relates to a system for analyzing the clinical connection of subjective pain through voice analysis for a preliminary questionnaire and, more specifically, to a system for analyzing the clinical connection of subjective pain through voice analysis for a preliminary questionnaire in order to improve the accuracy of a preliminary questionnaire for spondylalgia. To this end, the present invention provides a system for analyzing the clinical connection of subjective pain through voice analysis for a preliminary questionnaire, comprising: a chatbot module for acquiring analysis data needed to identify the nature of a disease of an examinee; a server module for deriving diagnosis data by analyzing the analysis data acquired by means of the chatbot module; and a terminal module for displaying, to an examiner, the diagnosis data derived by means of the server module.

Description

Clinical linkage analysis system of subjective pain through voice analysis for preliminary questionnaires
The present invention relates to a system for analyzing the clinical linkage of subjective pain through voice analysis for preliminary questionnaires, and more particularly, to such a system for improving the accuracy of preliminary questionnaires for spinal pain.
As smartphone and computer use increases across all age groups and people spend more time sitting, the number of patients with spinal diseases is rising sharply every year.
Accordingly, the number of people visiting hospitals because of spinal diseases is also increasing every year.
In outpatient care for a patient who has visited the hospital for spinal pain, the interview between the doctor and the patient is a necessary step for identifying what the patient finds most uncomfortable.
In the interview, the doctor examines the patient's X-ray, MRI, and CT images and determines the patient's condition and treatment plan with reference to the patient interview.
However, even if the doctor identifies five lesions in the images, some may be asymptomatic, some mildly symptomatic, and others the main symptomatic lesions. That is, because symptoms appear differently depending on the location and nature of each lesion, the doctor must identify through the interview which part the patient finds most uncomfortable, and finding which of the several lesions requires the most urgent treatment has also taken considerable time.
In addition, the doctor must decide whether the main symptomatic lesion requires surgery or whether non-surgical treatment is sufficient, and making such complex decisions accurately within a short outpatient visit of three to five minutes has been practically difficult.
<Prior art literature>
Korean Patent Publication No. 10-2020-0123593
An object of the present invention, devised to solve the above problems, is to provide a system for analyzing the clinical linkage of subjective pain through voice analysis for preliminary questionnaires in order to improve the diagnostic accuracy of artificial intelligence.
The technical problems to be achieved by the present invention are not limited to those mentioned above, and other technical problems not mentioned will be clearly understood from the description below by those of ordinary skill in the art to which the present invention belongs.
To achieve the above object, the present invention provides a system for analyzing the clinical linkage of subjective pain through voice analysis for preliminary questionnaires, comprising: a chatbot module provided to acquire the analysis data necessary to determine the condition of the examinee; a server module provided to analyze the analysis data acquired by the chatbot module and derive diagnostic data; and a terminal module provided to display the diagnostic data derived by the server module to the examiner.
In an embodiment of the present invention, the analysis data may include questionnaire information, voice information, and image information.
In an embodiment of the present invention, the chatbot module may include: a chatbot communication unit provided to receive the questionnaire information from the server module and to provide the analysis data to the server module; a display unit provided to display the questionnaire information provided from the chatbot communication unit to the examinee; a speaker unit provided to read the questionnaire information to the examinee; and a voice recognition unit provided to acquire the questionnaire information by recognizing the examinee's answers and to acquire voice information in which the examinee's voice is recorded.
In an embodiment of the present invention, the chatbot module may further include a photographing unit provided to acquire image information by recording the behavior and facial expressions of the examinee during the interview.
In an embodiment of the present invention, the server module may include: a server communication unit provided to receive the analysis data from the chatbot module; a questionnaire analysis unit provided to analyze the questionnaire information; a voice analysis unit provided to analyze the voice information; a behavior analysis unit provided to analyze the behavior of the examinee from the image information; an expression analysis unit provided to analyze the facial expression of the examinee from the image information; and a diagnosis unit provided to generate expected disease information by synthesizing the analysis results of the questionnaire analysis unit, the voice analysis unit, the behavior analysis unit, and the expression analysis unit to diagnose the condition of the examinee.
In an embodiment of the present invention, the questionnaire analysis unit may be provided to calculate a pain score for each body part according to a pain-related questionnaire, a visual analogue scale (VAS) score, and an Oswestry Disability Index (ODI) score.
In an embodiment of the present invention, the voice analysis unit may be provided to analyze the tone of the examinee's voice from the voice information and to additionally calculate a pain weight for each body part on top of the pain score.
In an embodiment of the present invention, the behavior analysis unit may be provided to analyze the behavior of the examinee from the image information and to additionally calculate a pain weight for each body part on top of the pain score.
In an embodiment of the present invention, the expression analysis unit may be provided to analyze the facial expression of the examinee from the image information and to additionally calculate a pain weight for each body part on top of the pain score.
In an embodiment of the present invention, the diagnosis unit may be provided to generate the expected disease information by adding, to the pain score for each body part calculated by the questionnaire analysis unit, the pain weights for each body part additionally calculated according to the analysis results of the voice analysis unit, the behavior analysis unit, and the expression analysis unit.
본 발명의 실시예에 있어서, 상기 단말모듈은, 상기 서버모듈로부터 진단된 예상 병증 정보를 제공받는 단말통신부; 및 상기 단말통신부에 제공된 예상 병증 정보를 검진자에게 표시하는 정보표시부를 포함하는 것을 특징으로 할 수 있다.In an embodiment of the present invention, the terminal module includes: a terminal communication unit that receives information about the predicted disease diagnosed from the server module; and an information display unit for displaying the expected disease information provided to the terminal communication unit to the examinee.
본 발명의 실시예에 있어서, 상기 단말모듈은, 검진자로부터 진단된 피검진자의 실제 병증에 대한 진단 병증 정보를 입력하도록 마련된 입력부를 더 포함하는 것을 특징으로 할 수 있다.In an embodiment of the present invention, the terminal module may further include an input unit provided to input diagnostic condition information on an actual condition of the examinee diagnosed by the examinee.
본 발명의 실시예에 있어서, 상기 서버모듈은, 상기 입력부에 입력된 상기 진단 병증 정보와 진단부에 의해 도출된 예상 병증 정보를 비교하여 머신러닝하도록 마련된 학습부를 더 포함하는 것을 특징으로 할 수 있다.In an embodiment of the present invention, the server module may further include a learning unit configured to perform machine learning by comparing the diagnostic condition information input to the input unit with the predicted condition information derived by the diagnosis unit. .
본 발명의 실시예에 있어서, 상기 입력부에는 피검진자의 성격 정보가 더 입력되도록 마련되며, 상기 진단부는 상기 성격 정보에 따라 통증 가중치를 조정하도록 마련된 것을 특징으로 할 수 있다.In an embodiment of the present invention, the input unit may be provided to further input personality information of the examinee, and the diagnosis unit may be provided to adjust a pain weight according to the personality information.
With the configuration described above, the present invention makes it possible to easily grasp the degree of pain in each body part of a patient through a preliminary questionnaire conducted before the outpatient consultation with the examinee.
Because the diagnosis unit incorporates the examinee's voice, facial expressions, and behavior when producing the predicted-condition information, the examinee's condition can be predicted more accurately.
Because the doctor can begin the outpatient consultation with a better understanding of the examinee's condition, the lesion that most urgently requires treatment and the appropriate treatment method can be determined more accurately within a short consultation time.
The effects of the present invention are not limited to those described above and should be understood to include all effects that can be inferred from the configuration of the invention described in the detailed description or the claims.
FIG. 1 is an exemplary configuration diagram of a system for analyzing the clinical relevance of subjective pain through voice analysis for a preliminary questionnaire according to an embodiment of the present invention.
In a most preferred embodiment according to the present invention, the system includes: a chatbot module configured to acquire the analysis data needed to determine an examinee's condition; a server module configured to derive diagnostic data by analyzing the analysis data acquired by the chatbot module; and a terminal module configured to display the diagnostic data derived by the server module to an examiner.
Hereinafter, the present invention will be described with reference to the accompanying drawings. The present invention may, however, be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted for clarity, and like reference numerals denote like parts throughout the specification.
Throughout the specification, when a part is said to be "connected (joined, in contact, coupled)" to another part, this includes not only cases where it is "directly connected" but also cases where it is "indirectly connected" with another member interposed between them. Likewise, when a part is said to "include" a component, this means that, unless stated otherwise, other components are not excluded but may be further provided.
The terms used herein are used only to describe specific embodiments and are not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "comprise" or "have" are intended to specify the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should not be understood to preclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is an exemplary configuration diagram of a system for analyzing the clinical relevance of subjective pain through voice analysis for a preliminary questionnaire according to an embodiment of the present invention.
Referring to FIG. 1, the system 100 for analyzing the clinical relevance of subjective pain through voice analysis for a preliminary questionnaire may include a chatbot module 110, a server module 120, and a terminal module 130.
The chatbot module 110 is configured to acquire the analysis data needed to determine the examinee's condition, and may include a chatbot communication unit 111, a display unit 112, a speaker unit 113, a voice recognition unit 114, and a photographing unit 115.
Here, the analysis data may include questionnaire information, voice information, and image information.
The chatbot communication unit 111 may be configured to receive questionnaire information from the server module 120 and to provide the analysis data to the server module 120.
Specifically, the chatbot communication unit 111 may be configured to receive from the server module 120 the questions needed to assess the examinee's condition, including a visual analogue scale (VAS) questionnaire, an Oswestry Disability Index (ODI) questionnaire, and other pain-related questionnaires.
The chatbot communication unit 111 may then provide the received questionnaires to the display unit 112 and the speaker unit 113, and may be configured to transmit the information obtained by the voice recognition unit 114 and the photographing unit 115 to the server module 120.
The display unit 112 may be configured to display the questionnaire information received from the chatbot communication unit 111 to the examinee.
Here, the display unit 112 may not only show the questionnaire to the examinee but may also be provided as a touch screen so that the examinee can select answers directly.
The speaker unit 113 may be configured to read the questionnaire information aloud to the examinee.
Specifically, the speaker unit 113 may read the questionnaire displayed on the display unit 112 to the examinee and prompt the examinee to answer. A speaker unit 113 provided in this way can elicit a variety of changes in facial expression and behavior by encouraging the examinee to converse naturally, as in an actual medical interview.
The voice recognition unit 114 may be configured to obtain questionnaire information by recognizing the examinee's answers and to obtain voice information by recording the examinee's voice.
The photographing unit 115 may be configured to obtain image information by recording the examinee's behavior and facial expressions during the questionnaire.
The questionnaire information, voice information, and image information obtained in this way may be transmitted to the server module 120 by the chatbot communication unit 111.
The server module 120 is configured to derive diagnostic data by analyzing the analysis data acquired by the chatbot module 110, and may include a server communication unit 121, a questionnaire analysis unit 122, a voice analysis unit 123, a behavior analysis unit 124, an expression analysis unit 125, a diagnosis unit 126, and a learning unit 127.
The server communication unit 121 may be configured to receive the analysis data from the chatbot module 110.
Specifically, the server communication unit 121 may receive the questionnaire information, voice information, and image information from the chatbot communication unit 111. The server communication unit 121 may provide the questionnaire information to the voice analysis unit 123 or the questionnaire analysis unit 122, and may provide the voice information to the voice analysis unit 123. The server communication unit 121 may also provide the image information to the behavior analysis unit 124 and the expression analysis unit 125.
The questionnaire analysis unit 122 may be configured to analyze the questionnaire information.
The questionnaire analysis unit 122 may be configured to provide the contents of the VAS questionnaire, the ODI questionnaire, and other questionnaires to the chatbot module 110 through the server communication unit 121.
The questionnaire analysis unit 122 may also be configured to receive the examinee's answers to the questionnaire as analyzed by the voice analysis unit 123. Alternatively, when the examinee has selected answers directly on the display unit 112, the questionnaire analysis unit 122 may receive the answers directly from the server communication unit 121 without passing through the voice analysis unit 123.
The questionnaire analysis unit 122 thus provided may be configured to calculate a pain score for each body part according to the visual analogue scale (VAS) score and Oswestry Disability Index (ODI) score answered by the examinee.
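For illustration, the per-body-part scoring just described can be sketched as follows. The body-part names, the equal weighting of the VAS and ODI components, and the score ranges are assumptions made for this sketch; the specification does not prescribe a formula.

```python
# Hypothetical sketch of per-body-part pain scoring from questionnaire
# answers. Assumed: VAS answered 0-10 per body part, ODI given as an
# overall 0-100 disability percentage, equal weighting of the two.

def pain_scores(vas_by_part, odi_percent):
    """Return a 0-100 combined pain score for each body part."""
    scores = {}
    for part, vas in vas_by_part.items():
        vas_component = (vas / 10.0) * 100.0   # rescale VAS to 0-100
        scores[part] = 0.5 * vas_component + 0.5 * odi_percent
    return scores

answers = {"lower back": 8, "posterior thigh": 6, "sole": 4}
print(pain_scores(answers, odi_percent=40.0))
# {'lower back': 60.0, 'posterior thigh': 50.0, 'sole': 40.0}
```

In a real implementation the weighting would be fitted to clinical data rather than fixed at one half each.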
The voice analysis unit 123 may be configured to analyze the voice information.
Specifically, the voice analysis unit 123 may be configured to derive questionnaire information by analyzing the examinee's answers to the questionnaire and then provide it to the questionnaire analysis unit 122.
The voice analysis unit 123 may also be configured to analyze the tone of the examinee's voice from the voice information and to calculate an additional pain weight for each body part on top of the pain score.
For example, the voice analysis unit 123 may be configured to add a pain weight at points where, in a particular answer, the examinee's voice tone rises or falls sharply relative to the overall average.
Alternatively, the voice analysis unit 123 may be configured to assign a higher weight to body parts that are mentioned repeatedly in the examinee's answers.
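A minimal sketch of the two weighting heuristics just described: tone excursions relative to the speaker's own average, and repeated mentions of a body part. The 1.5 standard-deviation threshold and the one-point weight increments are hypothetical, not taken from the specification.

```python
import statistics

# Heuristic 1 (hypothetical threshold): one extra weight point for each
# answer whose mean pitch deviates from the session average by more
# than 1.5 standard deviations.
def tone_weight(answer_pitches, threshold=1.5):
    mean = statistics.mean(answer_pitches)
    sd = statistics.stdev(answer_pitches)
    return sum(1 for p in answer_pitches if abs(p - mean) > threshold * sd)

# Heuristic 2: extra weight proportional to how often a body part is
# named in the transcribed answers.
def mention_weight(transcript_words, body_parts):
    return {part: transcript_words.count(part) for part in body_parts}

print(tone_weight([100, 100, 100, 100, 300]))             # one outlier answer
print(mention_weight(["back", "knee", "back"], ["back", "knee"]))
```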
The behavior analysis unit 124 may be configured to analyze the examinee's behavior from the image information.
Specifically, the behavior analysis unit 124 may be configured to analyze the examinee's behavior from the image information and to calculate an additional pain weight for each body part on top of the pain score.
For example, when the image information shows large changes in posture during a particular answer, such as frequent hand movements, the behavior analysis unit 124 may be configured to add a pain weight for the body part corresponding to that question.
The expression analysis unit 125 may be configured to analyze the examinee's facial expressions from the image information.
Specifically, the expression analysis unit 125 may be configured to analyze the examinee's facial expressions from the image information and to calculate an additional pain weight for each body part on top of the pain score.
For example, when the examinee's facial expression changes during a particular question, such as frowning, the expression analysis unit 125 may be configured to add a pain weight for the body part corresponding to that question.
The diagnosis unit 126 may be configured to diagnose the examinee's condition and generate predicted-condition information by synthesizing the analysis results of the questionnaire analysis unit 122, the voice analysis unit 123, the behavior analysis unit 124, and the expression analysis unit 125.
Specifically, the diagnosis unit 126 may be configured to generate the predicted-condition information by adding, to the pain score for each body part calculated by the questionnaire analysis unit 122, the additional pain weights for each body part calculated from the analysis results of the voice analysis unit 123, the behavior analysis unit 124, and the expression analysis unit 125.
For example, the analysis by the questionnaire analysis unit 122 may yield high pain scores for the buttocks, pelvis, lateral thigh, posterior thigh, lateral calf, lower back, posterior calf, instep, sole, and big toe. In this case, the problem can be regarded as a lumbar disc herniation involving the fourth and fifth lumbar vertebrae and the first sacral vertebra.
Here, assuming that pain weights are added to the pain scores of the lower back, buttocks, pelvis, posterior thigh, posterior calf, and sole according to the analysis results of the voice analysis unit 123, the behavior analysis unit 124, and the expression analysis unit 125, the diagnosis unit 126 can generate predicted-condition information indicating that the most urgent problem lies at the fifth lumbar vertebra and the first sacral vertebra, and that a lumbar disc herniation is present.
In this way, the server module 120 is configured to diagnose the examinee's condition, and in particular to identify the part requiring the most urgent attention, by comprehensively judging not only the questionnaire results but also changes in the examinee's voice, behavior, and facial expressions.
In addition, the diagnosis unit 126 may be configured to recommend possible treatment methods together with the predicted-condition information, according to the degree of pain in each body part.
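As a sketch, the summation performed by the diagnosis unit 126 (questionnaire score plus the extra weights from the voice, behavior, and expression analyses, with the highest totals ranked as most urgent) might look like the following. The scores, weights, and body-part names are invented for illustration.

```python
# Hypothetical sketch of the diagnosis step: sum the questionnaire pain
# score per body part with the extra weights from the voice, behavior,
# and expression analyses, then rank body parts by combined score.

def diagnose(pain_scores, *weight_maps, top_n=3):
    totals = dict(pain_scores)
    for weights in weight_maps:
        for part, extra in weights.items():
            totals[part] = totals.get(part, 0) + extra
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

scores     = {"lower back": 70, "posterior calf": 60, "instep": 40}
voice      = {"lower back": 10}
behavior   = {"posterior calf": 15}
expression = {"lower back": 5}
print(diagnose(scores, voice, behavior, expression))
# [('lower back', 85), ('posterior calf', 75), ('instep', 40)]
```

Mapping the top-ranked parts to a candidate condition (e.g. the L5/S1 example above) would require a separate, clinically validated lookup that this sketch does not attempt.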
The terminal module 130 is configured to display the diagnostic data derived by the server module 120 to the examiner, and may include a terminal communication unit 131, an information display unit 132, and an input unit 133.
The terminal communication unit 131 may be configured to receive the diagnosed predicted-condition information from the server module 120.
Specifically, the terminal communication unit 131 may receive the predicted-condition information, that is, the diagnostic data produced by the server module 120, and provide it to the information display unit 132, and may provide the information entered into the input unit 133 to the server module 120.
The information display unit 132 may be configured to display the predicted-condition information provided to the terminal communication unit 131 to the examiner.
The information display unit 132 may be any electronic device such as a computer, mobile phone, laptop, or monitor.
Because the examiner can anticipate the examinee's condition and most painful regions from the predicted-condition information displayed on the information display unit 132 before beginning the interview, the examinee's condition can be diagnosed and a treatment regimen determined more quickly.
The input unit 133 may be configured to receive, from the examiner, diagnosed-condition information on the actual condition of the examinee. The input unit 133 may also be configured to receive further input, such as the treatment method or the examinee's personality. Here, personality refers to the degree of reaction to pain, and may be entered, for example, as a score indicating whether the examinee tends to exaggerate relative to the expected pain.
The terminal communication unit 131 may be configured to provide the diagnosed-condition information, the examinee's personality information, the treatment method, and the like from the input unit 133 to the server communication unit 121.
The server communication unit 121 may be configured to provide the diagnosed-condition information and the treatment method to the learning unit 127.
The learning unit 127 may be configured to perform machine learning by comparing the diagnosed-condition information entered into the input unit 133 of the terminal module 130 with the predicted-condition information derived by the diagnosis unit 126.
The learning unit 127 may also be configured to perform machine learning by comparing the recommended treatment method with the treatment method decided by the examiner, and to update the diagnosis unit 126 according to the machine learning results so that more accurate diagnoses can be made.
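The specification does not state how the learning unit 127 learns beyond "compare and machine-learn". One deliberately simple stand-in is a feedback rule that grows or shrinks a global scale applied to the extra weights depending on whether the top prediction matched the examiner's diagnosis; the learning rate and clamping bounds below are assumptions.

```python
# Hypothetical feedback step standing in for the learning unit: adjust
# a global scale on the extra pain weights depending on whether the
# predicted body part matched the examiner's actual diagnosis.

def feedback_step(weight_scale, predicted_part, actual_part, lr=0.05):
    if predicted_part == actual_part:
        return min(2.0, weight_scale + lr)    # reinforce the weights
    return max(0.0, weight_scale - lr)        # dampen the weights

scale = 1.0
scale = feedback_step(scale, "lower back", "lower back")  # agreement
scale = feedback_step(scale, "instep", "lower back")      # disagreement
print(round(scale, 2))
```

A production system would instead retrain a statistical model on the accumulated (predicted, actual) pairs; this sketch only illustrates the compare-and-update loop.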
In addition, the server communication unit 121 may provide the examinee's personality information to the diagnosis unit 126, so that the diagnosis unit 126 can adjust the pain weights according to the personality information.
For example, for a person whose gestures or voice tone vary more than the expected pain would warrant, the corresponding pain weights may be lowered.
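The personality adjustment just described could, for example, damp the extra weights by an examiner-entered exaggeration score. The 0-10 scale and the linear damping down to half weight are assumptions made for this sketch.

```python
# Hypothetical personality adjustment: scale down the extra pain
# weights of an examinee the examiner has scored as exaggerating.
# Assumed: exaggeration entered on a 0-10 scale, linear damping
# down to half weight at the maximum score.

def adjust_weights(weights, exaggeration, max_score=10):
    factor = 1.0 - 0.5 * (exaggeration / max_score)
    return {part: w * factor for part, w in weights.items()}

print(adjust_weights({"lower back": 10, "sole": 4}, exaggeration=10))
# {'lower back': 5.0, 'sole': 2.0}
```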
The present invention thus provided diagnoses the examinee's condition, and in particular identifies the part requiring the most urgent attention, by having artificial intelligence comprehensively judge not only the questionnaire results but also changes in the examinee's voice, behavior, and facial expressions.
Furthermore, because the examiner can anticipate the examinee's condition and most painful regions from the calculated predicted-condition information before beginning the interview, the examinee's condition can be diagnosed more accurately and the most appropriate treatment regimen determined within a short outpatient consultation time.
The foregoing description of the present invention is illustrative, and those of ordinary skill in the art to which the present invention pertains will understand that it can easily be modified into other specific forms without changing the technical spirit or essential features of the present invention. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive. For example, each component described as a single unit may be implemented in a distributed manner, and likewise components described as distributed may be implemented in combined form.
The scope of the present invention is defined by the following claims, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.
<Explanation of Reference Numerals>
100: system for analyzing the clinical relevance of subjective pain through voice analysis for a preliminary questionnaire
110: chatbot module
111: chatbot communication unit
112: display unit
113: speaker unit
114: voice recognition unit
115: photographing unit
120: server module
121: server communication unit
122: questionnaire analysis unit
123: voice analysis unit
124: behavior analysis unit
125: expression analysis unit
126: diagnosis unit
127: learning unit
130: terminal module
131: terminal communication unit
132: information display unit
133: input unit

Claims (14)

  1. 피검진자의 병증을 알기 위해 필요한 분석데이터를 획득하도록 마련된 챗봇모듈;A chatbot module provided to acquire analysis data necessary to know the condition of the examinee;
    상기 챗봇모듈에 의해 획득된 상기 분석데이터를 분석하여 진단데이터를 도출하도록 마련된 서버모듈; 및a server module provided to analyze the analysis data acquired by the chatbot module to derive diagnostic data; and
    상기 서버모듈에 의해 도출된 진단데이터를 검진자에게 디스플레이하도록 마련된 단말모듈을 포함하는 것을 특징으로 하는 사전 문진용 음성 분석을 통한 주관적 통증의 임상 연계성 분석 시스템.Clinical linkage analysis system for subjective pain through voice analysis for pre-interview, characterized in that it comprises a terminal module provided to display the diagnostic data derived by the server module to the examinee.
  2. 제 1 항에 있어서,The method of claim 1,
    상기 분석데이터는,The analysis data is
    문진 정보, 음성 정보, 영상 정보를 포함하는 것을 특징으로 하는 사전 문진용 음성 분석을 통한 주관적 통증의 임상 연계성 분석 시스템.Clinical linkage analysis system of subjective pain through voice analysis for pre-interview, characterized in that it includes questionnaire information, audio information, and image information.
  3. 제 1 항에 있어서,The method of claim 1,
    상기 챗봇모듈은,The chatbot module is
    상기 서버모듈로부터 문진 정보를 제공받고, 상기 서버모듈에 상기 분석데이터를 제공하도록 마련된 챗봇통신부;a chatbot communication unit configured to receive questionnaire information from the server module and provide the analysis data to the server module;
    상기 챗봇통신부로부터 제공받은 상기 문진 정보를 피검진자에게 표시하도록 마련된 디스플레이부;a display unit provided to display the questionnaire information provided from the chatbot communication unit to the examinee;
    상기 문진 정보를 피검진자에게 읽어주도록 마련된 스피커부; 및a speaker unit provided to read the questionnaire information to the subject; and
    피검진자의 답변을 인식하여 문진 정보를 획득하고, 피검진자의 음성을 녹음한 음성 정보를 획득하도록 마련된 음성인식부를 포함하는 것을 특징으로 하는 사전 문진용 음성 분석을 통한 주관적 통증의 임상 연계성 분석 시스템.Clinical linkage analysis system of subjective pain through voice analysis for pre-questionnaires, characterized in that it includes a voice recognition unit configured to acquire the questionnaire information by recognizing the answer of the examinee and to obtain the voice information recorded by the examinee's voice.
  4. 제 3 항에 있어서,4. The method of claim 3,
    상기 챗봇모듈은,The chatbot module is
    문진시 피검진자의 행동 및 표정을 녹화하여 영상 정보를 획득하도록 마련된 촬영부를 더 포함하는 것을 특징으로 하는 사전 문진용 음성 분석을 통한 주관적 통증의 임상 연계성 분석 시스템.Clinical linkage analysis system of subjective pain through voice analysis for pre-interview, characterized in that it further comprises a photographing unit provided to obtain image information by recording the behavior and facial expressions of the examinee during the interview.
  5. 제 2 항에 있어서,3. The method of claim 2,
    상기 서버모듈은,The server module,
    상기 챗봇모듈로부터 분석데이터를 제공받도록 마련된 서버통신부;a server communication unit provided to receive analysis data from the chatbot module;
    상기 문진 정보를 분석하도록 마련된 문진분석부;a questionnaire analysis unit provided to analyze the questionnaire information;
    상기 음성 정보를 분석하도록 마련된 음성분석부;a voice analysis unit provided to analyze the voice information;
    상기 영상 정보에서 피검진자의 행동을 분석하도록 마련된 행동분석부;a behavior analysis unit provided to analyze the behavior of the examinee from the image information;
    상기 영상 정보에서 피검진자의 표정을 분석하도록 마련된 표정분석부; 및an expression analysis unit provided to analyze the expression of the subject in the image information; and
    상기 문진분석부, 상기 음성분석부, 상기 행동분석부, 상기 표정분석부의 분석 결과를 종합하여 피검진자의 병증을 진단하여 예상 병증 정보를 생성하도록 마련된 진단부를 포함하는 것을 특징으로 하는 사전 문진용 음성 분석을 통한 주관적 통증의 임상 연계성 분석 시스템.Voice for pre-question interview, characterized in that it comprises a diagnosis unit provided to diagnose the condition of the subject by synthesizing the analysis results of the questionnaire analysis unit, the voice analysis unit, the behavior analysis unit, and the expression analysis unit to generate information about the expected condition. Clinical relevance analysis system of subjective pain through analysis.
  6. The system of claim 5,
    wherein the questionnaire analysis unit is configured to calculate a pain score for each body part according to a visual analogue scale (VAS) score and an Oswestry Disability Index (ODI) score.
  7. The system of claim 6,
    wherein the voice analysis unit is configured to calculate an additional pain weight for each body part, on top of the pain score, by analyzing the examinee's voice tone from the voice information.
  8. The system of claim 6,
    wherein the behavior analysis unit is configured to calculate an additional pain weight for each body part, on top of the pain score, by analyzing the examinee's behavior from the image information.
  9. The system of claim 6,
    wherein the expression analysis unit is configured to calculate an additional pain weight for each body part, on top of the pain score, by analyzing the examinee's facial expressions from the image information.
  10. The system of claim 6,
    wherein the diagnosis unit is configured to generate the predicted condition information by adding the pain weights for each body part, additionally calculated from the analysis results of the voice analysis unit, the behavior analysis unit, and the expression analysis unit, to the pain score for each body part calculated by the questionnaire analysis unit.
  11. The system of claim 1,
    wherein the terminal module comprises:
    a terminal communication unit configured to receive the diagnosed predicted condition information from the server module; and
    an information display unit configured to display the predicted condition information provided to the terminal communication unit to the examiner.
  12. The system of claim 11,
    wherein the terminal module further comprises an input unit configured to receive diagnosed condition information on the examinee's actual condition as diagnosed by the examiner.
  13. The system of claim 12,
    wherein the server module further comprises a learning unit configured to perform machine learning by comparing the diagnosed condition information entered into the input unit with the predicted condition information derived by the diagnosis unit.
  14. The system of claim 12,
    wherein the input unit is further configured to receive personality information of the examinee, and
    the diagnosis unit is configured to adjust the pain weights according to the personality information.
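Claims 6 through 10 and 14 together describe an aggregation scheme: a questionnaire-derived (VAS/ODI) pain score per body part, additional weights from voice-tone, behavior, and facial-expression analysis, a personality-based adjustment, and a final summation by the diagnosis unit. The following is a minimal illustrative sketch of that aggregation only; the patent does not specify data structures, weight scales, or how personality adjusts the weights, so every class name, field name, and numeric value here is an assumption.

```python
# Hypothetical sketch of the scoring described in claims 6-10 and 14.
# Base pain scores per body part come from the questionnaire analysis unit
# (VAS/ODI); the voice, behavior, and expression analysis units each
# contribute an additional per-part weight; a personality factor (claim 14)
# is assumed here to scale those added weights before summation.
from dataclasses import dataclass, field


@dataclass
class PainAssessment:
    base_scores: dict                                   # body part -> VAS/ODI pain score
    voice_weights: dict = field(default_factory=dict)   # from voice-tone analysis
    behavior_weights: dict = field(default_factory=dict)
    expression_weights: dict = field(default_factory=dict)
    personality_factor: float = 1.0                     # claim 14 adjustment (assumed form)

    def combined_scores(self) -> dict:
        """Sum base score and personality-adjusted weights per body part."""
        return {
            part: self.base_scores[part]
            + self.personality_factor
            * (
                self.voice_weights.get(part, 0.0)
                + self.behavior_weights.get(part, 0.0)
                + self.expression_weights.get(part, 0.0)
            )
            for part in self.base_scores
        }


assessment = PainAssessment(
    base_scores={"lower back": 6.0, "neck": 2.0},
    voice_weights={"lower back": 1.5},
    behavior_weights={"lower back": 1.0},
    expression_weights={"neck": 0.5},
    personality_factor=0.8,
)
scores = assessment.combined_scores()
# The highest-scoring body part is taken here as driving the predicted condition.
predicted = max(scores, key=scores.get)
```

In this sketch the claim-13 learning unit would then compare `predicted` against the examiner's actual diagnosis to refine the weights; that feedback loop is omitted here because the patent does not describe its mechanics.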
PCT/KR2021/016726 2021-01-22 2021-11-16 System for analyzing clinical connection of subjective pain through voice analysis for preliminary questionnaire WO2022158689A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210009652A KR102522172B1 (en) 2021-01-22 2021-01-22 Clinical linkage analysis system of subjective pain through voice analysis for pre-interview
KR10-2021-0009652 2021-01-22

Publications (1)

Publication Number Publication Date
WO2022158689A1 true WO2022158689A1 (en) 2022-07-28

Family

ID=82549501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/016726 WO2022158689A1 (en) 2021-01-22 2021-11-16 System for analyzing clinical connection of subjective pain through voice analysis for preliminary questionnaire

Country Status (2)

Country Link
KR (1) KR102522172B1 (en)
WO (1) WO2022158689A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160060807A (en) * 2014-11-20 2016-05-31 권오봉 Patient Management System, Patient Management Server, Patient Mobile and Managing Method therefor
KR20190066689A (en) * 2017-12-06 2019-06-14 고려대학교 산학협력단 Medical service assisted system and method thereof
KR102118585B1 (en) * 2019-12-13 2020-06-03 가천대학교 산학협력단 Smart Mirror Chatbot System and Method for Senior Care
KR20200081520A (en) * 2018-12-14 2020-07-08 신라대학교 산학협력단 Robot system for health care service and method thereof
US10827973B1 (en) * 2015-06-30 2020-11-10 University Of South Florida Machine-based infants pain assessment tool

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102366251B1 (en) 2019-04-22 2022-02-22 주식회사 엠티에스컴퍼니 Method and system for human body medical examination using chatbot inpuiry


Also Published As

Publication number Publication date
KR102522172B1 (en) 2023-04-14
KR20220106574A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
WO2012033244A1 (en) Self-examination method and self-examination device
WO2015194808A1 (en) Apparatus for diagnosing and treating dizziness
WO2022145782A2 (en) Big data and cloud system-based artificial intelligence emergency medical care decision making and emergency patient transporting system and method therefor
WO2017051944A1 (en) Method for increasing reading efficiency by using gaze information of user in medical image reading process and apparatus therefor
CN102959579A (en) Medical information display apparatus, operation method and program
WO2020213826A1 (en) Auxiliary method for diagnosis of lower urinary tract symptoms
WO2020080819A1 (en) Oral health prediction apparatus and method using machine learning algorithm
WO2022225199A1 (en) System for remote health status measurement through camera-based vital signs data extraction and electronic medical examination, and method therefor
Conrath et al. A clinical evaluation of four alternative telemedicine systems
WO2021230417A1 (en) Image standardization device for standardizing images captured by heterogeneous image capturing devices and storing and managing same
WO2018105995A2 (en) Device and method for health information prediction using big data
WO2021215809A1 (en) System and method for providing early diagnosis of cognitive disorder and community care matching service for elderly
Conrath et al. An experimental evaluation of alternative communication systems as used for medical diagnosis
WO2022158689A1 (en) System for analyzing clinical connection of subjective pain through voice analysis for preliminary questionnaire
WO2021201582A1 (en) Method and device for analyzing causes of skin lesion
WO2024090758A1 (en) Apparatus and method for providing customized cardiac rehabilitation content
JP2019197271A (en) Medical information processing system
Rangasamy et al. Role of telemedicine in health care system: a review
WO2023022485A9 (en) Health condition prediction system using asynchronous electrocardiogram
WO2023106516A1 (en) Method and server for dementia test based on questions and answers using artificial intelligence call
WO2017090805A1 (en) Method and device for determining skeletal muscle cross-sectional area calculation model of subject on basis of demographic factor and kinematic factor
WO2019098399A1 (en) Bone mineral density estimation method and apparatus using same
CN112135563A (en) Digital qualitative biomarkers for determining information processing speed
WO2020218754A1 (en) Physical examination method and system using health check query performed by chatbot, and physical examination method and system
Meyer et al. A method to evaluate and improve the usability of a robotic hand orthosis from the caregiver perspective

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21921448

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21921448

Country of ref document: EP

Kind code of ref document: A1