WO2023054768A1 - Deep learning model system for detecting pressure ulcer disease and determining stage of pressure ulcer disease, generation method therefor, and method for diagnosing pressure ulcer by using same - Google Patents

Deep learning model system for detecting pressure ulcer disease and determining stage of pressure ulcer disease, generation method therefor, and method for diagnosing pressure ulcer by using same Download PDF

Info

Publication number
WO2023054768A1
Authority
WO
WIPO (PCT)
Prior art keywords
pressure ulcer
deep learning
stage
dnn
level
Prior art date
Application number
PCT/KR2021/013466
Other languages
French (fr)
Korean (ko)
Inventor
전병흡
김건주
홍성표
Original Assignee
주식회사 피플앤드테크놀러지
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 피플앤드테크놀러지 filed Critical 주식회사 피플앤드테크놀러지
Publication of WO2023054768A1 publication Critical patent/WO2023054768A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B5/445 Evaluating skin irritation or skin trauma, e.g. rash, eczema, wound, bed sore
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B5/447 Skin evaluation, e.g. for skin disorder diagnosis specially adapted for aiding the prevention of ulcer or pressure sore development, i.e. before the ulcer or sore has developed
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4842 Monitoring progression or stage of a disease
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Definitions

  • The present invention relates to a deep learning model system for detecting pressure ulcer disease and determining its stage, a method for generating the system, and a method for diagnosing pressure ulcers using it; more particularly, it relates to a system in which photographs of a patient's suspected pressure ulcer lesions are processed by deep learning neural networks so that the patient can receive medical services appropriate to the stage of pressure ulcer progression.
  • Artificial intelligence (AI) mimics the human brain and its network of neurons, with the aim that computers and robots will one day think and act like humans.
  • Machine learning (ML) is a technique in which large amounts of data are fed to a computer so that it classifies similar items together; for example, when a photograph similar to stored photographs of dogs is input, the computer classifies it as a photograph of a dog.
  • Deep learning (DL), which derives from artificial neural network algorithms, is a technique for clustering or classifying data using artificial neural networks.
  • In machine learning and cognitive science, artificial neural networks are statistical learning algorithms inspired by biological neural networks (the central nervous system of animals).
  • An artificial neural network refers to a model in which artificial neurons, connected into a network by synapses, acquire problem-solving ability by changing the strength of those synaptic connections through learning.
  • The core of deep learning with artificial neural networks is prediction through classification.
  • By discovering patterns in vast amounts of data, a computer partitions the data much as humans classify objects.
  • Such classification takes two forms: supervised learning, in which a teacher signal (the correct answer) is supplied and the model is optimized for the problem, and unsupervised learning, which requires no teacher signal.
  • Like other machine learning methods that learn from data, neural networks are used to solve a wide range of problems, such as computer vision or speech recognition, that are difficult to solve with rule-based programming.
  • Deep learning of this kind serves as an indicator for assessing the state of the human body and is applied in fields as diverse as sports, medicine, healthcare, education, and art.
  • An object of the present invention is to provide an accurate pressure ulcer stage discrimination model that uses a per-stage probabilistic model of the pressure ulcer lesion, and a diagnosis method using the same.
  • The present invention provides a method for generating an image-based pressure ulcer disease discrimination model, comprising the steps of: (a) receiving a photograph of a suspected pressure ulcer patient; (b) detecting a suspected lesion region in the input photograph and then labeling the photograph by stage of pressure ulcer progression; (c) inputting the suspected lesion region of each stage-labeled photograph into a CNN binary discriminator and outputting the predicted probability as a logit value; (d) constructing a DNN that receives the 4-dimensional vector of the logit values and determines the stage of pressure ulcer progression; and (e) training the DNN by repeatedly feeding it the 4-dimensional vectors.
  • In one embodiment, in step (b) the suspected lesion region is detected by a deep learning object detector.
  • In step (c) there are four CNN binary discriminators, which respectively learn four decision boundaries: negative vs. stage 1 or higher, stage 1 or lower vs. stage 2 or higher, stage 2 or lower vs. stage 3 or higher, and stage 3 or lower vs. stage 4.
  • Each CNN binary discriminator is trained with binary cross-entropy on its discrimination result; the DNN outputs a 5-dimensional vector to which a softmax function is applied to determine the pressure ulcer stage, and the depth and width of the DNN hidden layers are tuned to optimal values by repeated training.
  • The present invention also provides a deep learning system for detecting pressure ulcer disease and determining its stage, generated by the image-based pressure ulcer disease discrimination model generation method described above.
  • The present invention also provides a pressure ulcer diagnosis method comprising the steps of inputting a patient photograph into the deep learning system described above; receiving information on the stage of pressure ulcer progression determined from the photograph by the system; and transmitting the received information to a user terminal.
  • According to the present invention, photographs of suspected pressure ulcer sites are processed by the deep learning artificial intelligence algorithm described herein, so that information on the pressure ulcer stage can be determined immediately without expert assistance.
  • The decision boundaries of the individual classifiers are learned independently by applying a probabilistic model that separates the features of each stage, and the vector summarizing these stage-specific decisions is fed into a separate DNN; this ensemble step produces the final output and maximizes discrimination accuracy. The stage of pressure ulcer progression can therefore be assessed without a direct clinical examination and the result transmitted to medical staff so that pressure ulcer treatment can begin promptly.
  • FIG. 1 is a flow diagram of a method for generating a pressure ulcer disease discrimination model according to an embodiment of the present invention.
  • FIG. 2 is a configuration diagram of a deep learning object detector according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating the CNN binary discriminator label values according to an embodiment of the present invention.
  • FIG. 4 is a configuration diagram of the CNN binary discriminator training structure according to an embodiment of the present invention.
  • FIG. 5 is a configuration diagram of the DNN discriminator training structure according to an embodiment of the present invention.
  • FIG. 6 is a flow diagram of a diagnosis service method according to an embodiment of the present invention.
  • When a component is described as "existing inside or connected to and installed on" another component, it may be directly connected to or in contact with that other component, or a third component or means for fixing or connecting it to the other component may be present.
  • To solve the problem described above, the present invention detects the suspected lesion region in a photograph of a suspected pressure ulcer patient taken by a caregiver, medical staff, or the like, labels the photograph by pressure ulcer stage on that basis, and trains a separate DNN for stage determination by feeding it a vector of feature values obtained from discriminators trained independently for each stage.
  • The present invention provides a method for generating a deep learning model for pressure ulcer detection and stage determination, and a deep learning system implementing it.
  • The deep learning system includes a deep learning object detector, CNN binary discriminators, and a DNN.
  • The image-based pressure ulcer disease discrimination model according to the present invention is described in more detail through the following embodiments and drawings.
  • FIG. 2 is a configuration diagram of a deep learning object detector according to an embodiment of the present invention.
  • First, a photograph of a suspected pressure ulcer patient is received as input.
  • The photograph may be taken not only by medical staff but also with the terminal of a family member or caregiver, and transmitted to a server providing the image-based pressure ulcer disease diagnosis service according to the present invention.
  • The suspected lesion region is then detected in the input photograph.
  • A "suspected lesion region" is a skin area whose appearance differs from the surrounding skin and shows the characteristics of a pressure ulcer; the photograph is input to a deep learning object detector trained to detect pressure ulcer lesions, which detects the suspected lesion.
  • The object detector of one embodiment may be an SSD or YOLO model.
  • An embodiment of the present invention separately prepares four deep learning binary discriminators, one per stage boundary, each of which receives a feature-map vector extracted by a CNN and classifies the stage; they are set up to learn the four decision boundaries negative vs. stage 1 or higher, stage 1 or lower vs. stage 2 or higher, stage 2 or lower vs. stage 3 or higher, and stage 3 or lower vs. stage 4. FIG. 3 shows the CNN binary discriminator label values for each stage according to an embodiment of the present invention.
  • Stage-labeled photographs of pressure ulcer lesions are used to train the CNN binary discriminators.
  • The output of each CNN binary discriminator is a real-valued logit, and training uses binary cross-entropy on the discrimination result.
  • FIG. 4 is a configuration diagram of the CNN binary discriminator training structure according to an embodiment of the present invention.
  • A DNN is constructed that receives the 4-dimensional vector formed from the logit values of the four binary discriminators and determines the stage.
  • The output of the DNN is a 5-dimensional vector, to which a softmax function is applied to determine the stage.
  • The depth and width of the DNN hidden layers are determined by trial and error over repeated training runs.
  • The DNN is trained by passing stage-labeled pressure ulcer photographs through the CNN binary discriminators trained as described above, applying the DNN to the resulting logit vector, and using cross-entropy as the loss function.
  • The pressure ulcer discrimination model generated according to the present invention labels photographs by stage of pressure ulcer progression and trains the DNN on them, and can therefore determine the stage of pressure ulcer progression with high accuracy and selectivity.
  • The present invention also provides a diagnosis service method that diagnoses pressure ulcers using the model and transmits the result to medical staff or other users.
  • FIG. 6 is a flow diagram of a diagnosis service method according to an embodiment of the present invention.
  • The diagnosis method may include inputting a patient photograph into the image-based pressure ulcer disease discrimination model; receiving information on the stage of pressure ulcer progression determined from the photograph by the model; and transmitting the received information to a user terminal.
  • Industrial applicability is recognized for the invention as a deep learning model system for pressure ulcer disease detection and stage determination.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Dermatology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a method for generating an image-based pressure ulcer disease discrimination model, the method comprising the steps of: (a) receiving, as an input, a photograph of a suspected pressure ulcer patient; (b) detecting a suspected lesion region in the input photograph, and then labeling the photograph by stage of pressure ulcer progression; (c) inputting the suspected lesion region of the stage-labeled photograph into a CNN binary discriminator, and outputting a predicted probability as a logit value; (d) constructing a DNN that receives, as an input, a 4-dimensional vector consisting of the logit values and determines the stage of pressure ulcer progression; and (e) training the DNN by repeatedly inputting the 4-dimensional vector.

Description

Deep learning model system for detecting pressure ulcer disease and determining its stage, method for generating the same, and method for diagnosing pressure ulcers using the same
The present invention relates to a deep learning model system for detecting pressure ulcer disease and determining its stage, a method for generating the system, and a method for diagnosing pressure ulcers using it. More particularly, it relates to a deep learning model system, a generation method, and a pressure ulcer diagnosis method in which photographs of a patient's suspected pressure ulcer lesions are processed by deep learning neural networks so that the patient can receive medical services appropriate to the stage of pressure ulcer progression.
Artificial intelligence (AI) mimics the human brain and its network of neurons, with the aim that computers and robots will one day think and act like humans.
For example, we can very easily tell dogs and cats apart just by looking at photographs, but a computer cannot.
To address this, the "machine learning (ML)" technique was devised: large amounts of data are fed to a computer, which learns to group similar items, so that when a photograph similar to stored photographs of dogs is input, the computer classifies it as a photograph of a dog.
Depending on how the data are to be classified, many machine learning algorithms have emerged, such as decision trees, Bayesian networks, support vector machines (SVMs), and artificial neural networks.
Among them, deep learning (DL), which derives from artificial neural network algorithms, is a technique for clustering or classifying data using artificial neural networks.
In machine learning and cognitive science, artificial neural networks are statistical learning algorithms inspired by biological neural networks (the central nervous system of animals).
An artificial neural network refers to a model in which artificial neurons, connected into a network by synapses, acquire problem-solving ability by changing the strength of those synaptic connections through learning.
The core of deep learning with artificial neural networks is prediction through classification.
By discovering patterns in vast amounts of data, a computer partitions the data much as humans classify objects.
Such classification takes two forms: supervised learning, in which a teacher signal (the correct answer) is supplied and the model is optimized for the problem, and unsupervised learning, which requires no teacher signal.
An artificial neural network is generally represented as an interconnection of neurons that compute values from inputs, and its adaptability allows it to perform machine learning tasks such as pattern recognition.
Like other machine learning methods that learn from data, neural networks are used to solve a wide range of problems, such as computer vision or speech recognition, that are difficult to solve with rule-based programming.
Deep learning of this kind serves as an indicator for assessing the state of the human body and is applied in fields as diverse as sports, medicine, healthcare, education, and art.
Meanwhile, certain skin diseases such as pressure ulcers present different, discontinuous appearances at each stage of progression, so simply applying existing deep learning CNN (Convolutional Neural Network) and DNN (Deep Neural Network) algorithms, or conventional object-detection classifiers built from them, fails to reach the accuracy required in medical and healthcare settings. There is therefore a need to develop a deep learning model, and a method for training it, that detects and classifies the stage of pressure ulcer disease in the lesions of suspected patients more accurately than before.
An object of the present invention is to provide an accurate pressure ulcer stage discrimination model that uses a per-stage probabilistic model of the pressure ulcer lesion, and a diagnosis method using the same.
To solve the above problem, the present invention provides a method for generating an image-based pressure ulcer disease discrimination model, comprising the steps of: (a) receiving a photograph of a suspected pressure ulcer patient; (b) detecting a suspected lesion region in the input photograph and then labeling the photograph by stage of pressure ulcer progression; (c) inputting the suspected lesion region of each stage-labeled photograph into a CNN binary discriminator and outputting the predicted probability as a logit value; (d) constructing a DNN that receives the 4-dimensional vector of the logit values and determines the stage of pressure ulcer progression; and (e) training the DNN by repeatedly feeding it the 4-dimensional vectors.
In one embodiment of the present invention, in step (b) the suspected lesion region is detected by a deep learning object detector, and in step (c) there are four CNN binary discriminators, which respectively learn four decision boundaries: negative vs. stage 1 or higher, stage 1 or lower vs. stage 2 or higher, stage 2 or lower vs. stage 3 or higher, and stage 3 or lower vs. stage 4.
In one embodiment of the present invention, each CNN binary discriminator is trained with binary cross-entropy on its discrimination result; the DNN outputs a 5-dimensional vector to which a softmax function is applied to determine the pressure ulcer stage, and the depth and width of the DNN hidden layers are tuned to optimal values by repeated training.
The present invention also provides a deep learning system for detecting pressure ulcer disease and determining its stage, generated by the image-based pressure ulcer disease discrimination model generation method described above.
The present invention also provides a pressure ulcer diagnosis method comprising the steps of inputting a patient photograph into the deep learning system described above; receiving information on the stage of pressure ulcer progression determined from the photograph by the system; and transmitting the received information to a user terminal.
According to the present invention, photographs of suspected pressure ulcer sites are processed by the deep learning artificial intelligence algorithm described herein, so that information on the pressure ulcer stage can be determined immediately without expert assistance.
That is, if an existing deep learning object detector or classifier were trained directly on stage-labeled data, clear decision boundaries could not be guaranteed for a skin disease such as pressure ulcers, whose stage-specific characteristics take many forms, and the required accuracy could not be assured; securing that accuracy would require learning and applying several characteristic patterns simultaneously. In the present invention, instead, a probabilistic model that separates the features of each stage is applied so that the decision boundaries of the individual classifiers are learned independently, and the vector summarizing these stage-specific decisions is fed into a separate DNN; this ensemble step produces the final output and maximizes discrimination accuracy. The stage of pressure ulcer progression can therefore be assessed without a direct clinical examination and the result transmitted to medical staff so that pressure ulcer treatment can begin promptly.
FIG. 1 is a flow diagram of a method for generating a pressure ulcer disease discrimination model according to an embodiment of the present invention.
FIG. 2 is a configuration diagram of a deep learning object detector according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating the CNN binary discriminator label values according to an embodiment of the present invention.
FIG. 4 is a configuration diagram of the CNN binary discriminator training structure according to an embodiment of the present invention.
FIG. 5 is a configuration diagram of the DNN discriminator training structure according to an embodiment of the present invention.
FIG. 6 is a flow diagram of a diagnosis service method according to an embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings.
Before describing the present invention in detail, the terms and words used in this specification should not be interpreted as being unconditionally limited to their ordinary or dictionary meanings; the inventor may appropriately define the concepts of various terms in order to describe the invention in the best possible way.
Furthermore, these terms and words should be interpreted with meanings and concepts that accord with the technical idea of the present invention.
That is, the terms used in this specification are used only to describe preferred embodiments of the present invention and are not intended to limit its content.
These terms are defined in consideration of the various possibilities of the present invention.
In this specification, a singular expression may include the plural unless the context clearly indicates otherwise, and likewise an expression in the plural may include the singular.
Throughout this specification, when a component is described as "including" another component, this does not exclude any other component but may mean that further components are included, unless stated otherwise.
Furthermore, when a component is described as "existing inside or connected to and installed on" another component, it may be directly connected to or in contact with that other component, or it may be installed at a distance from it; where it is installed at a distance, a third component or means for fixing or connecting it to the other component may be present, and the description of that third component or means may be omitted.
On the other hand, when a component is described as being "directly connected" or "directly coupled" to another component, it should be understood that no third component or means is present.
Likewise, other expressions describing the relationship between components, such as "between" and "immediately between", or "adjacent to" and "directly adjacent to", should be interpreted in the same way.
In this specification, terms such as "one side", "the other side", "one surface", "the other surface", "first" and "second" are used to distinguish one component clearly from another, and the meaning of a component is not limited by such terms.
Terms relating to position, such as "upper", "lower", "left" and "right", when used, indicate the relative position of a component in the relevant drawing and should not be understood as referring to an absolute position unless an absolute position is specified.
In the specification of the present invention, terms such as "unit", "device" and "module", when used, mean a unit capable of processing one or more functions or operations, which may be implemented in hardware, in software, or in a combination of hardware and software.
In the drawings accompanying this specification, the size, position and coupling relationships of the components constituting the present invention may be partly exaggerated, reduced or omitted in order to convey the spirit of the invention sufficiently clearly or for convenience of description, and the proportions and scale may therefore not be exact.
In the following description of the present invention, detailed descriptions of configurations judged liable to obscure the gist of the invention unnecessarily, for example known technologies including the prior art, may be omitted.
To solve the problem described above, the present invention detects the suspected lesion region in a photograph of a suspected pressure ulcer patient taken by a caregiver, medical staff, or the like, labels the photograph by stage on that basis, and trains a separate DNN by feeding it vectors of feature values obtained from discriminators trained independently for each stage. In skin diseases such as pressure ulcers, whose stage-specific characteristics take many forms, training a single classifier does not guarantee clear decision boundaries or the required accuracy; this conventional problem is solved here by training per-stage CNN binary discriminators together with a DNN. Accordingly, the present invention provides a method for generating a deep learning model for pressure ulcer detection and stage determination, and a deep learning system implementing it. The deep learning system according to the present invention includes a deep learning object detector, CNN binary discriminators, and a DNN; the image-based pressure ulcer disease discrimination model according to the present invention is described in more detail through the following embodiments and drawings.
FIG. 2 is a configuration diagram of a deep learning object detector according to an embodiment of the present invention.
Referring to FIG. 2, first, a photograph of a suspected pressure ulcer patient is received as input. The photograph may be taken not only by medical staff but also with the terminal of a family member or caregiver, and transmitted to a server providing the image-based pressure ulcer disease diagnosis service according to the present invention.
The suspected lesion region is then detected in the input photograph. A "suspected lesion region" is a skin area whose appearance differs from the surrounding skin and shows the characteristics of a pressure ulcer; in one embodiment of the present invention, the photograph is input to a deep learning object detector trained to detect pressure ulcer lesions, which detects the suspected lesion. The object detector of one embodiment may be an SSD or YOLO model.
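The disclosure contains no source code; purely as an illustration, the detection-and-crop step might be sketched in Python/PyTorch as below. The use of torchvision's SSD model, the single-class setup, the score threshold, and all function names are assumptions, not part of the original description.

```python
# Minimal sketch of the lesion-detection step (illustrative only; the patent does not
# specify an implementation). Assumes an SSD detector fine-tuned beforehand on
# pressure-ulcer photographs with a single "ulcer" foreground class.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

def crop_suspected_region(image_path: str, detector, score_threshold: float = 0.5):
    """Detect the suspected lesion and return the cropped region as a PIL image."""
    image = Image.open(image_path).convert("RGB")
    detector.eval()
    with torch.no_grad():
        prediction = detector([to_tensor(image)])[0]   # dict with boxes, labels, scores
    keep = prediction["scores"] >= score_threshold
    if not keep.any():
        return None                                    # no suspected lesion found
    best = prediction["scores"][keep].argmax()         # highest-scoring box
    x1, y1, x2, y2 = prediction["boxes"][keep][best].round().int().tolist()
    return image.crop((x1, y1, x2, y2))

# Example wiring (weights untrained here; a fine-tuned checkpoint would be loaded in practice):
detector = torchvision.models.detection.ssd300_vgg16(weights=None, num_classes=2)
```

A YOLO-family detector could be substituted without changing the rest of the pipeline, since only the cropped lesion region is passed downstream.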
Next, the suspected lesion region of each stage-labeled photograph is input to the CNN binary discriminators, and the logit value of the predicted probability is output. To this end, one embodiment of the present invention separately prepares four deep learning binary discriminators, one per stage boundary, each of which receives a feature-map vector extracted by a CNN and classifies the stage; they are set up to learn the four decision boundaries negative vs. stage 1 or higher, stage 1 or lower vs. stage 2 or higher, stage 2 or lower vs. stage 3 or higher, and stage 3 or lower vs. stage 4. FIG. 3 shows the CNN binary discriminator label values for each stage according to an embodiment of the present invention.
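The four decision boundaries amount to an ordinal encoding of the stage label. The following sketch shows one plausible encoding consistent with the text; the exact label table of FIG. 3 is not reproduced here, so this mapping is an assumption.

```python
# Illustrative encoding of the four per-boundary labels described above.
def boundary_labels(stage: int) -> list[int]:
    """Map an overall stage label (0 = negative, 1..4 = pressure ulcer stage)
    to the four binary targets, one per CNN binary discriminator.
    Discriminator k answers: 'is the lesion at stage k or higher?'"""
    assert 0 <= stage <= 4
    return [1 if stage >= k else 0 for k in (1, 2, 3, 4)]

# Examples:
#   boundary_labels(0) -> [0, 0, 0, 0]   (negative)
#   boundary_labels(2) -> [1, 1, 0, 0]   (stage 2)
#   boundary_labels(4) -> [1, 1, 1, 1]   (stage 4)
```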
In one embodiment of the present invention, stage-labeled photographs of pressure ulcer lesions are used to train the CNN binary discriminators. The output of each CNN binary discriminator is a real-valued logit, and training uses binary cross-entropy on the discrimination result. FIG. 4 is a configuration diagram of the CNN binary discriminator training structure according to an embodiment of the present invention.
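As a non-authoritative illustration of this training step, a minimal CNN binary discriminator with a single logit output and a binary cross-entropy loss could look like the following; the backbone, layer sizes, and hyperparameters are placeholders, since the description does not specify them.

```python
# Minimal sketch of one CNN binary discriminator trained with binary cross-entropy
# on its logit output (backbone and hyperparameters are assumptions).
import torch
import torch.nn as nn

class BinaryDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # CNN feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                # single real-valued logit

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def train_discriminator(model, loader, epochs=10, lr=1e-3):
    """loader yields (image batch, binary target batch) for one decision boundary."""
    criterion = nn.BCEWithLogitsLoss()              # binary cross-entropy on the logit
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets.float())
            loss.backward()
            optimizer.step()
    return model
```

Four such discriminators, one per decision boundary, would each be trained on targets produced by the boundary encoding sketched earlier.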
A DNN is then constructed that receives the 4-dimensional vector formed from the logit values of the four binary discriminators and determines the stage. The output of the DNN is a 5-dimensional vector, to which a softmax function is applied to determine the stage. The depth and width of the DNN hidden layers are determined by trial and error over repeated training runs.
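A minimal sketch of such a DNN head follows; the hidden-layer depth and width are placeholders, since the description states only that they are tuned by repeated trials.

```python
# Minimal sketch of the stage-determination DNN: 4-dimensional logit vector in,
# 5-dimensional vector out (negative plus stages 1-4). Hidden sizes are placeholders.
import torch
import torch.nn as nn

class StageDNN(nn.Module):
    def __init__(self, hidden_sizes=(32, 32)):
        super().__init__()
        layers, in_dim = [], 4                      # 4 logits from the binary discriminators
        for h in hidden_sizes:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, 5))         # 5 classes: negative, stage 1..4
        self.net = nn.Sequential(*layers)

    def forward(self, logit_vector):                # shape (N, 4)
        return self.net(logit_vector)               # raw 5-dimensional scores

    def predict_stage(self, logit_vector):
        probs = torch.softmax(self.forward(logit_vector), dim=-1)
        return probs.argmax(dim=-1)                 # 0 = negative, 1..4 = stage
```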
In one embodiment of the present invention, the DNN is trained by passing stage-labeled pressure ulcer photographs through the CNN binary discriminators trained as described above, applying the DNN to the resulting logit vector, and using cross-entropy as the loss function.
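This second training phase could be sketched as follows, reusing the hypothetical BinaryDiscriminator and StageDNN classes above; freezing the discriminators during DNN training is an assumption consistent with the description, which trains the DNN on the already-trained discriminators' logit vectors with a cross-entropy loss.

```python
# Sketch of the second training phase: the four trained binary discriminators are
# kept fixed, their logits are stacked into a 4-dimensional vector, and the DNN is
# trained on stage labels with a cross-entropy loss (all names are illustrative).
import torch
import torch.nn as nn

def train_stage_dnn(discriminators, stage_dnn, loader, epochs=10, lr=1e-3):
    """discriminators: list of the four trained BinaryDiscriminator models.
    loader yields (lesion image batch, stage label batch with values 0..4)."""
    for d in discriminators:
        d.eval()                                    # keep the CNN discriminators fixed
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(stage_dnn.parameters(), lr=lr)
    for _ in range(epochs):
        for images, stages in loader:
            with torch.no_grad():                   # 4-dimensional logit vector per image
                logits = torch.stack([d(images) for d in discriminators], dim=1)
            optimizer.zero_grad()
            loss = criterion(stage_dnn(logits), stages)
            loss.backward()
            optimizer.step()
    return stage_dnn
```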
The pressure ulcer discrimination model generated according to the present invention labels photographs by stage of pressure ulcer progression and trains the DNN on them, and can therefore determine the stage of pressure ulcer progression with high accuracy and selectivity.
The present invention also provides a diagnosis service method that diagnoses pressure ulcers using the model and transmits the result to medical staff or other users.
FIG. 6 is a flow diagram of a diagnosis service method according to an embodiment of the present invention.
Referring to FIG. 6, the diagnosis method comprises the steps of inputting a patient photograph into the image-based pressure ulcer disease discrimination model; receiving information on the stage of pressure ulcer progression determined from the photograph by the model; and transmitting the received information to a user terminal. This relieves the burden that errors in decision-support systems place on nursing, caregiving, and medical personnel in health and medical care: a nurse or caregiver of a suspected pressure ulcer patient can photograph the lesion, and the designed deep learning artificial intelligence algorithm processes it so that information on the pressure ulcer stage is determined immediately, as an assistive system, without expert help.
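Tying the sketches above together, the service flow might look like the following; the transport used to reach the user terminal is not specified in the description, so send_to_terminal is a hypothetical callback, and the helper names are the illustrative ones introduced earlier.

```python
# Sketch of the diagnosis service flow: photograph in, detected lesion passed through
# the discriminators and DNN, stage out, result forwarded to a user terminal.
import torch
from torchvision.transforms.functional import to_tensor

def diagnose_and_notify(image_path, detector, discriminators, stage_dnn, send_to_terminal):
    lesion = crop_suspected_region(image_path, detector)        # detect the lesion region
    if lesion is None:
        send_to_terminal({"result": "no suspected pressure ulcer lesion detected"})
        return
    batch = to_tensor(lesion).unsqueeze(0)                      # (1, 3, H, W)
    with torch.no_grad():
        logits = torch.stack([d(batch) for d in discriminators], dim=1)
        stage = int(stage_dnn.predict_stage(logits).item())     # 0 = negative, 1..4 = stage
    send_to_terminal({"result": "pressure ulcer stage", "stage": stage})

# Example: send_to_terminal could simply print, or post to a clinician-facing endpoint.
# diagnose_and_notify("photo.jpg", detector, discriminators, stage_dnn, print)
```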
Although various preferred embodiments of the present invention have been described above with some examples, the descriptions of the various embodiments given in the "Detailed Description of the Invention" are merely illustrative, and those of ordinary skill in the art will understand from the above that the present invention can be carried out with various modifications or in equivalent forms.
In addition, since the present invention can be embodied in various other forms, it is not limited by the above description; the description is provided to make the disclosure complete and to fully inform those of ordinary skill in the art of the scope of the invention, which is defined only by each claim of the appended claims.
The invention has industrial applicability as a deep learning model system for pressure ulcer disease detection and stage determination.

Claims (7)

  1. A method for generating an image-based pressure ulcer disease discrimination model, comprising the steps of:
    (a) receiving a photograph of a suspected pressure ulcer patient;
    (b) detecting a suspected lesion region in the input photograph, and then labeling the photograph by stage of pressure ulcer progression;
    (c) inputting the suspected lesion region of the stage-labeled photograph into a CNN binary discriminator and outputting a predicted probability as a logit value;
    (d) constructing a DNN that receives a 4-dimensional vector consisting of the logit values and determines the stage of pressure ulcer progression; and
    (e) training the DNN by repeatedly inputting the 4-dimensional vector.
  2. The method for generating a deep learning model for pressure ulcer detection and stage determination according to claim 1, wherein, in step (b), the suspected lesion region is detected by a deep learning object detector.
  3. The method for generating a deep learning model for pressure ulcer detection and stage determination according to claim 1, wherein, in step (c), there are four CNN binary discriminators, which respectively learn four decision boundaries: negative vs. stage 1 or higher, stage 1 or lower vs. stage 2 or higher, stage 2 or lower vs. stage 3 or higher, and stage 3 or lower vs. stage 4.
  4. The method for generating a deep learning model for pressure ulcer detection and stage determination according to claim 1, wherein the CNN binary discriminator is trained using binary cross-entropy on its discrimination result for the stage.
  5. The method for generating a deep learning model for pressure ulcer detection and stage determination according to claim 1, wherein the DNN outputs a 5-dimensional vector to which a softmax function is applied to determine the pressure ulcer stage, and the depth and width of the DNN hidden layers are tuned to optimal values by repeated training.
  6. A deep learning system for detecting pressure ulcer disease and determining its stage, generated by the method for generating an image-based pressure ulcer disease discrimination model according to any one of claims 1 to 5.
  7. A pressure ulcer diagnosis method, comprising the steps of:
    (a) inputting a patient photograph into the deep learning system according to claim 6;
    (b) receiving information on the stage of pressure ulcer progression determined from the photograph by the deep learning system; and
    (c) transmitting the received information to a user terminal.
PCT/KR2021/013466 2021-09-30 2021-09-30 Deep learning model system for detecting pressure ulcer disease and determining stage of pressure ulcer disease, generation method therefor, and method for diagnosing pressure ulcer by using same WO2023054768A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0130059 2021-09-30
KR1020210130059A KR20230046720A (en) 2021-09-30 2021-09-30 A system, method for generating deep-learning model for detecting and determining decubitus ulcer stage, and its application to diagnosis of decubitus ulcer

Publications (1)

Publication Number Publication Date
WO2023054768A1 (en)

Family

ID=85782978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/013466 WO2023054768A1 (en) 2021-09-30 2021-09-30 Deep learning model system for detecting pressure ulcer disease and determining stage of pressure ulcer disease, generation method therefor, and method for diagnosing pressure ulcer by using same

Country Status (2)

Country Link
KR (2) KR20230046720A (en)
WO (1) WO2023054768A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789977A (en) * 2023-11-30 2024-03-29 华中科技大学同济医学院附属同济医院 Novel intelligent early warning and prevention integrated method and system for pressure sores

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020095600A (en) * 2018-12-14 2020-06-18 キヤノン株式会社 Processing system, processing device, terminal device, processing method, and program
KR20210101285A (en) * 2018-12-14 2021-08-18 스펙트랄 엠디, 인크. Machine Learning Systems and Methods for Assessment, Healing Prediction and Treatment of Wounds
KR102304370B1 (en) * 2020-09-18 2021-09-24 동국대학교 산학협력단 Apparatus and method of analyzing status and change of wound area based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101604916B1 (en) 2014-08-06 2016-03-25 엘에스산전 주식회사 Structure for insulating input channels, temperature control apparatus comprising the structure, and method for controlling temperature
KR102127272B1 (en) 2017-12-22 2020-06-29 주식회사 웨어밸리 Automation of sql tuning method and system using statistic sql pattern analysis
CN110060296A (en) 2018-01-18 2019-07-26 北京三星通信技术研究有限公司 Estimate method, electronic equipment and the method and apparatus for showing virtual objects of posture

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020095600A (en) * 2018-12-14 2020-06-18 キヤノン株式会社 Processing system, processing device, terminal device, processing method, and program
KR20210101285A (en) * 2018-12-14 2021-08-18 스펙트랄 엠디, 인크. Machine Learning Systems and Methods for Assessment, Healing Prediction and Treatment of Wounds
KR102304370B1 (en) * 2020-09-18 2021-09-24 동국대학교 산학협력단 Apparatus and method of analyzing status and change of wound area based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CICCERI GIOVANNI, DE VITA FABRIZIO, BRUNEO DARIO, MERLINO GIOVANNI, PULIAFITO ANTONIO: "A deep learning approach for pressure ulcer prevention using wearable computing", HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, vol. 10, no. 1, 1 December 2020 (2020-12-01), XP093053314, DOI: 10.1186/s13673-020-0211-8 *
KIM MYOUNG SOO, RYU JUNG MI: "Development and Utilization of a Clinical Decision Support System Contents for Pressure Ulcer Prevention Care", JOURNAL OF HEALTH INFORMATICS AND STATISTICS, vol. 45, no. 4, 30 November 2020 (2020-11-30), pages 365 - 372, XP093053313, ISSN: 2465-8014, DOI: 10.21032/jhis.2020.45.4.365 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789977A (en) * 2023-11-30 2024-03-29 华中科技大学同济医学院附属同济医院 Novel intelligent early warning and prevention integrated method and system for pressure sores

Also Published As

Publication number Publication date
KR20240032002A (en) 2024-03-08
KR20230046720A (en) 2023-04-06

Similar Documents

Publication Publication Date Title
WO2020207377A1 (en) Method, device, and system for image recognition model training and image recognition
Dawud et al. Application of deep learning in neuroradiology: brain haemorrhage classification using transfer learning
US20200211706A1 (en) Intelligent traditional chinese medicine diagnosis method, system and traditional chinese medicine system
WO2020050635A1 (en) Method and system for automatically segmenting blood vessels in medical image by using machine learning and image processing algorithm
WO2021045507A2 (en) Method and apparatus for predicting region-specific cerebral cortical contraction rate on basis of ct image
CN110390674A (en) Image processing method, device, storage medium, equipment and system
WO2019031794A1 (en) Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same
KR20240032002A (en) A system, method for generating deep-learning model for detecting and determining decubitus ulcer stage, and its application to diagnosis of decubitus ulcer
CN110427994A (en) Digestive endoscope image processing method, device, storage medium, equipment and system
Rongjun et al. Collaborative extreme learning machine with a confidence interval for P2P learning in healthcare
CN110489577A (en) Medical imaging management method and device, ophthalmoscopic image processing method, electronic equipment
Kumar et al. Malaria detection using deep convolution neural network
WO2023182702A1 (en) Artificial intelligence diagnosis data processing device and method for digital pathology images
WO2023121051A1 (en) Patient information provision method, patient information provision apparatus, and computer-readable recording medium
WO2022158843A1 (en) Method for refining tissue specimen image, and computing system performing same
WO2021137395A1 (en) Problematic behavior classification system and method based on deep neural network algorithm
WO2020209614A1 (en) Method and device for analysis of ultrasound image in first trimester of pregnancy
WO2024123057A1 (en) Method and analysis device for visualizing bone tumor in humerus using chest x-ray image
Sayeed et al. Detecting Malaria from Segmented Cell Images of Thin Blood Smear Dataset using Keras from Tensorflow
WO2023277644A1 (en) Method for providing information required for oral disease diagnosis, and apparatus for performing same
WO2023128040A1 (en) Healthcare system
WO2022019356A1 (en) Method for annotating pathogenic site of disease by means of semi-supervised learning, and diagnosis system for performing same
Jadhav et al. Detection of breast cancer by using various machine learning and deep learning algorithms
WO2024135979A1 (en) Method and device for providing health care to workers
WO2024025350A1 (en) Method, program, and device for updating artificial intelligence model for electrocardiogram reading

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21959539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE