WO2020209414A1 - Face recognition-based smart dispenser using image sensor - Google Patents

Face recognition-based smart dispenser using image sensor Download PDF

Info

Publication number
WO2020209414A1
WO2020209414A1 (PCT/KR2019/004330)
Authority
WO
WIPO (PCT)
Prior art keywords
taker
image sensor
image
dispenser
unit
Prior art date
Application number
PCT/KR2019/004330
Other languages
French (fr)
Korean (ko)
Inventor
천재두
이민정
Original Assignee
주식회사 에버정보기술
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에버정보기술
Publication of WO2020209414A1 publication Critical patent/WO2020209414A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61J CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
    • A61J7/00 Devices for administering medicines orally, e.g. spoons; Pill counting devices; Arrangements for time indication or reminder for taking medicine
    • A61J7/04 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers
    • A61J7/0409 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers with timers
    • A61J7/0427 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers with timers with direct interaction with a dispensing or delivery system
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61J CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
    • A61J7/00 Devices for administering medicines orally, e.g. spoons; Pill counting devices; Arrangements for time indication or reminder for taking medicine
    • A61J7/04 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61J CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
    • A61J7/00 Devices for administering medicines orally, e.g. spoons; Pill counting devices; Arrangements for time indication or reminder for taking medicine
    • A61J7/04 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers
    • A61J7/0409 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers with timers
    • A61J7/0481 Arrangements for time indication or reminder for taking medicine, e.g. programmed dispensers with timers working on a schedule basis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/10 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients
    • G16H20/13 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients delivered from dispensers
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61J CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
    • A61J2200/00 General characteristics or adaptations
    • A61J2200/30 Compliance analysis for taking medication
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61J CONTAINERS SPECIALLY ADAPTED FOR MEDICAL OR PHARMACEUTICAL PURPOSES; DEVICES OR METHODS SPECIALLY ADAPTED FOR BRINGING PHARMACEUTICAL PRODUCTS INTO PARTICULAR PHYSICAL OR ADMINISTERING FORMS; DEVICES FOR ADMINISTERING FOOD OR MEDICINES ORALLY; BABY COMFORTERS; DEVICES FOR RECEIVING SPITTLE
    • A61J2200/00 General characteristics or adaptations
    • A61J2200/70 Device provided with specific sensor or indicating means

Definitions

  • The present invention relates to a smart dispenser, and more particularly to a face recognition-based smart dispenser using an image sensor, capable of authenticating a taker with the image sensor and executing the taker's dosing commands.
  • Drugs must be taken in a fixed dose at a fixed time and in a timely manner in order to achieve optimal effects.
  • However, elderly patients often have multiple complications, so they take many kinds of drugs and take them frequently, and the correct drug must be taken at the designated time.
  • An object of the present invention is to provide a face recognition-based smart dispenser using an image sensor that sounds a notification when the dose time arrives, recognizes biometric information through the image sensor connected to the device and opens the dispenser, lets the taker take the medicine, and at the same time recognizes the taker's taking motion through the image sensor so that whether the medicine was taken is confirmed and recorded automatically.
  • A face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention includes an image sensor that periodically photographs and acquires images of the taker, performs taker authentication through the image sensor, stores the taker's dosing schedule in advance for the authenticated taker, and executes dosing commands according to the stored schedule, wherein the taker is authenticated using the taker's image.
  • The dispenser includes: a control unit that receives images acquired by periodically photographing the taker through the image sensor and performs recognition of the taker and authentication of that taker; a display unit that visually provides various notifications to the taker; a key input unit through which the taker can directly issue a dose command by key input or perform key inputs such as screen display and power on/off; an output unit provided to output voice guidance to the outside; and a situation determination unit that recognizes the taker's facial expression features in at least one image of the taker, can store a plurality of facial expression recognition results, detects features related to temporal changes in the taker's expression, and can estimate the taker's mental health state from the detected features.
  • The dispenser further includes a prediction unit that analyzes the images photographed through the image sensor, predicts the taker's state based on an artificial intelligence learning algorithm, and provides the prediction result to the control unit so that, under the control of the control unit, whether to take the medicine is displayed through the display unit or voice guidance about taking is provided through the output unit according to the prediction result.
  • The control unit stores and manages at least one of the dose, the type of drug taken, the dose time, the taker's state, and the taking history in order to execute the taker's dosing commands.
  • The taker is authenticated by extracting the taker's feature points from the facial image acquired by the image sensor, based on pixel information in that image.
  • The situation determination unit uses a skin care algorithm so that the taker's health can be checked according to skin changes.
  • The taker may also be authenticated by the situation determination unit, which programs a health management algorithm and checks the heart rate in order to confirm the taker's state.
  • The face recognition-based smart dispenser using an image sensor of the present invention dispenses medicine only to a taker authenticated through image sensor authentication, which strengthens security and prevents in advance medical accidents in which someone else might take the medicine.
  • In addition, non-contact biometric information is acquired to reduce the subject's reluctance and discomfort about using the device and to improve ease of use, and by applying AI (deep-learning) processing and management to big data built from the subject's unique biometric information, additional value-added services (health, skin, medication management, etc.) can be provided.
  • FIG. 1 is a front view of a face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a detailed configuration of a smart dispenser based on facial recognition using the image sensor of FIG. 1.
  • The dispenser 100 of the present invention basically uses the image sensor 170 to sound a notification when the dose time arrives, recognizes biometric information through the image sensor 170 connected to the device, opens the drug inlet 101 that is locked with a locking device, and lets the taker take the medicine; at the same time it recognizes the taker's taking motion through the image sensor 170 so that whether the medicine was taken is automatically confirmed and recorded.
  • Referring to FIG. 1, the dispenser 100 of the present invention includes, on its front face, a display unit 120 that displays various guides, a key input unit 140 that receives external inputs, and an image sensor 170 that photographs the taker's appearance for use in biometric authentication.
  • As shown in FIG. 2, the dispenser 100 of the present invention includes a control unit 110, a display unit 120, a prediction unit 130, a key input unit 140, an output unit 150, a biometric authentication unit 160, an image sensor 170, and a situation determination unit 180.
  • The image sensor 170 is located on the front of the dispenser 100, which uses image sensor 170 authentication; when the taker is within the recognition distance it can periodically photograph and acquire images, and the acquired images are transmitted to the control unit 110. The recognition distance is preferably within 1 m.
  • The image sensor 170 may also be used as a combination of several sensors: an image sensor that photographs the front of the taker, such as the whole face, together with auxiliary image sensors that can photograph the taker from other angles (for example, the position of the taker's hand or the side of the taker's face). This compensates for the limits of situation recognition that can arise with a single image sensor 170 (for example, a backlit scene or an error such as a hardware failure of the image sensor itself).
  • Furthermore, a camera capable of substituting for the image sensor 170 may be provided to acquire a plurality of images and transmit them to the control unit 110 for use in taker authentication.
  • The control unit 110 may store the medication schedule of the taker authenticated and recognized through the image sensor 170, and execute dosing commands according to the stored schedule.
  • At this time, the control unit 110 can obtain the taker's personal ID information (for example, a resident registration number) and, through that information, additional information about the taker's disease status from a central control server (not shown) that can provide it. That is, once the taker's authentication is complete, the taker's additional information (personal ID information, disease status) can be used.
  • the control unit 110 may use a feature point extraction method to perform recognition of a taker and authentication of a corresponding taker.
  • For example, the taker's feature points may be extracted from the facial image acquired by the image sensor 170, based on pixel information in that image. These may be feature points constituting the face, for example the center of an eyebrow, both corners of an eye, the corners of the mouth, and the center of the lips.
  • In this case, the facial outline and the contours of the eyebrows, eyes, nose, mouth, and ears may be extracted using a boundary-line extraction method, and feature points may be extracted by checking the pixel value of each pixel within each contour.
  • A complexion region is one of the distinguishable regions into which the face is divided and may be a region constituting the face; at least one complexion region may be extracted using the detected feature points.
  • The control unit 110 may have built-in memory to store the dose, the type of drug taken, the dose time, the taker's state, the taking history, and the like, and for this purpose may be a programmable microcontroller (MCU).
  • The control unit 110 also receives from the prediction unit 130 a prediction of the taker's state based on an artificial intelligence learning algorithm, and can control the dispenser so that whether to take the medicine is displayed through the display unit 120 or voice guidance about taking is provided through the output unit 150.
  • the display unit 120 serves to visually provide various notifications to the user, and may display, for example, a dose, a type of drug to be taken, a dosage time, a state of the user, and a dosage history.
  • The prediction unit 130 analyzes the images captured through the image sensor 170, predicts the taker's state based on an artificial intelligence learning algorithm, and causes the prediction result to be displayed through the display unit 120 or announced as voice guidance through the output unit 150.
  • The artificial intelligence learning algorithm may be, for example, an artificial neural network, a radial basis function (RBF) neural network, or a support vector machine algorithm.
  • In particular, it is preferable to use a support vector machine algorithm, which extracts and classifies facial feature points into patterns and can therefore estimate well the cases that deviate from a pattern.
  • The prediction unit 130 analyzes the taker's images to predict the taker's state; most of the taker's images are used to recognize the taker's face and hands.
  • More specifically, the prediction unit 130 extracts feature points of the face and hands (for example, eyes, nose, and mouth) from the captured images of the taker's face and hands and uses them to predict the taker's state.
  • However, when the taker wears an additional facial artifact, it becomes difficult for the prediction unit 130 to accurately extract feature points from the image captured through the image sensor 170; such artifacts may be, for example, glasses, sunglasses, hats, masks, or artificial beards worn by the taker.
  • When the taker's face cannot be recognized because of such an artifact, a voice guide such as "Please remove the artificial structure from your face." or "Recognition is currently not possible due to an artificial structure; please check." may be output or displayed on the display unit 120.
  • Furthermore, the prediction unit 130 may include the above-mentioned artifacts (for example, sunglasses and masks) as factors when analyzing and learning from the images captured through the image sensor 170, so that it learns about them. Through this learning, the prediction unit 130 can better estimate that the taker is wearing an artifact and more clearly request its removal through the display unit 120 or the output unit 150.
  • Various voice guidance or visual indications related to taking are thus given according to the taker's state, and the prediction results and the taker's taking history are stored in a separate database, so that the system turns them into big data and learns and evolves on its own.
  • the situation determination unit 180 may recognize the facial expression characteristics of the taker from each of the plurality of image sensor 170 images captured and acquired in time series, and store the plurality of facial expression recognition results as time series data. Features related to temporal changes in facial expressions may be detected, and the mental health status of the user may be estimated based on the detected features.
  • For example, if a smiling face is continuously recognized, the medication time may be delayed; if a depressed face is continuously recognized, the medication time may be brought forward so that the medicine is provided to the taker earlier. Expression features can also be used to judge the taker's health status.
  • The situation determination unit 180 additionally recognizes, in time series, not only the taker's facial expression features but also the hand that actually brings the medicine to the taker's mouth, and can better confirm whether the taker has taken the medicine based on the relationship between feature points of the taker's face (for example, the position of the mouth) and feature points of the hand. That is, by additionally recognizing changes in the facial expression recognition results and changes between the feature points of the face and the hand, the taker's health status can be estimated and the accuracy of estimating or predicting whether the taker takes the medicine can be increased.
  • The situation determination unit 180 may also use a skin care algorithm to predict what health problems (breathing status, activity level, heart rate status) may arise from skin changes (pores, wrinkles, sebum, etc.) and thereby check the taker's state.
  • For example, in the case of stomach cancer, rapid changes such as skin temperature or flushing occur; the images provided through the image sensor 170 can be analyzed and checked for such changes, which makes it possible to prevent medical accidents in advance.
  • The situation determination unit 180 can also check the heart rate by programming a health management algorithm to confirm the taker's state; for example, it may identify a set of emotions such as anger, contempt, disgust, fear, happiness, neutrality, sadness, and surprise in the facial image obtained from the image sensor 170.
  • The situation determination unit 180 analyzes the taker's time-series data and images obtained through the image sensor 170 to estimate changes in blood flow in the taker's face, which also makes it possible to estimate the heart rate. A method such as the deep learning described above may additionally be used for this estimation. Furthermore, the taker can be authenticated by checking the heart rate with such a health management algorithm; for example, taker authentication may be performed according to whether the measured heart rate matches the previously registered heart rate of that taker.
  • The key input unit 140 allows the taker to directly issue a medication command by key input, or to perform key inputs such as screen display and power on/off.
  • the output unit 150 is provided in the form of a speaker so as to output voice guidance to the outside.
  • the voice guidance may be, for example, a dosage instruction guide, a dosage time warning alarm, an authentication status alarm, and a dosage voice guidance.
  • The biometric authentication unit 160 is an authentication means separate from image authentication by the image sensor 170 and can serve as an alternative when image sensor 170 authentication fails repeatedly or is unavailable (for example, in dark places); for example, biometric authentication using a fingerprint, iris, voice, or heart rate may be adopted. For biometric authentication, it is desirable to receive and store the taker's fingerprint, iris, voice, and heart rate in advance as biometric authentication information.
  • In addition to the image sensor 170 and the biometric authentication unit 160, auxiliary sensing means may further be included; the auxiliary sensing means may be a distance sensor capable of measuring the distance to the taker, such as a radar or LiDAR.
  • When the auxiliary sensing means can measure distance, it can be used, for example, to recognize when the taker comes within a certain distance of the dispenser 100 and to start the image sensor 170 so that it captures images, and to stop capture when the taker moves outside that distance. This solves the problem of the image sensor 170 capturing objects other than the taker and having those images analyzed by the prediction unit 130 or the control unit 110.
  • When the dispenser is operated using only the image sensor 170 and the auxiliary sensing means, it can effectively care for takers' health in a non-contact manner by checking whether the taker has taken the medicine and estimating the taker's health status. This is particularly helpful for patients who are difficult to authenticate through physical contact and for elderly people (for example, dementia patients), allowing their medication to be managed smartly and their health to be managed effectively.
  • As described above, the control unit 110 can use the taker's additional information (personal ID information, disease status) obtained through the central control server; for disease status, for example, it may use the individual's injury and illness code.
  • More specifically, when the taker's injury and illness code obtained through the control unit 110 is 'dementia', the dispenser 100 simultaneously uses the image sensor 170 and the auxiliary sensing means as described above and does not operate the biometric authentication unit 160, so that the taker's biometric information is acquired without contact; as another example, when the taker's injury and illness code is 'schizophrenia', the imaging method of the image sensor 170 is set to expect large movements of the subject.
  • the dispenser of the present invention may additionally include a wireless communication unit (not shown) capable of wireless communication in connection with an external management server.
  • The dispenser of the present invention can perform taker authentication in connection with the management server through the wireless communication unit, build databases for medication management, skin care, and health management (heart-rate checking), and check and manage histories. It can also analyze and predict the patient's condition more accurately by applying artificial-intelligence algorithms to big data built from the taker's medication data (dose, type of drug taken, taking time, taker state, taking history, etc.).
  • 110: control unit, 120: display unit
  • 170: image sensor, 180: situation determination unit

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Psychiatry (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Medicinal Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Social Psychology (AREA)
  • Signal Processing (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)

Abstract

A face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention is provided with the image sensor, which periodically captures and acquires images of a medicine taker, wherein authentication of the medicine taker is performed through the image sensor, a medicine-taking schedule of the medicine taker is stored in advance for the authenticated medicine taker, a command to take medicine is issued according to the stored medicine taking schedule, and the medicine taker is authenticated by using the image of the medicine taker.

Description

Smart dispenser based on facial recognition using an image sensor
The present invention relates to a smart dispenser, and more particularly to a face recognition-based smart dispenser using an image sensor, capable of authenticating a taker with the image sensor and executing the taker's dosing commands.
In modern society, interest in health is increasing day by day. Along with this interest, technology has advanced rapidly, including more sophisticated data-analysis methods and tools built on real-time data collection, making it possible to monitor one's health status and receive personalized health-care services.
In addition, as customer needs diversify and expectations rise with changes in consumer awareness, health services and related systems are becoming more convenient and more customized, and personalized health-care businesses such as lifestyle-disease prevention and weight management, built on accumulated personal health data, are growing rapidly.
According to the National Statistical Office, a 'population reversal' in which the share of the elderly exceeds the share of the young has occurred in Korea since 2017, and the aging era is expected to begin in earnest now that the working-age population has started to decline.
Moreover, because the elderly take several drugs at the same time, drug interactions are very likely to cause side effects, so a doctor or pharmacist should be consulted in advance before stopping a drug on one's own or taking additional drugs to relieve pain.
Drugs must be taken in the prescribed dose at the prescribed time to achieve their best effect. Elderly patients, however, often have multiple complications, so they take many kinds of drugs and take them frequently, which makes it essential that the correct drug be taken at the designated time.
Therefore, to solve the problems described above, research is needed on a face recognition-based smart dispenser using an image sensor that can dispense drugs according to a prescribed dosing schedule and can perform image sensor authentication so that no one receives the drug without authorization.
[Prior patent literature]
Korean Patent Publication No. 2013-024065 (published March 8, 2013)
An object of the present invention is to provide a face recognition-based smart dispenser using an image sensor that sounds a notification when the dose time arrives, recognizes biometric information through the image sensor connected to the device and opens the dispenser, lets the taker take the medicine, and at the same time recognizes the taker's taking motion through the image sensor so that whether the medicine was taken is confirmed and recorded automatically.
A face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention includes an image sensor that periodically photographs and acquires images of the taker, performs taker authentication through the image sensor, stores the taker's dosing schedule in advance for the authenticated taker, and executes dosing commands according to the stored schedule, wherein the taker is authenticated using the taker's image.
In the dispenser described above, the dispenser includes: a control unit that receives images acquired by periodically photographing the taker through the image sensor and performs recognition of the taker and authentication of that taker; a display unit that visually provides various notifications to the taker; a key input unit through which the taker can directly issue a dose command by key input or perform key inputs such as screen display and power on/off; an output unit provided to output voice guidance to the outside; and a situation determination unit that recognizes the taker's facial expression features in at least one image of the taker, can store a plurality of facial expression recognition results, detects features related to temporal changes in the taker's expression, and can estimate the taker's mental health state from the detected features.
The dispenser may further include a prediction unit that analyzes the images photographed through the image sensor, predicts the taker's state based on an artificial intelligence learning algorithm, and provides the prediction result to the control unit so that, under the control of the control unit, whether to take the medicine is displayed through the display unit or voice guidance about taking is provided through the output unit according to the prediction result.
The control unit stores and manages at least one of the dose, the type of drug taken, the dose time, the taker's state, and the taking history in order to execute the taker's dosing commands.
The taker is authenticated by extracting the taker's feature points from the facial image acquired by the image sensor, based on pixel information in that image.
The situation determination unit uses a skin care algorithm so that the taker's health can be checked according to skin changes.
The taker may also be authenticated by the situation determination unit, which programs a health management algorithm and checks the heart rate in order to confirm the taker's state.
The face recognition-based smart dispenser using an image sensor of the present invention dispenses medicine only to a taker authenticated through image sensor authentication, which strengthens security and prevents in advance medical accidents in which someone else might take the medicine.
It can also provide health-care services by managing the medication of elderly people living alone or of one- to two-person households.
In addition, non-contact biometric information is acquired to reduce the subject's reluctance and discomfort about using the device and to improve ease of use, and by applying AI (deep-learning) processing and management to big data built from the subject's unique biometric information, additional value-added services (health, skin, medication management, etc.) can be provided.
Furthermore, by executing dose commands according to the prescribed dosing schedule, drug misuse is prevented; even if the taker is not aware of the schedule, the dose, the type of drug taken, the dose time, the taker's state, the taking history, and so on are checked before dispensing, so medication can be managed smartly and, further, health management is improved and unnecessary social medical costs are reduced.
FIG. 1 is a front view of a face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the detailed configuration of the face recognition-based smart dispenser using the image sensor of FIG. 1.
Hereinafter, specific embodiments of the present invention will be described in detail with reference to the drawings. However, the spirit of the present invention is not limited to the presented embodiments; those skilled in the art who understand the spirit of the invention may easily propose other embodiments within the scope of the same idea by adding, changing, or deleting elements, and such embodiments are also included within the scope of the invention. Components that have the same function within the scope of the same idea are described with the same reference numerals in the drawings of each embodiment.
FIG. 1 is a front view of a face recognition-based smart dispenser using an image sensor according to an embodiment of the present invention, and FIG. 2 is a block diagram showing the detailed configuration of the face recognition-based smart dispenser using the image sensor of FIG. 1.
The dispenser 100 of the present invention basically uses the image sensor 170 to sound a notification when the dose time arrives, recognizes biometric information through the image sensor 170 connected to the device, opens the drug inlet 101 that is locked with a locking device, and lets the taker take the medicine; at the same time it recognizes the taker's taking motion through the image sensor 170 so that whether the medicine was taken is automatically confirmed and recorded.
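The overall flow described in this paragraph can be summarized in a short sketch. The code below is a minimal, non-authoritative illustration only; all of its arguments (capture, authenticate, detect_intake, unlock, lock, notify, log) are hypothetical callables standing in for the image sensor 170, the control unit 110, and the locking device, and the two-minute observation window is an assumption rather than a value from the patent.

```python
import time
from datetime import datetime

def dispense_cycle(capture, authenticate, detect_intake, unlock, lock, notify, log):
    """One dosing cycle: alarm -> face authentication -> unlock the drug inlet ->
    watch the taking motion -> record the result.

    Every argument is a caller-supplied callable standing in for a hardware or
    recognition block; this is an illustrative sketch, not the patent's firmware."""
    notify("It is time to take your medicine.")      # dose-time notification
    if not authenticate(capture()):                  # face recognition on a captured frame
        notify("Authentication failed; the medicine stays locked.")
        return False
    unlock()                                         # open the locked drug inlet 101
    taken = False
    deadline = time.time() + 120                     # observe intake for up to two minutes (assumed)
    while time.time() < deadline and not taken:
        taken = detect_intake(capture())             # hand-to-mouth motion recognition
    lock()
    log.append({"time": datetime.now().isoformat(), "taken": taken})
    return taken
```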
Referring to FIG. 1, the dispenser 100 of the present invention includes, on its front face, a display unit 120 that displays various guides, a key input unit 140 that receives external inputs, and an image sensor 170 that photographs the taker's appearance for use in biometric authentication.
As shown in FIG. 2, the dispenser 100 of the present invention includes a control unit 110, a display unit 120, a prediction unit 130, a key input unit 140, an output unit 150, a biometric authentication unit 160, an image sensor 170, and a situation determination unit 180.
The image sensor 170 is located on the front of the dispenser 100, which uses image sensor 170 authentication; when the taker is within the recognition distance it can periodically photograph and acquire images, and the acquired images are transmitted to the control unit 110. The recognition distance is preferably within 1 m.
The image sensor 170 may also be used as a combination of several sensors: an image sensor that photographs the front of the taker, such as the whole face, together with auxiliary image sensors that can photograph the taker from other angles (for example, the position of the taker's hand or the side of the taker's face). This compensates for the limits of situation recognition that can arise with a single image sensor 170 (for example, a backlit scene or an error such as a hardware failure of the image sensor itself). It is also desirable to provide an additional illuminance sensor so that the taker's facial image can be acquired in dim places where facial recognition would otherwise be difficult.
Furthermore, a camera capable of substituting for the image sensor 170 may be provided to acquire a plurality of images and transmit them to the control unit 110 for use in taker authentication.
The control unit 110 stores the medication schedule of the taker authenticated and recognized through the image sensor 170, and can execute dosing commands according to the stored schedule.
At this time, the control unit 110 can obtain the taker's personal ID information (for example, a resident registration number) and, through that information, additional information about the taker's disease status from a central control server (not shown) that can provide it. That is, once the taker's authentication is complete, the taker's additional information (personal ID information, disease status) can be used.
The control unit 110 may use a feature point extraction method to recognize the taker and authenticate that taker. For example, the taker's feature points may be extracted from the facial image acquired by the image sensor 170, based on pixel information in that image. These may be feature points constituting the face, for example the center of an eyebrow, both corners of an eye, the corners of the mouth, and the center of the lips. In this case, the facial outline and the contours of the eyebrows, eyes, nose, mouth, and ears may be extracted using a boundary-line extraction method, and feature points may be extracted by checking the pixel value of each pixel within each contour.
Furthermore, at least one complexion region may be extracted using the detected feature points. A complexion region is one of the distinguishable regions into which the face is divided and may be a region constituting the face.
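One way the feature-point extraction described above could be realized is with an off-the-shelf facial-landmark detector. The sketch below is only an illustration under assumptions: it uses dlib's 68-point shape predictor (the model file name is an assumption and must be downloaded separately), and the normalization and threshold in the toy authentication check are placeholders rather than the patent's method.

```python
import numpy as np
import dlib  # assumed available; the 68-point predictor model is a separate download

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # hypothetical local path

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def extract_feature_points(gray_image):
    """Return an (N, 2) array of facial landmarks (eyebrows, eye corners, mouth
    corners, lip center, jaw line) for the largest detected face, or None."""
    faces = detector(gray_image)
    if not faces:
        return None
    face = max(faces, key=lambda r: r.area())
    shape = predictor(gray_image, face)
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=float)

def authenticate(gray_image, enrolled_points, tolerance=0.1):
    """Very rough authentication sketch: normalize landmarks by face size and
    compare against an enrolled template with a mean-distance threshold."""
    pts = extract_feature_points(gray_image)
    if pts is None:
        return False
    def normalize(p):
        p = p - p.mean(axis=0)
        return p / (np.linalg.norm(p, axis=1).max() + 1e-9)
    score = np.mean(np.linalg.norm(normalize(pts) - normalize(enrolled_points), axis=1))
    return score < tolerance
```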
The control unit 110 may have built-in memory to store the dose, the type of drug taken, the dose time, the taker's state, the taking history, and the like, and for this purpose may be a programmable microcontroller (MCU).
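The kind of record the control unit's memory might hold, together with a check for a due dose, could look like the minimal sketch below; the field names and the 30-minute dosing window are assumptions used for illustration, not values taken from the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime, time, timedelta
from typing import List

@dataclass
class DoseEntry:
    drug_name: str          # type of drug to be taken
    amount: str             # dose, e.g. "1 tablet"
    dose_time: time         # scheduled time of day

@dataclass
class TakerRecord:
    taker_id: str
    schedule: List[DoseEntry]
    state: str = "normal"                               # taker state estimated by the prediction unit
    history: List[dict] = field(default_factory=list)   # taking history

def due_doses(record: TakerRecord, now: datetime, window_min: int = 30) -> List[DoseEntry]:
    """Return schedule entries whose dose time falls within the current window."""
    due = []
    for entry in record.schedule:
        scheduled = datetime.combine(now.date(), entry.dose_time)
        if abs(now - scheduled) <= timedelta(minutes=window_min):
            due.append(entry)
    return due
```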
The control unit 110 also receives from the prediction unit 130 a prediction of the taker's state based on an artificial intelligence learning algorithm, and can control the dispenser so that whether to take the medicine is displayed through the display unit 120 or voice guidance about taking is provided through the output unit 150.
The display unit 120 visually provides various notifications to the taker; it may display, for example, the dose, the type of drug taken, the dose time, the taker's state, and the taking history.
The prediction unit 130 analyzes the images captured through the image sensor 170, predicts the taker's state based on an artificial intelligence learning algorithm, and causes the prediction result to be displayed through the display unit 120 or announced as voice guidance through the output unit 150. Here, the artificial intelligence learning algorithm may be, for example, an artificial neural network, a radial basis function (RBF) neural network, or a support vector machine algorithm.
In particular, it is preferable to use a support vector machine algorithm, which extracts and classifies facial feature points into patterns and can therefore estimate well the cases that deviate from a pattern.
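As a rough illustration of the support-vector-machine-based prediction mentioned above, the sketch below trains scikit-learn's SVC on flattened feature-point vectors; the random training data and the state labels are placeholders, not clinical categories from the patent.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy training set: each row is a flattened (x, y) landmark vector, each label a
# placeholder taker-state category.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 136))          # 68 landmarks x 2 coordinates
y_train = rng.choice(["normal", "drowsy", "distressed"], size=60)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

def predict_state(landmarks: np.ndarray) -> str:
    """Predict the taker's state from a (68, 2) landmark array."""
    return model.predict(landmarks.reshape(1, -1))[0]
```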
More specifically, the prediction unit 130 analyzes the taker's images to predict the taker's state; most of the taker's images are used to recognize the taker's face and hands. The prediction unit 130 extracts feature points of the face and hands (for example, eyes, nose, and mouth) from the captured images of the taker's face and hands and uses them to predict the taker's state.
However, when the taker wears an additional facial artifact, it becomes difficult for the prediction unit 130 to accurately extract feature points from the image captured through the image sensor 170.
Such additional facial artifacts may be, for example, glasses, sunglasses, hats, masks, or artificial beards worn by the taker.
When the taker's face cannot be recognized because of such an artifact, a voice guide such as "Please remove the artificial structure from your face." or "Recognition is currently not possible due to an artificial structure; please check." may be output or displayed on the display unit 120.
Furthermore, the prediction unit 130 may include the above-mentioned artifacts (for example, sunglasses and masks) as factors when analyzing and learning from the images captured through the image sensor 170, so that it learns about them. Through this learning, the prediction unit 130 can better estimate that the taker is wearing an artifact and more clearly request its removal through the display unit 120 or the output unit 150.
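The guidance issued when an artifact blocks recognition could be wired up roughly as below; the occlusion labels are assumed outputs of a classifier such as the one sketched earlier, retrained with occluded samples, and the prompt texts simply echo the guidance mentioned in the preceding paragraph.

```python
from typing import Optional

# Placeholder labels an occlusion classifier might emit; the mapping is illustrative.
OCCLUSION_PROMPTS = {
    "sunglasses": "Please remove the artificial structure from your face.",
    "mask": "Please remove the artificial structure from your face.",
    "unknown_occlusion": "Recognition is currently not possible due to an artificial structure; please check.",
}

def guidance_for(label: str) -> Optional[str]:
    """Return the voice/display prompt for an occlusion label, or None if the face is clear."""
    return OCCLUSION_PROMPTS.get(label)
```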
That is, various voice guidance or visual indications related to taking are given according to the taker's state, and the prediction results and the taker's taking history are stored in a separate database, so that the system turns them into big data and learns and evolves on its own.
The situation determination unit 180 recognizes the taker's facial expression features in each of a plurality of image sensor 170 images captured in time series, can store the plurality of facial expression recognition results as time-series data, detects features related to temporal changes in the taker's expression from that data, and can estimate the taker's mental health state from the detected features.
For example, if a smiling face is continuously recognized, the medication time may be delayed; if a depressed face is continuously recognized, the medication time may be brought forward so that the medicine is provided to the taker earlier. Expression features can also be used to judge the taker's health status.
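A minimal sketch of the expression-based timing rule just described is shown below; it assumes a hypothetical per-frame expression label stream, and the dominance threshold and 30-minute shift are illustrative values, not figures from the patent.

```python
from collections import Counter
from datetime import timedelta
from typing import List

def adjust_dose_time(expression_series: List[str], base_offset=timedelta(0),
                     min_fraction=0.7, shift=timedelta(minutes=30)):
    """Shift the scheduled dose later when a smiling face dominates the recent
    time series, earlier when a depressed face dominates (illustrative rule)."""
    if not expression_series:
        return base_offset
    label, count = Counter(expression_series).most_common(1)[0]
    if count / len(expression_series) < min_fraction:
        return base_offset
    if label == "happy":
        return base_offset + shift      # delay the dose time
    if label == "sad":
        return base_offset - shift      # bring the dose time forward
    return base_offset
```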
In addition, the situation determination unit 180 recognizes, in time series, not only the taker's facial expression features but also the hand that actually brings the medicine to the taker's mouth, and can better confirm whether the taker has taken the medicine based on the relationship between feature points of the taker's face (for example, the position of the mouth) and feature points of the hand. That is, by additionally recognizing changes in the facial expression recognition results and changes between the feature points of the face and the hand, the taker's health status can be estimated and the accuracy of estimating or predicting whether the taker takes the medicine can be increased.
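The face-hand relationship could, in rough sketch form, be checked by tracking the distance between the mouth landmark and a hand keypoint over the frame sequence and looking for a sustained approach below a threshold; the keypoint sources, the relative threshold, and the frame count below are assumptions for illustration.

```python
import numpy as np
from typing import List, Tuple

def intake_detected(mouth_points: List[Tuple[float, float]],
                    hand_points: List[Tuple[float, float]],
                    face_width: float,
                    rel_threshold: float = 0.25,
                    min_frames: int = 3) -> bool:
    """Return True if the hand keypoint stays close to the mouth (relative to face
    width) for at least `min_frames` consecutive frames of the time series."""
    streak = 0
    for mouth, hand in zip(mouth_points, hand_points):
        d = np.linalg.norm(np.asarray(mouth) - np.asarray(hand))
        if d < rel_threshold * face_width:
            streak += 1
            if streak >= min_frames:
                return True
        else:
            streak = 0
    return False
```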
Furthermore, the situation determination unit 180 may use a skin care algorithm to predict what health problems (breathing status, activity level, heart rate status) may arise from skin changes (pores, wrinkles, sebum, etc.) and thereby check the taker's state. For example, in the case of stomach cancer, rapid changes such as skin temperature or flushing occur; the images provided through the image sensor 170 can be analyzed and checked for such changes, which makes it possible to prevent medical accidents in advance.
The situation determination unit 180 can also check the heart rate by programming a health management algorithm to confirm the taker's state; for example, it may identify a set of emotions such as anger, contempt, disgust, fear, happiness, neutrality, sadness, and surprise in the facial image obtained from the image sensor 170.
More specifically, the situation determination unit 180 analyzes the taker's time-series data and images obtained through the image sensor 170 to estimate changes in blood flow in the taker's face, which also makes it possible to estimate the heart rate. A method such as the deep learning described above may additionally be used for this estimation. Furthermore, the taker can be authenticated by checking the heart rate with such a health management algorithm; for example, taker authentication may be performed according to whether the measured heart rate matches the previously registered heart rate of that taker.
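Blood-flow-based heart-rate estimation of this kind is often approximated by remote photoplethysmography: averaging the green channel over a facial region across frames and reading off the dominant frequency. The sketch below is a simplified version under several assumptions (RGB channel order, fixed frame rate, a pre-cropped forehead region, no motion compensation) and is not the patent's algorithm; the tolerance used for the heart-rate match check is likewise illustrative.

```python
import numpy as np

def estimate_heart_rate(roi_frames: np.ndarray, fps: float = 30.0) -> float:
    """Estimate heart rate (bpm) from a stack of forehead-ROI frames shaped
    (T, H, W, 3): take the mean green-channel signal and the FFT peak in the
    0.7-3.0 Hz (42-180 bpm) band. Simplified rPPG sketch, assuming RGB order."""
    signal = roi_frames[..., 1].mean(axis=(1, 2))        # mean green value per frame
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    if not band.any():
        return 0.0                                       # clip too short to resolve the band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return float(peak_freq * 60.0)

def heart_rate_matches(measured_bpm: float, registered_bpm: float, tol_bpm: float = 10.0) -> bool:
    """Illustrative secondary check: does the measured rate match the enrolled one?"""
    return abs(measured_bpm - registered_bpm) <= tol_bpm
```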
키입력부(140)는 복용자가 직접 키입력에 의해 약 복용 명령을 내리거나, 화면표시, 전원 온/오프 등의 키입력이 가능하도록 제공할 수 있다.The key input unit 140 may provide a user to directly input a key input to give a command to take medicine or to input a key such as a screen display or power on/off.
출력부(150)는 외부로 음성 안내를 출력할 수 있도록 스피커 형태로 제공된다.The output unit 150 is provided in the form of a speaker so as to output voice guidance to the outside.
음성 안내로는 예컨대 복용 지시 안내, 복용시간 경고 알람, 인증 여부 알람, 복용량 음성 안내 등이 될 수 있다.The voice guidance may be, for example, a dosage instruction guide, a dosage time warning alarm, an authentication status alarm, and a dosage voice guidance.
생체인증부(160)는 이미지 센서(170) 이미지 인증외 별도의 인증수단으로서, 이미지 센서(170) 인증에 지속적으로 실패하거나 사용 불가한 상황(어두운 곳 등)에서 대체 인증수단이 될 수 있으며, 예컨대 지문, 홍채, 음성, 심박수를 이용한 생체인증방식이 채택될 수 있다. 또한 생체인증을 위해 미리 복용자의 생체인증정보로서, 지문, 홍채, 음성, 심박수를 입력받아 저장하는 것이 바람직하다.The biometric authentication unit 160 is a separate authentication means other than the image sensor 170 image authentication, and may be an alternative authentication means in situations where authentication of the image sensor 170 continuously fails or is unavailable (dark places, etc.), For example, a biometric authentication method using fingerprint, iris, voice, and heart rate may be adopted. In addition, for biometric authentication, it is desirable to receive and store a fingerprint, iris, voice, and heart rate as the biometric authentication information of the user in advance.
In addition to the image sensor 170 and the biometric authentication unit 160 described above, the dispenser may further include auxiliary sensing means. More specifically, the auxiliary sensing means may be a distance sensor, such as a radar or LiDAR, capable of measuring the distance to the taker. When the auxiliary sensing means is such a distance sensor, it can be used, for example, to recognize when the taker comes within a certain distance of the dispenser 100 and activate the image sensor 170 so that it starts capturing images, and to stop the image sensor 170 from capturing when the taker moves outside that distance. Used in this way, it avoids the problem of the image sensor 170 capturing an object other than the taker and the captured image then being analyzed by the prediction unit 130 or the control unit 110.
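A minimal sketch of such proximity-gated imaging is given below; the one-metre threshold, the read_distance callable standing in for the radar/LiDAR sensor, and the camera interface standing in for the image sensor 170 are assumptions made for illustration.

import time

ACTIVATION_DISTANCE_M = 1.0   # assumed threshold; not specified in the description
POLL_INTERVAL_S = 0.2

def proximity_gated_capture(read_distance, camera, process_frame):
    """Run the image sensor only while a person is within range of the dispenser."""
    capturing = False
    while True:
        distance = read_distance()                 # metres reported by radar/LiDAR
        if distance is not None and distance <= ACTIVATION_DISTANCE_M:
            if not capturing:
                camera.start()                     # taker approached: start imaging
                capturing = True
            process_frame(camera.capture())        # e.g. hand off to the prediction unit
        elif capturing:
            camera.stop()                          # taker left: stop imaging
            capturing = False
        time.sleep(POLL_INTERVAL_S)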
Moreover, when the dispenser is operated using only the image sensor 170 and the auxiliary sensing means, it can effectively look after takers' health in a contact-free manner by checking whether the taker has taken the medicine and estimating the taker's health status without any physical contact. This is particularly helpful for patients and elderly people (for example, dementia patients) for whom authentication through separate physical contact is difficult, allowing their medication to be managed smartly and, further, their health to be managed efficiently.
As described above, the control unit 110 can also make use of additional information about the taker (personal ID information, disease status) obtained through the central control server; in the case of disease status, for example, the individual's diagnosis code may be used.
More specifically, when the taker's diagnosis code obtained through the control unit 110 is 'dementia', the dispenser 100 simultaneously uses the image sensor 170 and the auxiliary sensing means as described above while the biometric authentication unit 160 is not operated, so that the taker's biometric information is acquired without contact. As another example, when the taker's diagnosis code is 'schizophrenia', the imaging mode of the image sensor 170 is set on the assumption that the subject will move considerably. In this way, a customized operating mode can be provided according to the taker's disease status, such as the diagnosis code.
In other words, by actively using the taker's diagnosis code in the operation of the control unit 110, the prediction unit 130, and the situation determination unit 180, the various recognition methods can be changed and applied in response to the disease the taker is suffering from.
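Purely as an illustration of this idea (the codes, flags, and default values below are assumptions rather than values fixed by the description), the mapping from a taker's diagnosis code to an operating profile could be expressed as a simple lookup table:

from dataclasses import dataclass

@dataclass
class DispenserProfile:
    use_image_sensor: bool = True
    use_auxiliary_sensor: bool = False
    use_biometric_unit: bool = True
    expect_large_motion: bool = False   # e.g. motion-tolerant capture settings

# Hypothetical mapping from a diagnosis code to an operating profile.
PROFILES = {
    "dementia":      DispenserProfile(use_auxiliary_sensor=True, use_biometric_unit=False),
    "schizophrenia": DispenserProfile(expect_large_motion=True),
}

def select_profile(diagnosis_code: str) -> DispenserProfile:
    """Return the operating profile for the taker's diagnosis code (default if unknown)."""
    return PROFILES.get(diagnosis_code, DispenserProfile())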
Furthermore, the dispenser of the present invention may additionally include a wireless communication unit (not shown) capable of wireless communication in conjunction with an external management server. Through the wireless communication unit, the dispenser can perform taker authentication in cooperation with the management server, build a database for dose management, skin care, and health management (heart rate checks), and use it to review and manage the taker's history. In particular, by applying an artificial intelligence algorithm to the taker's dosing data accumulated as big data (dosage, type of medicine taken, dosing time, taker status, dosing history, etc.), the taker's condition can be analyzed and predicted more accurately.
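As a sketch only, one way the wireless communication unit could report a dosing record to such a management server is shown below; the HTTP transport, endpoint, and field names are assumptions made for illustration, not part of the disclosed design.

import json
import urllib.request
from datetime import datetime, timezone

def report_dose_event(server_url, taker_id, medicine, dose, status):
    """Send one dosing record to the (hypothetical) management server."""
    record = {
        "taker_id": taker_id,
        "medicine": medicine,
        "dose": dose,
        "status": status,                              # e.g. "taken" or "missed"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        server_url,                                    # e.g. "https://example.com/dose-events"
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # raises on network/HTTP errors
        return response.status == 200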
[Explanation of Reference Numerals]
100: Dispenser
110: Control unit          120: Display unit
130: Prediction unit       140: Key input unit
150: Output unit           160: Biometric authentication unit
170: Image sensor          180: Situation determination unit

Claims (7)

  1. A face recognition-based smart dispenser using an image sensor, comprising an image sensor that periodically photographs and acquires an image of a taker,
    wherein the dispenser performs taker authentication through the image sensor, stores in advance the dosing schedule of the authenticated taker, and executes a dosing command according to the stored dosing schedule,
    and wherein authentication of the corresponding taker is performed using the image of the taker.
  2. The smart dispenser of claim 1,
    wherein the dispenser comprises:
    a control unit that receives images acquired by periodically photographing the taker through the image sensor, and performs recognition of the taker and authentication of the taker;
    a display unit that visually provides various notifications to the taker;
    a key input unit through which the taker can directly issue a dosing command by key input, or perform key inputs such as screen display and power on/off;
    an output unit provided to output voice guidance to the outside; and
    a situation determination unit capable of recognizing facial expression features of the taker from at least one image of the taker, storing a plurality of facial expression recognition results, detecting a feature related to a temporal change in the taker's expression, and estimating the mental health status of the taker based on the detected feature.
  3. The smart dispenser of claim 2,
    wherein the dispenser
    further comprises a prediction unit that analyzes the images captured through the image sensor, predicts the taker's condition based on an artificial intelligence learning algorithm, and provides the prediction result to the control unit so that, according to the prediction result and under the control of the control unit, whether the medicine has been taken is displayed through the display unit or voice guidance on taking the medicine is provided through the output unit.
  4. The smart dispenser of claim 2,
    wherein the control unit
    stores and manages at least one of the dosage, the type of medicine taken, the dosing time, the taker's status, and the dosing history in order to execute the taker's dosing command.
  5. The smart dispenser of claim 1,
    wherein the authentication of the taker
    is performed by extracting feature points of the taker from the facial image of the taker acquired by photographing with the image sensor, based on pixel information in the facial image.
  6. The smart dispenser of claim 2,
    wherein the situation determination unit uses a skin care algorithm so that the taker's health can be checked according to skin changes.
  7. The smart dispenser of claim 2,
    wherein the authentication of the taker
    is performed by the situation determination unit checking the heart rate via a programmed health management algorithm in order to confirm the taker's condition.
PCT/KR2019/004330 2019-04-11 2019-04-11 Face recognition-based smart dispenser using image sensor WO2020209414A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0042271 2019-04-11
KR1020190042271A KR102185492B1 (en) 2019-04-11 2019-04-11 Smart dispenser based facial recognition using image sensor

Publications (1)

Publication Number Publication Date
WO2020209414A1 true WO2020209414A1 (en) 2020-10-15

Family

ID=72752071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/004330 WO2020209414A1 (en) 2019-04-11 2019-04-11 Face recognition-based smart dispenser using image sensor

Country Status (2)

Country Link
KR (1) KR102185492B1 (en)
WO (1) WO2020209414A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102596492B1 (en) * 2023-01-12 2023-10-31 주식회사 에버정보기술 A Drug Dispenser Capable of Measuring non-contact Biometric Information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080090505A (en) * 2006-01-06 2008-10-08 아셀알엑스 파마슈티컬스 인코퍼레이티드 Drug storage and dispensing devices and systems comprising the same
JP2015505259A (en) * 2011-12-21 2015-02-19 デカ・プロダクツ・リミテッド・パートナーシップ System, method and device for administering oral medication
WO2015060296A1 (en) * 2013-10-22 2015-04-30 株式会社湯山製作所 Drug distribution assistance system
JP2016147006A (en) * 2015-02-13 2016-08-18 オムロン株式会社 Health management assist device and health management assist method
JP3214893U (en) * 2011-08-27 2018-02-15 クラフト,ダニエル,エル. Portable drug dispenser

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3214893B2 (en) 1992-05-27 2001-10-02 松下電器産業株式会社 Optical transmitter
CA2773575C (en) 2011-04-04 2019-03-12 Mark Andrew Hanson Medication management and reporting technology
KR101301821B1 (en) 2011-08-30 2013-08-29 한국 한의학 연구원 Apparatus and method for detecting complexion, apparatus and method for determinig health using complexion, apparatus and method for generating health sort function
KR101971695B1 (en) * 2016-04-01 2019-04-25 한국전자통신연구원 Medication monitoring apparatus and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080090505A (en) * 2006-01-06 2008-10-08 아셀알엑스 파마슈티컬스 인코퍼레이티드 Drug storage and dispensing devices and systems comprising the same
JP3214893U (en) * 2011-08-27 2018-02-15 クラフト,ダニエル,エル. Portable drug dispenser
JP2015505259A (en) * 2011-12-21 2015-02-19 デカ・プロダクツ・リミテッド・パートナーシップ System, method and device for administering oral medication
WO2015060296A1 (en) * 2013-10-22 2015-04-30 株式会社湯山製作所 Drug distribution assistance system
JP2016147006A (en) * 2015-02-13 2016-08-18 オムロン株式会社 Health management assist device and health management assist method

Also Published As

Publication number Publication date
KR20200120778A (en) 2020-10-22
KR102185492B1 (en) 2020-12-03

Similar Documents

Publication Publication Date Title
US20230222805A1 (en) Machine learning based monitoring system
CN111540105B (en) Method, system, equipment and storage medium for controlling access control
CN110327051A (en) A kind of intelligent guarding system in the nursing type home for destitute based on cloud platform
US20220079439A1 (en) Biometric Imaging and Biotelemetry System
CN105224804A (en) Intelligent medical comprehensive detection system
US20220338757A1 (en) System and method for non-face-to-face health status measurement through camera-based vital sign data extraction and electronic questionnaire
CN110660472A (en) Hospital management early warning system and method based on face recognition technology
CN110140180A (en) Patient monitoring system and method
CN109993063A (en) A kind of method and terminal identified to rescue personnel
WO2020006263A1 (en) System and methods for brain health monitoring and seizure detection and prediction
KR20210062534A (en) A method for measuring a physiological parameter of a subject in a contactless manner
CN110755091A (en) Personal mental health monitoring system and method
WO2020085576A1 (en) Method for providing health care service
Tang et al. Signal identification system for developing rehabilitative device using deep learning algorithms
US20180130555A1 (en) Systems and methods for intelligent admissions
WO2020209414A1 (en) Face recognition-based smart dispenser using image sensor
CN104637189A (en) ATM help seeking terminal
CN214476427U (en) Intelligent admission information registration robot system
CN212724738U (en) National health guarantee management system
JP6635412B1 (en) Cognitive function judgment system
CN116570246A (en) Epileptic monitoring and remote alarm system
KR20090065716A (en) Ubiquitous security and healthcare system using the iris
Migliorelli et al. A store-and-forward cloud-based telemonitoring system for automatic assessing dysarthria evolution in neurological diseases from video-recording analysis
Chen et al. Biovitals™: a personalized multivariate physiology analytics using continuous mobile biosensors
KR102596492B1 (en) A Drug Dispenser Capable of Measuring non-contact Biometric Information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924194

Country of ref document: EP

Kind code of ref document: A1