WO2023096394A1 - Server for determining posture type and operation method thereof - Google Patents

Info

Publication number
WO2023096394A1
Authority
WO
WIPO (PCT)
Prior art keywords
posture type
key point
posture
change
determining
Prior art date
2021-11-26
Application number
PCT/KR2022/018788
Other languages
French (fr)
Korean (ko)
Inventor
이태훈
송유중
Original Assignee
주식회사 공훈
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-11-26
Filing date
2022-11-25
Application filed by 주식회사 공훈
Publication of WO2023096394A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Definitions

  • the present invention relates to a server for determining a posture type and an operating method thereof, and more particularly, to a technique for determining a posture type of an object by applying key points to spatial coordinates.
  • conventionally, a wearable device equipped with a sensor is worn by the user, and the values sensed by the sensor are analyzed to determine whether the user has fallen.
  • the present invention generates 3D spatial coordinates using a 2D image, assigns 3D coordinates to each key point of the object, and analyzes the coordinate change of the key points to determine the posture type of the object.
  • the present invention provides a server for determining a posture type, and an operating method thereof, that measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in spatial coordinates, and analyzes the posture change rate to determine the posture type of the object.
  • a server for determining a posture type according to an embodiment of the present invention includes: a communication unit 210 that receives a two-dimensional image from an image acquisition device 100 that photographs an object; an image processing unit 220 that generates three-dimensional spatial coordinates from the two-dimensional image and identifies the object; a measurement unit 230 that assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates; a determination unit 240 that analyzes the coordinate change of the key points to determine the posture type of the object; and a database 250 that stores learning data for object identification and posture type determination, so that the posture type of the object is determined by applying the key points to the spatial coordinates.
  • the measurement unit measures a posture change rate including a coordinate value and a time value corresponding to a coordinate change of a key point in spatial coordinates, and the determination unit analyzes the posture change rate to determine the posture type of the object.
  • the measurement unit identifies a plurality of fixed objects and at least one moving object in the spatial coordinates, and measures whether the moving object moves from a first object among the plurality of objects to a second object different from the first object within a preset time; the determination unit may determine the posture type of the moving object based on this measurement of its movement.
  • the measurement unit sets the second object as a region of interest based on the learning data, and identifies as the moving object any object whose coordinate values change to coordinate values within the region of the second object within the preset time.
  • the image acquisition device acquires a thermal image by sensing heat generated by the object; the measurement unit measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in spatial coordinates; and the determination unit may determine the posture type of the object by analyzing the posture change rate and detecting a thermal change in the acquired thermal image that exceeds a preset change criterion.
  • a method of operating a server for determining a posture type includes: generating three-dimensional spatial coordinates using a two-dimensional image generated by photographing an object, and identifying the object; assigning three-dimensional coordinates to each key point of the object; measuring the coordinate change of each key point in the spatial coordinates; and determining the posture type of the object by analyzing the coordinate change of the key points, so that the posture type of the object is determined by applying the key points to the spatial coordinates.
  • the present invention can improve the accuracy of posture type determination by applying key points to spatial coordinates to determine the posture type of an object.
  • the present invention determines the posture type of an object by analyzing a posture change rate comprising the coordinate value and the time value corresponding to the coordinate change of a key point in spatial coordinates, thereby further improving the accuracy of determining whether a posture is an accidental fall or voluntary lying down and reducing the false positive rate of conventional posture estimation.
  • FIG. 1 is a block diagram showing a posture type determination system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing the posture type determination system of FIG. 1 in more detail.
  • FIG. 3 is a block diagram illustrating the server of FIG. 1 in more detail.
  • FIG. 5 is another example for explaining a conventional posture estimation method.
  • FIG. 8 is an example for explaining a method of recognizing a situation of an object using voice and video analysis according to the present invention.
  • FIG. 9 is an example for explaining a method of determining the posture type of an object using a thermal image.
  • the posture type determination system 10 includes an image capture device 100, a server 200, and a terminal 300.
  • the image acquisition device 100 can be installed in various environments, such as a nursing hospital, a children's shelter, or a private home, and is used to monitor a subject. The subjects may vary depending on the purpose of use, such as the elderly, children, or the disabled.
  • the server 200 analyzes the video captured by the image capture device 100 to monitor the monitoring target, and transmits corresponding event information to the terminal 300 when an abnormal behavior or fall of the monitoring target is detected.
  • the user of the terminal 300 may check the state of the monitoring target by referring to the event information sent from the server 200.
  • the terminal 300 may be a smart phone, desktop, or laptop, and may be various devices capable of communication, but is not limited thereto.
  • FIG. 2 is a block diagram showing the posture type determination system of FIG. 1 in more detail.
  • the image capture device 100 may include a CCTV for capturing images and an NVR for recording.
  • the server 200 may provide NVR connection and include software for image analysis.
  • FIG. 3 is a block diagram showing the server of FIG. 1 in more detail.
  • the server 200 includes a communication unit 210, an image processing unit 220, a measurement unit 230, a determination unit 240, and a database 250.
  • the communication unit 210 receives a 2D image from the image capture device 100 that captures an object.
  • the image processing unit 220 generates 3-dimensional space coordinates using a 2-dimensional image and identifies an object.
  • the present invention can generate three-dimensional spatial coordinates from a 2D image using a single camera or multiple cameras.
  • with a single camera, spatial coordinates can be generated by combining the camera's intrinsic and extrinsic parameter values, obtained from the camera specifications, with measurement indices captured by that camera and reference points on the image.
  • with multiple cameras, spatial coordinates can be generated by first measuring the distance between the cameras and then triangulating each object represented in the images using the inter-camera distance.
  • the image processing unit 220 may identify an object existing in a 2D image using learning data stored in the database 250 based on supervised learning.
  • Supervised learning is a method of finding an output value corresponding to a learner's input value by referring to learning data.
  • objects may include furniture and/or structures such as bookshelves and beds in a space.
  • the object may also include a shape identified as a human body and the corresponding person.
  • the measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures a coordinate change of the key point in spatial coordinates.
  • a key point is a main point of a moving object in the image, for example the head, a shoulder, an arm, or a leg.
  • FIG. 4 is an example for explaining a conventional posture estimation method, and FIG. 5 is another such example; conventionally, the posture of an object was estimated by learning from training images showing the transition from a standing state to a fallen state.
  • because the prior art recognizes only the posture change itself, such as the transition from standing to fallen, it is difficult to determine whether a person fell by accident or fell voluntarily in order to lie down.
  • FIG. 6 is an example for explaining the posture type determination method of the present invention.
  • FIG. 7 is an example comparing the conventional method with the posture type determination method of the present invention.
  • to solve the conventional problem, the measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates.
  • the determination unit 240 analyzes the coordinate change of the key points to determine the posture type of the object.
  • the measurement unit 230 may implement a key-point-based posture type model in three-dimensional space and, based on the three-dimensional positional posture data obtained from the posture type model, extract the overall characteristics of the posture and the specific characteristics of each joint part to produce a human body model in the form of a 3D mesh.
  • the measurement unit 230 may measure a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in spatial coordinates, and the determination unit 240 may analyze the posture change rate to determine the posture type of the object.
  • the determination unit 240 may utilize learning data stored in a database to determine a posture type based on a posture change rate.
  • in an accidental fall, the coordinate values change within a very short time, whereas in voluntary lying down the change in the time value is noticeably larger than in an accidental fall.
  • the measurement unit 230 may measure location information (a region of interest, ROI) of an object relative to the floor surface on which a fall may occur, and the determination unit 240 may perform fall determination by considering the difference in relative coordinate values between the object and the floor, treated as the region of interest in comparison with other objects in the three-dimensional space; that is, when an object identified as a body is located on the floor corresponding to the region of interest rather than on another object such as a bed, or when the body leaves a first object, for example a bed, and moves to a second object, namely the floor, within a preset time, the determination unit 240 may determine that a fall accident has occurred.
  • the image acquisition device 100 may further include a means for recognizing voice produced by the object, and voice analysis can further improve the accuracy of determining the object's posture type from the video.
  • when the server 200 receives the 2D thermal image from the image acquisition device 100, it determines the posture type of the object using the thermal image.
  • the image acquisition device 100 includes a thermal sensor or a thermal imaging camera to capture a thermal image.
  • the image processing unit 220 can obtain a silhouette of the object's shape from the heat the object generates, and the background, which generates no heat, naturally appears dark in the thermal image, preventing the exposure of additional privacy-related information.
  • the server 200 may determine from changes in the thermal image whether the object's motion is normal, for example a person voluntarily lying down, or abnormal, for example collapsing or falling; that is, when a thermal change exceeding a preset change criterion caused by collapsing or falling is detected, the server 200 may determine that a fall accident has occurred.
  • the image processing unit 220 learns the shape of an object silhouette using polygons and key points, generates three-dimensional spatial coordinates using the object silhouette, and identifies the object.
  • the present invention uses a labeling tool on a large number of heat-sensed silhouettes: polygons are drawn on each image silhouette to classify the learning data, and the head, shoulders, knees, arms, and so on are marked to classify the data further; the classified learning data is then trained using deep learning so that the image processing unit 220 can identify objects.
  • the measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures a coordinate change of the key point in spatial coordinates.
  • in the data classification work for constructing the learning data for object identification, a skilled annotator can mark and process the heat-sensed silhouettes using the labeling tool, and key points can be assigned to each body part using the information learned by deep learning.
  • the determination unit 240 analyzes the change in coordinate values measured by the measurement unit 230 to determine the posture type of the object.
  • the present invention determines the posture type while providing privacy protection of the object through the object silhouette.
  • the measurement unit 230 may measure the time value corresponding to the coordinate change of the key point, and the determination unit 240 may determine the posture type of the object by comparing and analyzing the time value with the coordinate value.
  • the present invention can determine the posture type while providing privacy protection of the object through the object silhouette, and can provide more powerful privacy protection by blocking the exposure of additional information other than the object.
  • the measurement unit 230 measures the time value corresponding to the coordinate change of a key point, and the determination unit 240 compares and analyzes the coordinate value and the time value together to determine the posture type of the object; this further improves the accuracy of determining whether a posture is an accidental fall or voluntary lying down, and reduces the false positive rate of conventional posture estimation.
  • the present invention can improve the accuracy of determining the posture type by combining image information and spatial information and assigning coordinates for each key point of an object in the spatial information.
  • the present invention can reduce the false positive rate of posture estimation by measuring key point changes to identify the reason or purpose of a posture change, which is difficult to learn through posture estimation alone.
  • beyond the purpose of lying down, the present invention can provide learning that classifies object behavior for other type identification purposes, such as identifying eating or exercise.
  • the control unit 290 may include a module for training the learning data, may control or monitor the operation of a plurality of image acquisition devices 100 by location, may manage users for each image acquisition device 100 or terminal 300 to generate management information, and may store the management information in the database 250.

Abstract

Disclosed are a server for determining a posture type and an operating method thereof. The server comprises: an image processing unit that generates three-dimensional spatial coordinates using a two-dimensional image and identifies an object; a measurement unit that assigns three-dimensional coordinates to each key point of the object and measures the change in the coordinates of each key point in the spatial coordinates; and a determination unit that determines the posture type of the object by analyzing the change in the coordinates of the key points. The server thus determines the posture type of an object by applying the key points to spatial coordinates.

Description

Server for determining posture type and operation method thereof
The present invention relates to a server for determining a posture type and an operating method thereof, and more particularly to a technique for determining the posture type of an object by applying key points to spatial coordinates.
Conventionally, as described in the patent literature, a wearable device equipped with a sensor is worn by the user, and the values sensed by the sensor are analyzed to determine whether the user has fallen. However, some users may refuse to wear such a device; that refusal makes it difficult to detect whether the user has fallen, and it is also difficult for a guardian caring for the user to monitor whether the user is refusing to wear the device.
Recently, techniques have been developed that analyze video of a user against a large amount of learning data related to fall images to determine whether a fall has occurred; however, the accuracy of distinguishing an accidental fall from voluntary lying down remains low.
To solve these problems, the present invention provides a server for determining a posture type, and an operating method thereof, that generates three-dimensional spatial coordinates from a two-dimensional image, assigns three-dimensional coordinates to each key point of an object, and analyzes the coordinate changes of the key points to determine the posture type of the object.
The present invention also provides a server for determining a posture type, and an operating method thereof, that measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates, and analyzes the posture change rate to determine the posture type of the object.
A server for determining a posture type according to an embodiment of the present invention includes: a communication unit 210 that receives a two-dimensional image from an image acquisition device 100 that photographs an object; an image processing unit 220 that generates three-dimensional spatial coordinates from the two-dimensional image and identifies the object; a measurement unit 230 that assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates; a determination unit 240 that analyzes the coordinate change of the key points to determine the posture type of the object; and a database 250 that stores learning data for object identification and posture type determination, so that the posture type of the object is determined by applying the key points to the spatial coordinates.
In one embodiment, the measurement unit measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates, and the determination unit analyzes the posture change rate to determine the posture type of the object.
In one embodiment, the measurement unit identifies a plurality of fixed objects and at least one moving object in the spatial coordinates, and measures whether the moving object moves from a first object among the plurality of objects to a second object different from the first object within a preset time; the determination unit determines the posture type of the moving object based on this measurement of its movement.
In one embodiment, the measurement unit sets the second object as a region of interest based on the learning data, and identifies as the moving object any object whose coordinate values change to coordinate values within the region of the second object within the preset time.
In one embodiment, the image acquisition device acquires a thermal image by sensing heat generated by the object; the measurement unit measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates; and the determination unit determines the posture type of the object by analyzing the posture change rate and detecting a thermal change in the acquired thermal image that exceeds a preset change criterion.
A method of operating a server for determining a posture type according to an embodiment of the present invention includes: generating three-dimensional spatial coordinates using a two-dimensional image generated by photographing an object, and identifying the object; assigning three-dimensional coordinates to each key point of the object; measuring the coordinate change of each key point in the spatial coordinates; and determining the posture type of the object by analyzing the coordinate change of the key points, so that the posture type of the object is determined by applying the key points to the spatial coordinates.
By applying key points to spatial coordinates to determine the posture type of an object, the present invention can improve the accuracy of posture type determination.
By analyzing a posture change rate comprising the coordinate value and the time value corresponding to the coordinate change of a key point in the spatial coordinates, the present invention can further improve the accuracy of determining whether a posture is an accidental fall or voluntary lying down, and can reduce the false positive rate of conventional posture estimation.
FIG. 1 is a block diagram showing a posture type determination system according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the posture type determination system of FIG. 1 in more detail.
FIG. 3 is a block diagram showing the server of FIG. 1 in more detail.
FIG. 4 is an example for explaining a conventional posture estimation method.
FIG. 5 is another example for explaining a conventional posture estimation method.
FIG. 6 is an example for explaining the posture type determination method of the present invention.
FIG. 7 is an example comparing the conventional method and the posture type determination method of the present invention.
FIG. 8 is an example for explaining a method of recognizing the situation of an object using voice and video analysis according to the present invention.
FIG. 9 is an example for explaining a method of determining the posture type of an object using a thermal image.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings and the contents described therein; however, the present invention is not restricted or limited by these embodiments.
FIG. 1 is a block diagram showing a posture type determination system according to an embodiment of the present invention. The posture type determination system 10 includes an image acquisition device 100, a server 200, and a terminal 300. The image acquisition device 100 can be installed in various environments, such as a nursing hospital, a children's shelter, or a private home, and is used to monitor a subject. The subjects may vary depending on the purpose of use, such as the elderly, children, or the disabled.
The server 200 analyzes the video captured by the image acquisition device 100 to monitor the subject, and transmits corresponding event information to the terminal 300 when abnormal behavior or a fall of the subject is detected.
The user of the terminal 300 may check the state of the subject by referring to the event information sent from the server 200. The terminal 300 may be a smartphone, desktop, or laptop, and may be any of various devices capable of communication, without being limited thereto.
FIG. 2 is a block diagram showing the posture type determination system of FIG. 1 in more detail. The image acquisition device 100 may include a CCTV camera for capturing images and an NVR for recording. The server 200 may provide the NVR connection and include software for image analysis.
FIG. 3 is a block diagram showing the server of FIG. 1 in more detail. The server 200 includes a communication unit 210, an image processing unit 220, a measurement unit 230, a determination unit 240, and a database 250.
The communication unit 210 receives a two-dimensional image from the image acquisition device 100 that photographs an object. The image processing unit 220 generates three-dimensional spatial coordinates from the two-dimensional image and identifies the object.
The present invention can generate three-dimensional spatial coordinates from a two-dimensional image using a single camera or multiple cameras. With a single camera, spatial coordinates can be generated by combining the camera's intrinsic and extrinsic parameter values, obtained from the camera specifications, with measurement indices captured by that camera and reference points on the image.
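By way of illustration, single-camera back-projection can be sketched as follows, assuming a calibrated camera (intrinsic matrix K, rotation R, translation t) and the common simplification that the observed reference point lies on a known plane, here the floor at Z = 0 in world coordinates. The function name and plane convention are illustrative assumptions, not details from the embodiment.

```python
import numpy as np

def pixel_to_floor_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world plane Z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> camera ray
    ray_world = R.T @ ray_cam                           # ray in world coordinates
    cam_center = -R.T @ t                               # camera position in world frame
    s = -cam_center[2] / ray_world[2]                   # intersect ray with plane Z = 0
    return cam_center + s * ray_world                   # 3D point on the floor
```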
With multiple cameras, spatial coordinates can be generated by first measuring the distance between the cameras and then triangulating each object represented in the images using that inter-camera distance.
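The multi-camera case can be sketched as a linear triangulation, assuming each camera's 3x4 projection matrix P = K[R | t] is known from calibration; the direct-linear-transform formulation below is one standard way to intersect the two viewing rays, not necessarily the exact method of the embodiment.

```python
import numpy as np

def triangulate(p1, p2, P1, P2):
    """Recover a 3D point from pixel observations p1, p2 in two calibrated views."""
    A = np.stack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)  # solve A X = 0 in the least-squares sense
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize to (x, y, z)
```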
The image processing unit 220 may identify an object present in the two-dimensional image using the learning data stored in the database 250, based on supervised learning. Supervised learning is a method of finding the output value corresponding to a learner's input value by referring to learning data. For example, objects may include furniture and/or structures in the space, such as bookshelves and beds; objects may also include shapes identified as a human body and the corresponding person.
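As a minimal sketch of this supervised-learning step, a classifier can be fit on labeled feature vectors drawn from the learning data and then used to label new detections; the feature choice (rough bounding-box dimensions) and the nearest-neighbor model are illustrative assumptions, since the embodiment does not fix a particular model.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical learning data: (height, width, depth) in meters -> object label.
X_train = [[2.0, 0.8, 0.4], [0.6, 2.0, 1.5], [1.7, 0.5, 0.3]]
y_train = ["bookshelf", "bed", "person"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[0.5, 1.9, 1.4]]))  # -> ['bed']
```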
The measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates. A key point is a main point of a moving object in the image, for example the head, a shoulder, an arm, or a leg.
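One way to represent this, sketched with assumed key-point names and an assumed dict layout, is to store each key point's 3D coordinates per frame and take the Euclidean displacement between frames as the coordinate change:

```python
import numpy as np

KEY_POINTS = ("head", "shoulder", "arm", "leg")  # assumed key-point set

def coordinate_change(prev, curr):
    """Per-key-point displacement between two frames.

    prev, curr: dicts mapping key-point name -> (x, y, z).
    """
    return {kp: float(np.linalg.norm(np.asarray(curr[kp]) - np.asarray(prev[kp])))
            for kp in KEY_POINTS}
```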
FIG. 4 is an example for explaining a conventional posture estimation method, and FIG. 5 is another such example. Conventionally, the posture of an object was estimated by learning from training images showing the transition from a standing state to a fallen state. However, because only the posture change itself is recognized, such as the transition from standing to fallen, it is difficult for the conventional approach to determine whether a person fell by accident or fell voluntarily in order to lie down.
FIG. 6 is an example for explaining the posture type determination method of the present invention, and FIG. 7 is an example comparing the conventional method with that of the present invention. To solve the conventional problem, the measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates. The determination unit 240 then analyzes the coordinate changes of the key points to determine the posture type of the object.
The measurement unit 230 may implement a key-point-based posture type model in three-dimensional space and, based on the three-dimensional positional posture data obtained from that model, extract the overall characteristics of the posture and the specific characteristics of each joint part to produce a human body model in the form of a 3D mesh.
According to an embodiment of the present invention, applying key points to spatial coordinates to determine the posture type of an object can improve the accuracy of posture type determination.
Specifically, the measurement unit 230 may measure a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates, and the determination unit 240 may analyze the posture change rate to determine the posture type of the object. The determination unit 240 may use the learning data stored in the database to determine the posture type from the posture change rate.
For example, in an accidental fall the coordinate values change within a very short time, whereas in voluntary lying down the change takes noticeably longer; the determination unit 240 can therefore analyze the posture change rate to determine, among the lying posture types of the object, the purpose behind the lying posture.
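A minimal sketch of this discrimination, with an assumed reference key point (the head) and an assumed rate threshold that the patent does not specify, pairs the coordinate change with the time over which it occurred and thresholds the resulting rate:

```python
def classify_lying_posture(drop_m, duration_s, rate_threshold=1.5):
    """Label a transition into a lying posture from its posture change rate.

    drop_m: vertical drop of the reference key point, in meters.
    duration_s: time over which the drop occurred, in seconds.
    """
    rate = drop_m / max(duration_s, 1e-6)  # posture change rate (m/s)
    return "accidental_fall" if rate > rate_threshold else "voluntary_lying"
```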
According to another embodiment of the present invention, the measurement unit 230 may measure location information (a region of interest, ROI) of an object relative to the floor surface on which a fall may occur, and the determination unit 240 may perform fall determination by considering the difference in relative coordinate values between the object and the floor, which can be treated as the region of interest in comparison with other objects in the three-dimensional space. That is, when an object identified as a body is located on the floor corresponding to the region of interest rather than on another object such as a bed, or when the body leaves a first object, for example a bed, and moves to a second object, namely the floor, within a preset time, the determination unit 240 may determine that a fall accident has occurred.
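This ROI test might be sketched as follows, under the assumptions that the regions of interest are axis-aligned boxes and that a one-second window stands in for the preset time; both are illustrative choices, not values from the disclosure.

```python
def detect_fall_to_floor(track, bed_roi, floor_roi, max_window_s=1.0):
    """track: list of (timestamp_s, (x, y, z)) samples of a body key point."""
    def inside(roi, p):  # axis-aligned box membership test
        (xmin, ymin, zmin), (xmax, ymax, zmax) = roi
        return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax and zmin <= p[2] <= zmax

    left_bed_at = None
    for t, p in track:
        if inside(bed_roi, p):
            left_bed_at = t                      # last moment still on the bed
        elif inside(floor_roi, p) and left_bed_at is not None:
            if t - left_bed_at <= max_window_s:  # reached the floor too quickly
                return True
    return False
```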
FIG. 8 is an example for explaining a method of recognizing the situation of an object using voice and video analysis according to the present invention. The image acquisition device 100 may further include a means for recognizing voice produced by the object, and voice analysis can further improve the accuracy of determining the object's posture type from the video.
FIG. 9 is an example for explaining a method of determining the posture type of an object using a thermal image. The image acquisition device 100 photographs the object to generate a two-dimensional thermal image or a real image. When the server 200 receives the two-dimensional thermal image from the image acquisition device 100, it determines the posture type of the object using the thermal image.
The image acquisition device 100 includes a thermal sensor or a thermal imaging camera to capture the thermal image. The image processing unit 220 can obtain a silhouette of the object's shape from the heat the object generates, and the background, which generates no heat, naturally appears dark in the thermal image, preventing the exposure of additional privacy-related information.
According to an embodiment, the server 200 may determine from changes in the thermal image whether the object's motion is normal, for example a person voluntarily lying down, or abnormal, for example collapsing or falling. That is, when a thermal change exceeding a preset change criterion caused by collapsing or falling is detected, the server 200 may determine that a fall accident has occurred.
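As a sketch of the thermal criterion, consecutive thermal frames can be differenced and the change compared against a preset threshold; the frame format (2D arrays of temperatures) and the threshold value are assumptions for illustration.

```python
import numpy as np

def thermal_change_exceeds(prev_frame, curr_frame, change_threshold=2.0):
    """True if the mean absolute temperature change between frames is too large."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > change_threshold
```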
The image processing unit 220 learns the shape of the object silhouette using polygons and key points, generates three-dimensional spatial coordinates using the object silhouette, and identifies the object.
The present invention uses a labeling tool on a large number of heat-sensed silhouettes: polygons are drawn on each image silhouette to classify the learning data, and the head, shoulders, knees, arms, and so on are marked on the silhouettes to classify the data further. The classified learning data is then trained using deep learning so that the image processing unit 220 can identify objects.
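A single classified training record for this silhouette data might look like the following; the schema, file name, and pixel values are purely illustrative assumptions.

```python
silhouette_record = {
    "image": "thermal_000123.png",                  # hypothetical file name
    "polygon": [(412, 88), (455, 90), (470, 210),   # silhouette outline (pixels)
                (430, 305), (398, 300), (405, 200)],
    "key_points": {                                 # marked body parts (pixels)
        "head": (435, 95),
        "shoulder": (440, 140),
        "arm": (455, 170),
        "knee": (420, 260),
    },
    "label": "person",
}
```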
The measurement unit 230 assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates.
In the data classification work for constructing the learning data for object identification, a skilled annotator can mark and process the heat-sensed silhouettes using the labeling tool, and key points can be assigned to each body part using the information learned by deep learning.
The determination unit 240 analyzes the changes in the coordinate values measured by the measurement unit 230 to determine the posture type of the object. The present invention thus determines the posture type while protecting the privacy of the object through the object silhouette.
The measurement unit 230 may measure the time value corresponding to the coordinate change of a key point, and the determination unit 240 may determine the posture type of the object by comparing and analyzing the coordinate value together with the time value.
The present invention can determine the posture type while protecting the privacy of the object through the object silhouette, and can provide even stronger privacy protection by blocking the exposure of additional information other than the object.
According to an embodiment of the present invention, the measurement unit 230 measures the time value corresponding to the coordinate change of a key point, and the determination unit 240 compares and analyzes the coordinate value and the time value together to determine the posture type of the object. This further improves the accuracy of determining whether a posture is an accidental fall or voluntary lying down, and reduces the false positive rate of conventional posture estimation.
The present invention can improve the accuracy of posture type determination by combining image information with spatial information and assigning coordinates to each key point of the object within the spatial information.
Whereas conventional approaches attach meaning only to posture estimation, the present invention can reduce the false positive rate of posture estimation by measuring key point changes to identify the reason or purpose of a posture change, which is difficult to learn through posture estimation alone. For example, beyond the purpose of lying down, the present invention can provide learning that classifies object behavior for other type identification purposes, such as identifying eating or exercise.
The control unit 290 may include a module for training the learning data, may control or monitor the operation of a plurality of image acquisition devices 100 by location, may manage users for each image acquisition device 100 or terminal 300 to generate management information, and may store the management information in the database 250.

Claims (6)

  1. A server for determining a posture type, comprising:
    a communication unit 210 that receives a two-dimensional image from an image acquisition device 100 that photographs an object;
    an image processing unit 220 that generates three-dimensional spatial coordinates from the two-dimensional image and identifies the object;
    a measurement unit 230 that assigns three-dimensional coordinates to each key point of the object and measures the coordinate change of each key point in the spatial coordinates;
    a determination unit 240 that analyzes the coordinate change of the key points to determine the posture type of the object; and
    a database 250 that stores learning data for the object identification and the posture type determination,
    wherein the posture type of the object is determined by applying the key points to the spatial coordinates.
  2. The server of claim 1,
    wherein the measurement unit measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates, and
    wherein the determination unit analyzes the posture change rate to determine the posture type of the object.
  3. The server of claim 1,
    wherein the measurement unit identifies a plurality of fixed objects and at least one moving object in the spatial coordinates, and measures whether the moving object moves from a first object among the plurality of objects to a second object different from the first object within a preset time, and
    wherein the determination unit determines the posture type of the moving object based on the measurement of the movement of the moving object.
  4. The server of claim 3,
    wherein the measurement unit sets the second object as a region of interest based on the learning data, and
    wherein the measurement unit identifies, as the moving object, an object whose coordinate values change to coordinate values within the region of the second object within the preset time.
  5. The server of claim 1,
    wherein the image acquisition device acquires a thermal image by sensing heat generated by the object,
    wherein the measurement unit measures a posture change rate comprising a coordinate value and a time value corresponding to the coordinate change of a key point in the spatial coordinates, and
    wherein the determination unit determines the posture type of the object by analyzing the posture change rate and detecting a thermal change in the acquired thermal image that exceeds a preset change criterion.
  6. A method of operating a server for determining a posture type, the method comprising:
    generating three-dimensional spatial coordinates using a two-dimensional image generated by photographing an object, and identifying the object;
    assigning three-dimensional coordinates to each key point of the object;
    measuring the coordinate change of each key point in the spatial coordinates; and
    determining the posture type of the object by analyzing the coordinate change of the key points,
    wherein the posture type of the object is determined by applying the key points to the spatial coordinates.
PCT/KR2022/018788 2021-11-26 2022-11-25 Server for determining posture type and operation method thereof WO2023096394A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0165457 2021-11-26
KR1020210165457A KR20230078063A (en) 2021-11-26 2021-11-26 Server for determining the posture type and operation method thereof

Publications (1)

Publication Number Publication Date
WO2023096394A1 (en)

Family

ID=86540080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/018788 WO2023096394A1 (en) 2021-11-26 2022-11-25 Server for determining posture type and operation method thereof

Country Status (2)

Country Link
KR (1) KR20230078063A (en)
WO (1) WO2023096394A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102199204B1 (en) 2019-05-31 2021-01-06 (주)엘센 System and method for detecting fall and falldown using wearable sensor device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002230531A (en) * 2001-02-06 2002-08-16 Hitachi Kokusai Electric Inc Method of detecting invading object
KR20180077683A (en) * 2016-12-29 2018-07-09 재단법인대구경북과학기술원 Apparatus of detecting treadmill based image analysis and method of detecting emergency using the same
KR20210013865A (en) * 2019-07-29 2021-02-08 에스앤즈 주식회사 Abnormal behavior detection system and method using generative adversarial network
KR20210020723A (en) * 2019-08-14 2021-02-24 건국대학교 산학협력단 Cctv camera device having assault detection function and method for detecting assault based on cctv image performed
KR20210098640A (en) * 2020-02-03 2021-08-11 한국생산기술연구원 System and method for recognizing behavior based on deep learning

Also Published As

Publication number Publication date
KR20230078063A (en) 2023-06-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22899076

Country of ref document: EP

Kind code of ref document: A1