TW202403664A - A fall and posture identifying method with safety caring and high identification handling - Google Patents


Info

Publication number
TW202403664A
TW202403664A
Authority
TW
Taiwan
Prior art keywords
module
fall
care
algorithm
sub
Prior art date
Application number
TW111124986A
Other languages
Chinese (zh)
Other versions
TWI820784B (en)
Inventor
駱樂
謝德崇
許錦輝
Original Assignee
百一電子股份有限公司
Priority date
Filing date
Publication date
Application filed by 百一電子股份有限公司
Priority to TW111124986A
Application granted
Publication of TWI820784B
Publication of TW202403664A

Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Machine Tool Sensing Apparatuses (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

This invention relates to a fall and posture identification method with safe care and high-privacy processing, for use in a fall and posture identification system. The system includes an image capture unit, an image processing unit, and a fall identification unit. First, the image capture unit captures body images of the care recipients to generate body image data corresponding to each care recipient. Next, the image processing unit receives the body image data and preprocesses it for image quality and noise. Finally, the fall identification unit receives the preprocessed body image data, judges whether a care recipient has fallen, and issues an instant notification message if so.

Description

A fall and posture recognition method with safe care and high-privacy processing

The present invention relates to a fall and posture recognition method with safe care and high-privacy processing, and in particular to a method for recognizing falls, postures, behaviors, and the like that provides smart care while protecting the privacy of the person being cared for.

With the arrival of an aging society, safety-care issues such as in-home care for the elderly keep emerging. In particular, reporting high-risk fall events so that injury is minimized and the care recipient receives proper attention in real time is a key pain point. Traditional nursing, however, requires substantial medical manpower. The most common way to reduce labor cost is real-time monitoring through cameras, but over a long period of care the privacy of the care recipient is difficult to protect.

An existing technique has the care recipient wear a smart detection wristband whose internal gyroscope analyzes and judges whether the wearer has fallen, and sends a notification upon a fall. However, this kind of detection is easily misled by the care recipient's daily activities and body movements, and wearing the sensor can cause physical discomfort, so the care recipient may also forget to carry or wear it.

In addition, with reference to Figures 22 and 23, the prior-art US patent US9600993 mentions in its specification (Col. 19, lines 1-52) processing the shape or outline of a person's image through Fourier transforms and Laplace transforms for privacy protection. The specification further mentions (Col. 20, lines 11-34) removing unwanted background (such as a desk lamp) while retaining the approximate shape or outline of the desired human figure, and (Col. 22, lines 1-26) that changes in a person's position can be used to detect fall or sit-down events; for example, if the position of a person's head drops rapidly in a short time down to the bottom edge of the field of view, a possible fall is inferred.

In such prior art, however, resolving a person's image in real time and applying Fourier or Laplace transforms imposes a heavy computational burden; with many care recipients, the system cannot sustain real-time image processing for multiple people and may crash. Moreover, once unwanted background has been removed, if the care recipient trips over a chair, the cause of the fall is hard to determine because the background is already gone. Finally, using only positional change as the basis for fall judgment easily leads to misjudgment.

Therefore, how to design a smart care method that protects the care recipient's safety and privacy, requires the care recipient to wear no detection device, and increases the accuracy of fall judgment is the problem and focus of this invention.

One object of the present invention is to provide a fall and posture recognition method with safe care and high-privacy processing that keeps the care recipient's safety and privacy protected.

Another object of the present invention is to provide a fall and posture recognition method with safe care and high-privacy processing that increases the accuracy of judging the care recipient's falls.

The present invention provides a fall and posture recognition method with safe care and high-privacy processing, for use in a fall, posture, and behavior recognition system that identifies and distinguishes the body state of at least one care recipient in at least one area. The fall and posture recognition system includes an image capture unit, an image processing unit, and a fall identification unit. The image capture unit is installed for the area and aimed at the care recipient; the image processing unit is signal-connected to the image capture unit; and the fall identification unit is signal-connected to the image processing unit. The fall and posture recognition method includes the following steps: a) the image capture unit captures body images of the care recipients to generate body image data corresponding to each care recipient; b) the image processing unit receives the body image data and preprocesses it for image quality and noise; and c) the fall identification unit receives the preprocessed body image data and judges from changes in human posture whether the care recipient has fallen; if the care recipient has fallen or shows another abnormal state, it issues an instant notification message.
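The three claimed steps form a simple pipeline. The patent does not specify an implementation; the following is a minimal sketch in which the class and parameter names are assumptions made for illustration, with each unit injected as a callable:

```python
# Hedged sketch of steps a)-c): capture -> preprocess -> judge -> notify.
class FallCareSystem:
    def __init__(self, capture, preprocess, detect_fall, notify):
        self.capture = capture          # step a): returns body image data
        self.preprocess = preprocess    # step b): quality / noise cleanup
        self.detect_fall = detect_fall  # step c): posture-change judgment
        self.notify = notify            # instant notification message

    def run_once(self):
        frame = self.capture()
        frame = self.preprocess(frame)
        fallen = self.detect_fall(frame)
        if fallen:
            self.notify("care recipient may have fallen")
        return fallen
```

In a deployment, `run_once` would be called in a loop per camera frame, with the real modules plugged in.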

In the above fall and posture recognition method, the image processing unit includes an illumination change analysis module and an ambient light source description module, and step b) includes the following steps: b1) the illumination change analysis module detects changes in the ambient light of the area to produce real-time information describing the scene's ambient light; b2) the ambient light source description module corrects and builds an ambient light source model in a timely manner from that real-time information; and b3) the image processing unit, guided by the ambient light source model, searches the area and detects and identifies the multiple care recipients present in the environment.
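The patent names the two light-source modules but not their math. One plausible sketch, under the assumption that an exponential moving average of frame brightness serves as the "ambient light source model", is:

```python
# Hedged sketch of steps b1)-b2): flag illumination changes and keep a
# running ambient-brightness model up to date. All constants are assumptions.
class AmbientLightModel:
    def __init__(self, alpha=0.1, change_threshold=30.0):
        self.alpha = alpha                      # model update rate
        self.change_threshold = change_threshold
        self.level = None                       # modeled ambient brightness

    def update(self, frame):
        """frame: 2-D list of grayscale pixel values 0-255.
        Returns True when an illumination change is detected (step b1)."""
        pixels = [p for row in frame for p in row]
        mean = sum(pixels) / len(pixels)
        if self.level is None:
            self.level = mean
            return False
        changed = abs(mean - self.level) > self.change_threshold
        # step b2): timely correction of the ambient light source model
        self.level = (1 - self.alpha) * self.level + self.alpha * mean
        return changed
```

The `changed` flag could then gate exposure compensation before the search in step b3).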

In the above fall and posture recognition method, the fall identification unit includes a care-recipient classification module and a care-recipient analysis module, and step c) includes the following steps: c1) the classification module, based on the image processing unit's search and detection of the area and its distinguishing of the environment within it, identifies care recipients and builds corresponding skeleton points and a rectangular target tracking box for each care recipient identified in the environment; c2) the classification module judges whether an identified person matches the care recipient's feature extraction and identification information, and if so builds a corresponding candidate box for the possible object; and c3) the analysis module, based on the feature extraction and identification information, analyzes and recognizes each care recipient's posture, gait, and/or behavior.
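Steps c1) and c2) can be pictured as deriving a tracking box from skeleton points and promoting it to a candidate box only when enough identification features match. The point format, feature representation, and match threshold below are assumptions for illustration:

```python
# Hedged sketch of steps c1)-c2): skeleton points -> tracking box -> candidate box.
def tracking_box(skeleton_points):
    """Rectangular target tracking box (xmin, ymin, xmax, ymax)
    around the detected skeleton points."""
    xs = [x for x, y in skeleton_points]
    ys = [y for x, y in skeleton_points]
    return (min(xs), min(ys), max(xs), max(ys))

def candidate_box(skeleton_points, matched_features, required=3):
    """Return a candidate bounding box if enough identification features
    match (step c2); otherwise None, so the flow returns to searching."""
    if len(matched_features) < required:
        return None
    return tracking_box(skeleton_points)
```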

In the above fall and posture recognition method, the classification module further includes a spatio-temporal filter sub-module, and step c1) includes a step c11): the spatio-temporal filter sub-module supplies appropriate logic and mechanisms, mask values, and feature thresholds to compute the specific features of the care recipients identified in the environment.
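The patent leaves the spatio-temporal filter's mask and threshold semantics open. A minimal sketch, assuming the filter averages each cell over a short temporal window and then thresholds the result into a binary feature mask, is:

```python
# Hedged sketch of step c11): temporal averaging plus a feature threshold.
from collections import deque

class SpatioTemporalFilter:
    def __init__(self, window=3, threshold=0.5):
        self.frames = deque(maxlen=window)  # temporal window of recent frames
        self.threshold = threshold          # feature threshold

    def feed(self, frame):
        """frame: flat list of float activation values per cell.
        Returns a binary mask (1 = feature present) after temporal smoothing,
        which suppresses one-frame noise and transient interference."""
        self.frames.append(frame)
        n = len(self.frames)
        means = [sum(f[i] for f in self.frames) / n for i in range(len(frame))]
        return [1 if m > self.threshold else 0 for m in means]
```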

In the above fall and posture recognition method, the analysis module includes a geometric analysis sub-module, a curvature algorithm sub-module, a shape algorithm sub-module, and a body-part algorithm sub-module (Body Part Algorithm), and step c3) includes a step c31): the geometric analysis sub-module analyzes and identifies the shape, size, and/or size changes of the care recipients identified in the environment; the curvature algorithm sub-module recognizes and computes posture changes in each care recipient's specific curvature; the shape algorithm sub-module computes and compares each care recipient's specific body posture; and the body-part algorithm sub-module computes each care recipient's body parts. In addition, a multi-person tracking algorithm model can be built at the same time, so that fall and posture recognition can run simultaneously when multiple people are in the scene.
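One concrete size/shape cue the geometric analysis sub-module could use is the tracking box's aspect ratio: an upright person's box is tall, a fallen person's box is wide. This is an illustrative assumption, not the patent's stated method, and the thresholds are made up:

```python
# Hedged sketch of step c31): coarse posture from bounding-box geometry.
def box_aspect(box):
    """Height-to-width ratio of a (xmin, ymin, xmax, ymax) box."""
    xmin, ymin, xmax, ymax = box
    return (ymax - ymin) / max(xmax - xmin, 1e-6)

def posture_from_box(box, upright_ratio=1.5, lying_ratio=0.7):
    """Classify a tracking box as upright, lying, or in between."""
    ratio = box_aspect(box)
    if ratio >= upright_ratio:
        return "upright"
    if ratio <= lying_ratio:
        return "lying"
    return "transitional"
```

In the full system this cue would be only one input among the curvature, shape, and body-part results.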

In the above fall and posture recognition method, the analysis module further includes an optical flow algorithm sub-module, a bag-of-words algorithm sub-module (Bag of Words), a human-skeleton-and-joint-point sub-module, a machine learning sub-module, a fall detection algorithm sub-module, and a scale-invariant feature transform (SIFT) module, and step c3) includes a step c32) after step c31): the optical flow sub-module performs motion detection, object segmentation, computation of time-to-collision and object expansion, motion-compensated coding, and/or stereo measurement on each care recipient's images; the bag-of-words sub-module encodes and classifies multiple feature-vector groups corresponding to the postures of the fall process; the human-skeleton-and-joint-point sub-module takes the computation results of the curvature, shape, and body-part sub-modules and produces detailed state information covering all activities and postures of each care recipient; the machine learning sub-module performs deep learning with neural networks to train, adjust, and optimize the fall identification unit; the fall detection sub-module takes the computation results of the curvature, shape, body-part, optical-flow, and bag-of-words sub-modules and builds a fall detection and judgment model; and the SIFT module compensates images for changes in each care recipient's distance within the area. In addition, the invention can integrate various AI algorithms and training on data sets to raise the recognition rate and reduce the chance of misjudging pseudo-actions.

Beyond the above fall and posture recognition method, the present invention also relates to a fall and posture recognition system with safe care and high-privacy processing that provides an intelligent visual electronic fence function, able to detect and report other possible accidents involving the care recipient. With the intelligent visual electronic fence module, step b) includes a step b1'): the module establishes a virtual electronic fence block within each area captured by the image capture unit (IP CAM) 1 and superimposes it on the care recipient's body image data. Step c) includes a step c1'): judging whether the care recipient enters the range of the electronic fence block or leaves it; if the care recipient enters the block, leaves it, or intrudes into a restricted zone, the system judges among the possible situations that the care recipient may have entered a designated dangerous area, stayed too long, or abnormally left, by day or night, the safe area set by the electronic fence; once confirmed, the instant notification message is issued immediately.
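The fence logic of steps b1') and c1') reduces to point-in-rectangle tests plus a dwell timer. A minimal sketch, with hypothetical coordinates, alert strings, and dwell limit:

```python
# Hedged sketch of the electronic fence checks: entry, overstay, exit.
import time

class ElectronicFence:
    def __init__(self, rect, max_dwell_s=300.0):
        self.rect = rect                # (xmin, ymin, xmax, ymax) fence block
        self.max_dwell_s = max_dwell_s  # allowed time inside the block
        self.entered_at = None

    def contains(self, point):
        x, y = point
        xmin, ymin, xmax, ymax = self.rect
        return xmin <= x <= xmax and ymin <= y <= ymax

    def check(self, point, now=None):
        """Return an alert string, or None, for one observed position."""
        now = time.monotonic() if now is None else now
        inside = self.contains(point)
        if inside and self.entered_at is None:
            self.entered_at = now
            return "entered fence block"
        if inside and now - self.entered_at > self.max_dwell_s:
            return "stayed too long in fence block"
        if not inside and self.entered_at is not None:
            self.entered_at = None
            return "left fence block"
        return None
```

The point checked would typically be the foot of the tracking box; any returned alert would trigger the instant notification after confirmation.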

In the above fall and posture recognition method, the image processing unit includes a visual-privacy-leakage-prevention algorithm module, and step b) includes a step b1''): the module performs feature contour processing and dynamic-image masking on the care recipient's body image data to produce a synthetic human-body mask that corresponds to the care recipient and shields safety and privacy.
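One way to realize such a destructive, privacy-preserving composite is to flatten every pixel belonging to the person to a uniform value so that only the silhouette survives. The frame and mask representation here is an assumption for illustration:

```python
# Hedged sketch of step b1''): replace the person's pixels with a flat
# silhouette, destroying identifying detail while keeping the outline.
def apply_privacy_mask(frame, person_mask, fill=255):
    """frame, person_mask: 2-D lists of the same size; mask cells equal to 1
    mark pixels belonging to the care recipient. Returns a new frame with
    those pixels flattened to `fill`."""
    return [
        [fill if person_mask[r][c] else frame[r][c]
         for c in range(len(frame[r]))]
        for r in range(len(frame))
    ]
```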

In the above fall and posture recognition method, step b1'') includes a step b11''): performing feature contour processing and dynamic-image masking on the images of multiple objects in the area to produce multiple privacy-shielding synthetic object masks corresponding to those objects.

In the above fall and posture recognition method, the image processing unit further includes a night-time fall and posture model, and step a) includes a step a1''): the night-time fall and posture model analyzes and intelligently enhances the night-time body image data through fuzzy-theory (Fuzzy Theory) analysis.
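The patent does not detail the fuzzy model. As a stand-in, a tiny fuzzy-logic brightness gain could be sketched as follows; the membership function breakpoints and gain range are illustrative assumptions only:

```python
# Hedged sketch of step a1''): fuzzy membership for "dark", defuzzified
# into an enhancement gain applied to night frames.
def membership_dark(brightness):
    """Degree (0..1) to which a mean brightness (0-255) counts as 'dark'."""
    if brightness <= 40:
        return 1.0
    if brightness >= 120:
        return 0.0
    return (120 - brightness) / 80.0

def night_gain(brightness, max_gain=3.0):
    """Defuzzified gain: fully dark -> max_gain, fully bright -> 1.0."""
    mu = membership_dark(brightness)
    return 1.0 + mu * (max_gain - 1.0)
```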

In the above fall and posture recognition method, the fall and posture recognition system further includes a communication unit signal-connected to the fall identification unit, and the method further includes a step d): the communication unit sends a caring inquiry message to each care recipient, and in an emergency also lets caregivers assess the on-site situation of the person being cared for and hold a two-way call.

In the above fall and posture recognition method, the care-recipient analysis module includes an occupancy anomaly detection unit, a prolonged-state anomaly detection unit, a daily-routine statistics unit, and a history browsing unit, and step c3) includes the following steps: c31') the occupancy anomaly detection unit judges at a first predetermined time whether the care recipient is in the expected area; c32') the prolonged-state anomaly detection unit judges whether the care recipient has remained motionless beyond a second predetermined time, or has stayed in the area too long; c33') the daily-routine statistics unit continuously and automatically records each care recipient's home-life routine; and c34') the history browsing unit allows browsing of each care recipient's recorded home-life routine.
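Steps c31') and c32') amount to two timers driven by per-frame observations. A minimal sketch, with the two predetermined time limits and alert strings as assumptions:

```python
# Hedged sketch of steps c31')-c32'): absence and motionlessness timers.
class AnomalyDetector:
    def __init__(self, expected_check_s, max_still_s):
        self.expected_check_s = expected_check_s  # first predetermined time
        self.max_still_s = max_still_s            # second predetermined time
        self.last_seen = None
        self.still_since = None

    def observe(self, now, present, moving):
        """One observation: is the care recipient present, and moving?"""
        alerts = []
        if present:
            self.last_seen = now
            if moving:
                self.still_since = None
            elif self.still_since is None:
                self.still_since = now
            elif now - self.still_since > self.max_still_s:
                alerts.append("motionless too long")
        elif self.last_seen is not None and now - self.last_seen > self.expected_check_s:
            alerts.append("absent at expected time")
        return alerts
```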

This invention's fall and posture recognition method with safe care and high-privacy processing uses the image capture unit to provide intelligent image-based care for each care recipient. It raises the accuracy of fall judgment, keeps the care recipient's safety and privacy protected, lowers medical manpower costs, and achieves all the objects stated above.

According to a first embodiment of the present invention, and with reference to Figure 1, a fall and posture recognition method with safe care and high-privacy processing is used in a fall and posture recognition system to identify and distinguish the body state of at least one care recipient (for example an elderly person) in at least one area. The fall and posture recognition system includes an image capture unit (IP CAM) 1, an image processing unit (pre-processing) 2, and a fall identification unit (detected human) 3. The image capture unit 1 is installed for the area and captures images aimed at the care recipient; the image processing unit 2 is signal-connected to the image capture unit 1; and the fall identification unit 3 is signal-connected to the image processing unit 2. The image capture unit 1 here is an ordinary, widely available, low-cost visible-light camera, but destructive image processing is applied to achieve a high-privacy design.

With reference to Figure 2: first, in step S101, the image capture unit 1 captures body images of the care recipients to generate body image data for each care recipient. In step S102, the image processing unit 2 receives the body image data and preprocesses it for image quality and noise. In step S103, the fall identification unit 3 receives the preprocessed body image data and judges from changes in human posture whether the care recipient has fallen. If not, the flow returns to step S101 and image capture of each care recipient continues; if a fall is judged, then in step S104 an instant notification message is issued and the relevant caregivers are notified to come to the scene immediately for emergency assistance.

According to a second embodiment of the present invention, with reference to Figure 3, the image processing unit 2 in this example includes an illumination change analysis module (illumination change estimation) 21 and an ambient light source description module (ambient light source tuning model) 22. With reference to Figure 4: after step S102, in step S1021, the illumination change analysis module 21 detects changes in the area's ambient light to produce real-time information describing the scene's ambient light, from which the ambient light is adjusted and compensated to give a stable, higher-quality image of the environment. In step S1022, the ambient light source description module 22 corrects and builds an ambient light source model in a timely manner from that real-time information, so that the activity-status images of the care recipients collected by the image capture unit 1 can be processed effectively. In step S1023, the image processing unit 2, guided by the ambient light source model, searches the area and detects the multiple identified care recipients in the environment.

In addition, the fall identification unit 3 may include a care-recipient classification module (Object Classification) 31 and a care-recipient analysis module 32, as shown in Figure 5. In this example the classification module 31 further includes a spatio-temporal filter sub-module (Spatio-Temporal Filter) 311, and the analysis module 32 includes a geometric analysis sub-module (Geometric Analysis) 321, a curvature algorithm sub-module (Curvature Algorithm) 322, a shape algorithm sub-module (Shape-based Algorithm) 323, and a body-part algorithm sub-module (Body Part Algorithm) 324.

After step S1023, in step S1031, the classification module 31, based on the image processing unit's search and detection of the area, identifies and distinguishes the care recipients within each environment of the area, and builds the corresponding skeleton points and a rectangular target tracking box for each identified care recipient. After step S1031 there is a step S10311: the spatio-temporal filter sub-module (Spatio-Temporal Filter) 311 supplies appropriate logic and mechanisms, mask values (Mask), and feature thresholds (Threshold) to compute the specific features of the identified care recipients, so that certain specific objects can be filtered out and interference or noise affecting the object to be detected can be filtered away. In step S1032, the classification module 31 judges whether an identified person in the environment matches the care recipient's feature extraction and identification information; if not, the flow returns to step S1031 to continue searching and detecting; if so, then in step S1033 a corresponding candidate bounding box of the possible object is built. In step S1034, the analysis module 32 analyzes and recognizes each care recipient's posture, gait, and/or behavior based on the feature extraction and identification information; combining these modules with other algorithms also avoids misjudging an apparent fall when people overlap. After step S1034 there is a step S10341: the geometric analysis sub-module 321 analyzes and identifies the shape, size, and/or size changes of the identified care recipients; the curvature algorithm sub-module (Curvature Algorithm) 322 recognizes and computes posture changes in each care recipient's specific curvature, assisting tasks such as locating the care recipient target, posture recognition, gait recognition, and behavior recognition; the shape algorithm sub-module (Shape-based Algorithm) 323 computes and compares each care recipient's specific body posture; and the body-part algorithm sub-module (Body Part Algorithm) 324 computes each care recipient's body parts.

Suppose that when a human body falls backward versus forward, the torso exhibits specific curvature changes for each specific posture change (as shown in Figures 6 and 7). The curvature algorithm sub-module 322 can therefore use changes in posture and body movement to help determine whether the elderly person or care recipient of interest may be prone to falling; the change in body curvature is undoubtedly key information. Through the continuous change of body curvature, the real-time changes of body shape and skeleton can be understood, which helps to catch, in real time while the person walks or moves in the environment, the specific process conditions and specific instantaneous changes that a fall necessarily involves. The main function of the shape algorithm sub-module 323 is to collect "shape descriptions" corresponding to different human behaviors (Figure 8); it complements the aforementioned curvature algorithm sub-module 322, each covering the other's weaknesses, and together they form part of the detection criteria and shape basis for recognizing different human postures, providing feature recognition calculations that assist in detecting the specific behavioral shape changes (such as walking or falling) exhibited when the body's posture changes.
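One simple way to quantify the torso curvature cue described above is the angle at the hip between the shoulder and the knee: near 180 degrees when standing straight, dropping sharply as the torso bends during a fall. The joint-point format and rate threshold are illustrative assumptions, not the patent's stated formula:

```python
# Hedged sketch: torso bend angle from three joint points, and a
# rate-of-change test as one fall cue.
import math

def torso_angle(shoulder, hip, knee):
    """Angle at the hip (degrees) between the hip->shoulder and hip->knee
    vectors; ~180 when standing straight, smaller when the torso bends."""
    ax, ay = shoulder[0] - hip[0], shoulder[1] - hip[1]
    bx, by = knee[0] - hip[0], knee[1] - hip[1]
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    cosv = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cosv))

def sudden_bend(angle_prev, angle_now, rate_threshold=60.0):
    """Flag a possible fall when the torso angle drops sharply per frame."""
    return (angle_prev - angle_now) > rate_threshold
```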

With reference to Figure 9, in this example the analysis module 32 further includes an optical flow algorithm sub-module (Optical Flow Algorithm) 325, a bag-of-words algorithm sub-module (Bag of Words) 326, a human-skeleton-and-joint-point sub-module (Bone & Articulation Point Algorithm) 327, a machine learning sub-module (Artificial Intelligence & Machine Learning) 328, a fall detection algorithm sub-module (Fall Detection AI Algorithm) 329, and a scale-invariant feature transform module (Scale-Invariant Feature Transform, SIFT) 320.

In addition, with reference to Figure 10, after step S10341 there is a step S10342: the optical flow sub-module (Optical Flow Algorithm) 325 performs motion detection, object segmentation, computation of time-to-collision and object expansion, motion-compensated coding, and/or stereo measurement on each care recipient's images; the bag-of-words sub-module 326 encodes and classifies multiple feature-vector groups corresponding to the postures of the fall process; the human-skeleton-and-joint-point sub-module 327 takes the computation results of the curvature sub-module 322, the shape sub-module 323, and the body-part sub-module 324 and produces detailed state information covering all activities and postures of each care recipient; the machine learning sub-module 328 performs deep learning with neural networks to train, adjust, and optimize the judgment of the fall identification unit 3; the fall detection sub-module 329 takes the computation results of the curvature sub-module 322, the shape sub-module 323, the body-part sub-module 324, the optical-flow sub-module 325, and the bag-of-words sub-module 326 and builds a fall detection and judgment model; and the SIFT module 320 compensates images for changes in each care recipient's distance within the area. A multi-person tracking algorithm model can also be built at the same time, so that fall and posture recognition can run simultaneously when multiple people are in the scene.

The optical flow algorithm sub-module 325 performs its computation using the optical flow method (optical flow, or optic flow). Optical flow refers to the apparent velocity of motion exhibited by the image of an object. A moving object can be perceived by the human eye because its motion forms a series of continuously changing images on the retina; this changing information flows continuously across the retina over time, much like a flow of light, hence the name optical flow. The optical flow method can describe the motion of human bodies or objects in the environment relative to the machine vision system (equivalent to the observer), and can be regarded as the motion of the surface or edges of the elderly person or care recipient (the observed target). In other words, optical flow arises from the motion of the foreground target in the scene, the motion of the camera, or the relative motion of the two. In practice, each pixel of each image of the environment is first assigned a velocity vector (the optical flow, comprising magnitude and direction), which forms an optical flow field. If there are no moving objects in the image, the optical flow field is continuous and uniform; if there is a moving object, its optical flow differs from that of the rest of the image, the field is no longer continuous and uniform, and the moving object and its position can thus be detected. The optical flow method is very useful in pattern recognition, computer vision, and other image processing fields; it can be used for motion detection, computation of time-to-collision and object expansion, or stereo measurement from object surfaces and edges. During the continuous process of a fall, the human body simultaneously produces changes in the optical flow fields of the limbs, skeleton, and joints; by detecting and identifying these changes, posture-change information can be obtained at the same time and combined with the aforementioned algorithms to assist fall recognition and the associated posture recognition and analysis.
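The per-pixel velocity-vector idea above can be sketched with the classic Lucas–Kanade least-squares estimate at a single point. This is a minimal illustration of the optical-flow principle, not the patent's actual implementation; the synthetic Gaussian-blob frames and all function names are assumptions made for the example.

```python
import math

def gaussian_blob(w, h, cx, cy, sigma=2.0):
    # Synthetic grayscale frame: a smooth bright blob centered at (cx, cy).
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def lucas_kanade_point(f1, f2, px, py, win=5):
    # Solve the 2x2 least-squares system  sum([Ix Iy]^T [Ix Iy]) [u v]^T = -sum([Ix Iy]^T It)
    # over a window, giving the flow vector (u, v) at (px, py).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(py - win, py + win + 1):
        for x in range(px - win, px + win + 1):
            ix = (f1[y][x + 1] - f1[y][x - 1]) / 2.0   # spatial gradient in x
            iy = (f1[y + 1][x] - f1[y - 1][x]) / 2.0   # spatial gradient in y
            it = f2[y][x] - f1[y][x]                   # temporal gradient
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 -= ix * it;  b2 -= iy * it
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

f1 = gaussian_blob(32, 32, 15.0, 16.0)
f2 = gaussian_blob(32, 32, 16.0, 16.0)   # the blob moved 1 pixel to the right
u, v = lucas_kanade_point(f1, f2, 16, 16, win=6)
```

For a rightward shift of one pixel the estimated flow comes out close to (1, 0); a production system would compute this densely (or with pyramids) over the whole frame.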

The bag-of-words algorithm sub-module 326 is a probabilistic model with a Bayesian multi-layer architecture. It provides a classification scheme for the various objects in the field for which an intelligent learning mechanism must be built, and can be applied to help judge and evaluate the likelihood of a fall and to provide auxiliary mechanisms. It collects the various video clips associated with falls and maps these fall-related clips to their corresponding codebooks. Accordingly, as the elderly person or care recipient moves about the field, a distribution curve of the codewords that may appear can be obtained and computed statistically; for example, the physical meaning of FIG. 11 can be read as a distribution curve of the number of occurrences of specific postures. Such curves are commonly used in image processing and computer vision to analyze the characteristics of image content. From the concept of the distribution curve, the continuous changes of a specific posture, and the judgment factors or principles by which they may lead to a fall, can be analyzed.
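The codeword mapping described above amounts to quantizing each pose feature vector to its nearest codebook entry and accumulating a histogram (the distribution curve of FIG. 11). The sketch below illustrates that step under assumed data: the 2-D features, the 3-word codebook, and the pose labels in the comments are all hypothetical, not taken from the patent.

```python
def nearest_codeword(feature, codebook):
    # Index of the codebook entry closest (squared Euclidean) to the feature vector.
    best, best_d = 0, float("inf")
    for i, word in enumerate(codebook):
        d = sum((f - w) ** 2 for f, w in zip(feature, word))
        if d < best_d:
            best, best_d = i, d
    return best

def codeword_histogram(clip_features, codebook):
    # Codeword-occurrence histogram over one video clip: the distribution curve.
    hist = [0] * len(codebook)
    for feat in clip_features:
        hist[nearest_codeword(feat, codebook)] += 1
    return hist

codebook = [[0.0, 1.0],   # hypothetical codeword, e.g. "upright"
            [1.0, 0.5],   # hypothetical codeword, e.g. "bending"
            [1.0, 0.0]]   # hypothetical codeword, e.g. "on the ground"
clip = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.55], [0.95, 0.1], [1.0, 0.05]]
hist = codeword_histogram(clip, codebook)
```

The resulting histogram shifts its mass from the first toward the last codeword as the clip progresses, which is the kind of occurrence pattern a fall/no-fall classifier can then be trained on.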

The human skeleton and joint point sub-module 327 takes the relevant data from the aforementioned sub-modules, such as the curvature algorithm sub-module 322, the human body local configuration algorithm sub-module 324, the shape algorithm sub-module 323, the optical flow algorithm sub-module 325, and the bag-of-words algorithm sub-module 326, to obtain detailed information on all activities and postures of the elderly persons or care recipients of interest in the environment. Algorithms are designed to integrate this detailed information so as to construct the skeletons and joint points of the human bodies in the environment, together with their key parameters. Once all these real-time key parameters of the human skeletons and joint points in the environment have been generated, the system effectively knows, at any moment, the relationships between the various objects and the human bodies in the scene, the personalized activity and posture details of each specific person, and the real-time interactions between persons. Based on these real-time skeleton and joint point parameters, the activities and ongoing processes of the persons of interest in the environment can be judged from the available real-time information, so as to recognize specific behaviors and actions such as standing, sitting, rising, lying down, reclining, squatting, falling, and so on. However, because a specific human behavior is composed of a whole series of postures, the problem is in fact very complex. In other words, different specific behaviors (Behavior) may often share the same specific posture (Pose). For example, the specific action of pitching forward (as shown in FIG. 5 and FIG. 6) contains continuous postures that closely resemble bending over, hunching, or leaning forward to pick something up; such cases may here be called "pseudo-falls". For human beings, with their intelligence, quickly distinguishing these subtle differences in movement is comparatively easy. But for a computer or machine to distinguish such cases as well, artificial intelligence (Artificial Intelligence) methods such as machine learning (Machine Learning) and deep learning (Deep Learning) must be used to design the algorithms and to build the machine's learning rules and models, so that the machine acquires artificial intelligence with the specified capabilities.
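To make the pseudo-fall ambiguity concrete, the sketch below classifies a pose from just two skeleton points (shoulder and hip) with simple geometric rules: torso tilt alone cannot separate a fall from bending over, but combining it with hip height can. This is a toy heuristic under assumed image coordinates (y grows downward) and illustrative thresholds; it is not the patent's trained model.

```python
import math

def torso_angle_deg(shoulder, hip):
    # Angle of the shoulder-to-hip segment from vertical (0 = upright, 90 = horizontal).
    dx, dy = hip[0] - shoulder[0], hip[1] - shoulder[1]   # image coords, y downward
    return math.degrees(math.atan2(abs(dx), abs(dy)))

def classify_pose(shoulder, hip, floor_y, lying_deg=60.0, low_hip_frac=0.85):
    # Fall-like: torso near horizontal AND the hip close to the floor line.
    # Bending forward tilts the torso but keeps the hip high -> "bending", a pseudo-fall.
    angle = torso_angle_deg(shoulder, hip)
    hip_low = hip[1] > low_hip_frac * floor_y
    if angle >= lying_deg and hip_low:
        return "lying/fall-like"
    if angle >= lying_deg:
        return "bending"
    return "upright"
```

A learned model, as the text goes on to describe, replaces such hand-set thresholds with rules fitted from labeled examples, which is what separates true falls from pseudo-falls reliably.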

Referring also to FIG. 12, which is a flow diagram of the machine learning sub-module 328, the first step is to choose the framework (or platform) in which the machine learning algorithm is designed. Commonly used frameworks include TensorFlow, PyTorch, and CNN-type convolutional architectures for fast feature embedding (e.g., Caffe). TensorFlow, for instance, can be regarded as a machine learning design tool containing many ready-to-use algorithms and functions; in particular, it provides a number of methods for building deep learning models.
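Whatever framework is chosen, the training loop it automates is the same fit/predict cycle. The framework-independent sketch below trains a logistic classifier by plain gradient descent on hypothetical 2-D fall features (torso tilt, downward hip speed); the data, feature choice, and labels are illustrative assumptions, not the patent's training set.

```python
import math, random

def train_logistic(samples, labels, lr=0.5, epochs=500, seed=0):
    # Batch gradient descent on the logistic loss - the cycle that frameworks
    # such as TensorFlow or PyTorch automate at scale.
    rnd = random.Random(seed)
    n = len(samples[0])
    w = [rnd.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # sigmoid probability of "fall"
            err = p - y
            for i in range(n):
                gw[i] += err * x[i]
            gb += err
        m = len(samples)
        w = [wi - lr * gwi / m for wi, gwi in zip(w, gw)]
        b -= lr * gb / m
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: (torso tilt, downward hip speed); label 1 = fall.
X = [[0.1, 0.0], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.15, 0.05], [0.95, 0.7]]
Y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, Y)
acc = sum(predict(w, b, x) == y for x, y in zip(X, Y)) / len(Y)
```

In a framework the same loop becomes a few lines (define model, compile with a loss, call fit); the deep networks described here simply stack many such learned layers.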

The fall detection algorithm sub-module 329 takes the relevant data from the aforementioned sub-modules, such as the curvature algorithm sub-module 322, the shape algorithm sub-module 323, the human body local configuration algorithm sub-module 324, the optical flow algorithm sub-module 325, and the bag-of-words algorithm sub-module 326, to obtain detailed information on all activities and postures of the elders or care recipients of interest in the environment, and algorithms are designed to integrate this detailed information so as to construct the skeletons, joint points, and key parameters of the human bodies in the environment. Once all these real-time key parameters of the human skeletons and joint points in the environment have been generated, the system effectively knows, at any moment, the relationships between the various objects and the human bodies in the scene, the personalized activity and posture details of each specific person, and the interactions between persons. In addition, the fall detection algorithm sub-module 329 has a built-in AI model established by methods such as deep learning (Deep Learning). Its input parameters combine the information from the aforementioned modules (such as the human skeleton and joint point module); besides the outputs of those modules, it also considers real-time changes in parameters such as the current position, velocity, and acceleration of each joint point of the human body, and uses the established AI model to carry out a series of fall detections and judgments.
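The position/velocity/acceleration cue mentioned above can be sketched with finite differences over a joint's tracked height. The trajectories, frame rate, and thresholds below are illustrative assumptions standing in for the patent's learned model: a fall shows a brief, fast drop of the hip, whereas sitting down lowers it slowly.

```python
def kinematics(track, dt=1.0 / 15):
    # Finite-difference velocity and acceleration of a joint coordinate over frames.
    v = [(track[i + 1] - track[i]) / dt for i in range(len(track) - 1)]
    a = [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]
    return v, a

def looks_like_fall(hip_height, dt=1.0 / 15, v_thresh=1.5, drop_thresh=0.5):
    # Flag a fall when the hip drops fast (peak downward speed above v_thresh m/s)
    # AND the net drop exceeds drop_thresh metres. Thresholds are illustrative.
    v, _ = kinematics(hip_height, dt)
    fast_drop = min(v) < -v_thresh
    net_drop = (hip_height[0] - hip_height[-1]) > drop_thresh
    return fast_drop and net_drop

# Hip height above the floor (metres), sampled at 15 fps over ~0.5 s:
fall = [0.90, 0.88, 0.80, 0.60, 0.35, 0.15, 0.10, 0.10]   # abrupt drop
sit  = [0.90, 0.85, 0.78, 0.70, 0.62, 0.55, 0.50, 0.48]   # controlled descent
```

In the described system these kinematic features are fed, together with the other modules' outputs, into the AI model rather than compared against fixed thresholds.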

The scale-invariant feature transform algorithm module 320 performs scale-space extremum detection, keypoint localization, orientation assignment, and keypoint description. Scale-space extremum detection searches over image locations at all scales, identifying potential interest points that are invariant to scale and rotation by means of a difference-of-Gaussian function. Keypoint localization fits a fine model at each candidate location to determine position and scale, with keypoints selected according to their stability. Orientation assignment assigns one or more orientations to each keypoint location based on local image gradient directions; all subsequent operations on the image data are performed relative to the orientation, scale, and position of the keypoints, thereby providing invariance to these transformations. Keypoint description measures the local image gradients, at the selected scale, in a neighborhood around each keypoint; these gradients are transformed into a representation that tolerates fairly large local shape deformations and illumination changes.
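The first SIFT stage, scale-space extremum detection, can be illustrated in one dimension: smooth the signal at several scales, take differences of adjacent Gaussian levels (DoG), and locate the strongest response across position and scale. This reduced 1-D sketch and its synthetic bump signal are assumptions for illustration only, not the module's full 2-D implementation.

```python
import math

def gauss_kernel(sigma):
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]          # unit-sum smoothing kernel

def smooth(signal, sigma):
    k = gauss_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)   # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_extremum(signal, sigmas):
    # Difference-of-Gaussian responses across scales; return the (position,
    # scale-pair index) of the strongest response - SIFT stage 1, reduced to 1-D.
    levels = [smooth(signal, s) for s in sigmas]
    dogs = [[b - a for a, b in zip(levels[i], levels[i + 1])]
            for i in range(len(levels) - 1)]
    best = (0.0, 0, 0)
    for si, d in enumerate(dogs):
        for x, val in enumerate(d):
            if abs(val) > best[0]:
                best = (abs(val), x, si)
    return best[1], best[2]

bump = [math.exp(-((x - 32) ** 2) / (2 * 3.0 ** 2)) for x in range(64)]
pos, scale_idx = dog_extremum(bump, [1.0, 2.0, 4.0, 8.0])
```

The extremum lands at the bump's center and at a scale comparable to the bump's width, which is exactly why DoG keypoints stay stable when the care recipient's distance from the camera (and hence apparent size) changes.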

According to a third embodiment of the present invention, a method in which fall and posture recognition with safe care and high privacy processing is used in combination with an intelligent visual electronic fence module is shown in FIG. 13 and FIG. 14. In the intelligent visual electronic fence embodiment, the image capture unit captures images of the care recipients of interest in the environment; after the image processing unit, the highly private streaming video is sent to the fall identification unit for fall recognition, while an identical stream is also sent to the intelligent visual electronic fence module to check whether any of the dangerous situations that the module is intended to guard against has been triggered. In the intelligent visual electronic fence module (AI Electrical Fence Algorithm) 23, as in step S101' before step S101, the intelligent visual electronic fence module 23 establishes an electronic fence block 230 in each area captured by the image capture unit (IP CAM) 1 and superimposes it on the body image data of the care recipients. Referring also to FIG. 15, as in step S103', the module judges whether the care recipient 4 is within or away from the range of the electronic fence block 230, and whether the care recipient is behaving abnormally; if so, an immediate notification message is issued.

The intelligent visual electronic fence module 23 integrates AI algorithms with electronic fence technology from the security field to construct an "AI electronic fence algorithm", extending its application to smart homes, smart long-term care, or the safe care of any human-centered space. Depending on the particular concerns and application needs, the fence range can be configured accordingly: besides assisting fall judgment and other multi-purpose recognition applications, the electronic fence can, for example, be set around places the care recipient rarely goes (such as idle space where sundries are stored); if the care recipient is within the range of the electronic fence, a dangerous situation is likely, and a notification is issued immediately. Thus, through computer vision and image processing techniques, an interaction/interference relationship is established between the target of interest (the care recipient) and the "visual electronic fence"; when this relationship is recognized and found to hold, the system issues an immediate notification.
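The interaction/interference test described above reduces, at its simplest, to checking whether the tracked person's bounding box intersects a configured fence block. The sketch below shows that check; the rectangle coordinates and the example fence zones are hypothetical, and a real deployment would also use polygonal zones and temporal smoothing.

```python
def boxes_overlap(a, b):
    # Axis-aligned overlap test between two (x1, y1, x2, y2) rectangles.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def fence_alert(person_box, fence_blocks):
    # Indices of fence blocks the tracked person's box intersects;
    # a non-empty result would trigger the immediate notification.
    return [i for i, fb in enumerate(fence_blocks) if boxes_overlap(person_box, fb)]

fences = [(0, 0, 100, 100),      # hypothetical: storage corner the recipient rarely enters
          (300, 0, 400, 50)]     # hypothetical: window ledge
```

For example, a person box of (80, 90, 120, 160) intersects the first zone and would raise an alert, while (150, 150, 200, 200) touches neither zone.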

According to a fourth embodiment of the present invention, a fall and posture recognition method with safe care and high privacy processing is shown in FIG. 16 and FIG. 17, in which the image processing unit 2 includes a visual privacy leakage prevention algorithm module (Privacy Protection Algorithm) 24. As in step S1011'' after step S101, the visual privacy leakage prevention algorithm module 24 performs feature contour processing and dynamic image algorithm masking on the body image data of the care recipient, to produce a destructive human-image mask corresponding to the care recipient with secure privacy shielding. Furthermore, after step S1021'', as in step S1012'', feature contour processing and dynamic image algorithm masking can also be applied to the images of multiple objects in the area, to produce multiple destructive object-image masks corresponding to those objects with secure privacy shielding.

The visual privacy leakage prevention algorithm module 24 integrates "image processing" and "AI algorithm" techniques to apply feature contour masking to the images from visible-light cameras, superimpose dynamic image algorithms, and finally synthesize the image, achieving a high recognition rate while safeguarding the privacy of human-centered safety care. In addition, the same masking can be applied to every care-recipient target in the monitored environment, preventing the detailed features and contours of individual objects from being leaked, for further privacy protection.
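One common form of destructive masking is block averaging (pixelation): the silhouette needed for posture analysis survives, but fine identifying detail is irreversibly destroyed. The sketch below shows the idea on a grayscale image represented as a list of rows; it is a minimal stand-in for the module's actual masking pipeline, with all parameter values assumed.

```python
def pixelate_region(img, box, block=4):
    # Replace the region box = (x1, y1, x2, y2) with per-block averages.
    # The coarse silhouette stays visible; fine detail cannot be recovered.
    x1, y1, x2, y2 = box
    out = [row[:] for row in img]          # leave the input image untouched
    for by in range(y1, y2, block):
        for bx in range(x1, x2, block):
            ys = range(by, min(by + block, y2))
            xs = range(bx, min(bx + block, x2))
            mean = sum(img[y][x] for y in ys for x in xs) / (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

img = [[float(x + 10 * y) for x in range(8)] for y in range(8)]
masked = pixelate_region(img, (0, 0, 4, 4), block=4)
```

Pixels inside the box collapse to their block mean while pixels outside are untouched, which is why the masked stream remains usable for fall recognition yet safe to transmit.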

According to a fifth embodiment of the present invention, a fall and posture recognition method with safe care and high privacy processing is shown in FIG. 18 and FIG. 19, in which the image processing unit 2 further includes a nighttime fall and posture model 24. After step S101, as in step S1011, the nighttime fall and posture model 24 enhances the body image data through fuzzy theory (Fuzzy Theory) analysis and intelligent recognition.

The fall and posture recognition system further includes a communication unit 4 (for example, a mobile phone App or Line), which is signal-connected to the fall identification unit 3. After step S104, as in step S1041, the caregiver sends a care inquiry message to each care recipient through the communication unit 4; in an emergency, the caregiver can also view the care recipient's on-site situation from the App or Line at the same time and conduct a two-way call.

Referring also to FIG. 20 and FIG. 21, in this example the care-recipient analysis module 32 includes an occupancy anomaly detection unit 331, a prolonged-state anomaly detection unit 332, a daily-routine statistics unit 333, and a history browsing unit 334. After step S1023, the following steps may also be included. In step S1301, the occupancy anomaly detection unit 331 judges, at a first predetermined time, whether the care recipient is in the predetermined area. The present invention further integrates the human posture recognition and electronic fence techniques disclosed above so that, when someone enters the care recipient's home space, this state is automatically shown on the system and a prompt message is pushed to the caregiver's mobile App and Line.

In step S1302, the prolonged-state anomaly detection unit 332 judges, outside a second predetermined time (for example, sleeping hours), whether the care recipient has remained motionless, or has remained in the area for too long. To pay closer attention to the care recipient's activity state in the various situations of the living space, and to ensure that the care recipient remains in a stable and healthy state of activity, the present invention further uses the aforementioned human posture recognition techniques to detect the anomaly of the care recipient staying in a specific state, or holding a fixed action, for too long.
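The prolonged-state check above can be sketched as a run-length test over the stream of recognized pose labels, skipping samples inside the configured sleep window. The 30-minute limit, the 22:00–07:00 night window, and the sampling cadence are illustrative assumptions, not values from the patent.

```python
def immobile_too_long(poses, timestamps, limit_s=1800, night=(22, 7)):
    # poses: chronological pose labels; timestamps: seconds since midnight.
    # True when the same pose persists longer than limit_s outside night hours.
    def is_night(t):
        h = (t // 3600) % 24
        start, end = night
        return h >= start or h < end
    run_start = None
    for pose, t in zip(poses, timestamps):
        if is_night(t):
            run_start = None          # sleeping hours: do not count immobility
            continue
        if run_start is None or pose != last:
            run_start, last = t, pose # a new run of an unchanged pose begins
        elif t - run_start > limit_s:
            return True
    return False
```

A daytime run of an unchanged pose past the limit raises the anomaly, while the same run during the night window is ignored, matching the "second predetermined time" exclusion in the step.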

In step S1303, the daily-routine statistics unit 333 continuously and automatically records the home daily-routine state of each care recipient. To pay closer attention to the care recipient's home life and daily routine, the present invention further integrates the human posture recognition techniques disclosed above to automatically and continuously record the care recipient's home daily-routine state (such as the activity states of standing, sitting, and lying postures, together with their analysis) and display it on the system, so that the caregiver can stay informed and track observations at any time, and so that the care recipient's everyday physiological health can be evaluated.
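The stand/sit/lie record described above can be summarized as time-per-posture totals from the stream of recognized labels. The sketch below assumes one label per minute; the labels and sampling interval are illustrative, not the unit's actual data format.

```python
from collections import Counter

def posture_summary(samples, dt_s=60):
    # samples: chronological pose labels, one per dt_s seconds.
    # Returns {pose: (total seconds, percent of the period)} - the kind of
    # daily-routine record the statistics unit displays for the caregiver.
    counts = Counter(samples)
    return {pose: (n * dt_s, round(100.0 * n / len(samples), 1))
            for pose, n in counts.items()}
```

Aggregating these summaries day by day gives the long-term routine trend used to gauge the care recipient's everyday physiological health.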

Finally, in step S1304, the history browsing unit 334 allows the home daily-routine state of each care recipient to be browsed. To track the care recipient's long-term routine at the same time, the present invention further integrates the human posture recognition techniques disclosed above and automatically records video of the care recipient's daily routine over long periods (for example, storing and allowing browsing of 30 days of historical activity). The fall and posture recognition method with safe care and high privacy processing of the present case uses the image capture unit to provide intelligent video care for each care recipient and improves the accuracy of fall judgment through intelligent learning. Through the recognition of the intelligent visual electronic fence module, the fence range can be set around places the care recipient rarely goes (such as idle space where sundries are stored), so that an immediate notification is issued if the care recipient enters the fenced range, where danger is likely. In addition, the visual privacy leakage prevention algorithm module provides privacy shielding for care recipients and environmental objects, reduces medical staffing costs, and achieves all of the above objectives.

In its objectives, means, and effects, the present invention exhibits characteristics entirely different from the prior art and represents a major breakthrough. Note, however, that the above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit its scope. Those skilled in the art may modify and vary the embodiments without departing from the technical principles and spirit of the present invention. The scope of protection of the present invention shall be as set forth in the claims below.

1: Image capture unit
2: Image processing unit
21: Illumination change analysis module
22: Ambient light source description module
23: Intelligent visual electronic fence module
24: Visual privacy leakage prevention algorithm module
3: Fall identification unit
4: Communication unit
31: Care-recipient classification module
311: Spatio-temporal filtering sub-module
32: Care-recipient analysis module
321: Geometric analysis sub-module
322: Curvature algorithm sub-module
323: Shape algorithm sub-module
324: Human body local configuration algorithm sub-module
325: Optical flow algorithm sub-module
326: Bag-of-words algorithm sub-module
327: Human skeleton and joint point sub-module
328: Machine learning sub-module
329: Fall detection algorithm sub-module
320: Scale-invariant feature transform algorithm module
331: Occupancy anomaly detection unit
332: Prolonged-state anomaly detection unit
333: Daily-routine statistics unit
334: History browsing unit
S101~S104, S1021~S1023, S1031~S1034, S10341, S10342, S101', S1011'', S1012'': Steps

FIG. 1 is a block diagram of a fall and posture recognition system for a fall and posture recognition method with safe care and high privacy processing according to a first embodiment of the present case;
FIG. 2 is a flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 1;
FIG. 3 is a block diagram of a fall and posture recognition system for a fall and posture recognition method with safe care and high privacy processing according to a second embodiment of the present case;
FIG. 4 is a flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 3;
FIG. 5 is a block diagram of the care-recipient analysis module of the fall and posture recognition system of FIG. 3;
FIG. 6 and FIG. 7 are schematic diagrams of curvature changes in the curvature algorithm sub-module of the fall and posture recognition system of FIG. 3;
FIG. 8 is a schematic diagram of the shape description of the shape algorithm sub-module of the fall and posture recognition system of FIG. 3;
FIG. 9 is a further extended block diagram of the care-recipient analysis module of the fall and posture recognition system of FIG. 3;
FIG. 10 is an extended flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 4;
FIG. 11 is a distribution curve diagram of the bag-of-words algorithm sub-module of the shape algorithm sub-module of the fall and posture recognition system of FIG. 3;
FIG. 12 is a schematic flow diagram of the machine learning sub-module of the fall and posture recognition system of FIG. 3;
FIG. 13 is a block diagram of a fall and posture recognition system for a fall and posture recognition method with safe care and high privacy processing according to a third embodiment of the present case;
FIG. 14 is a flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 13;
FIG. 15 is a schematic diagram of the electronic fence block deployed by the intelligent visual electronic fence module of the fall and posture recognition system of FIG. 13;
FIG. 16 is a block diagram of a fall and posture recognition method with safe care and high privacy processing and its posture recognition system according to a fourth embodiment of the present case;
FIG. 17 is a flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 16;
FIG. 18 is a block diagram of a fall and posture recognition method with safe care and high privacy processing and its posture recognition system according to a fifth embodiment of the present case;
FIG. 19 is a partial flow chart of the fall and posture recognition method with safe care and high privacy processing of FIG. 18;
FIG. 20 is a block diagram of the care-recipient analysis module of the fall and posture recognition system of FIG. 17;
FIG. 21 is a flow chart continuing from the care-recipient analysis module of FIG. 20; and
FIG. 22 and FIG. 23 are schematic diagrams of conventional fall detection.

S101~S104: Steps

Claims (11)

A fall and posture recognition method with safe care and high privacy processing, used in a fall and posture recognition system for recognizing and distinguishing the physical state of at least one care recipient in at least one area, the fall and posture recognition system comprising an image capture unit, an image processing unit, and a fall identification unit, the image capture unit being disposed corresponding to the areas and arranged to photograph the care recipients, the image processing unit being signal-connected to the image capture unit, and the fall identification unit being signal-connected to the image processing unit, wherein the fall and posture recognition method comprises the following steps:
a) the image capture unit captures body images of the care recipients to generate body image data corresponding to each care recipient;
b) the image processing unit receives the body image data and performs image quality and noise preprocessing on the body image data; and
c) the fall identification unit receives the preprocessed body image data and, from changes in human posture, judges whether the care recipients have fallen; if a care recipient has fallen or is in another abnormal state, an immediate notification message is issued.
2. The fall and posture identifying method as claimed in claim 1, wherein the image processing unit comprises an illumination-change analysis module and an ambient-light-source description module, and step b) comprises the following steps:
b1) the illumination-change analysis module detects changes of the ambient light source in the area to generate real-time information describing the ambient light source of the scene;
b2) the ambient-light-source description module promptly corrects and builds a model of the surrounding ambient light source according to the real-time light-source information; and
b3) according to the surrounding-ambient-light-source model, the image processing unit searches the area and detects and identifies the multiple care recipients present in the environment.
3. The fall and posture identifying method as claimed in claim 2, wherein the fall identification unit comprises a care-recipient classification module and a care-recipient analysis module, and step c) comprises the following steps:
c1) based on the search, detection and discrimination of the area by the image processing unit, the care-recipient classification module identifies the care recipients in the environments of the area, and builds corresponding skeleton points and a target rectangular tracking frame for each identified care recipient;
c2) the care-recipient classification module judges whether any identified person in the environments matches the feature-extraction and identification information of the care recipients; if so, it builds a corresponding candidate box of the possible object; and
c3) the care-recipient analysis module analyzes and identifies the posture, gait and/or behavior of each care recipient according to the feature-extraction and identification information.
4. The fall and posture identifying method as claimed in claim 3, wherein the care-recipient classification module further comprises a spatio-temporal filtering sub-module, and step c1) comprises a step c11) in which the spatio-temporal filtering sub-module provides suitable logic and mechanisms, mask values and feature thresholds to compute specific features of the identified care recipients in the environments.
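The illumination-change analysis and light-source model of claim 2 (steps b1/b2) amount to tracking scene brightness and correcting a running model when it shifts. A minimal sketch, assuming a simple exponentially weighted brightness model (the class name, smoothing factor and threshold are all assumptions):

```python
class AmbientLightModel:
    """Running model of scene brightness for steps b1) and b2).

    Keeps an exponentially weighted average of the mean frame brightness;
    a frame deviating strongly from the model signals an ambient-light
    change, and the model is then corrected toward the new level.
    """
    def __init__(self, alpha=0.1, change_thresh=30.0):
        self.alpha = alpha                   # correction rate for b2)
        self.change_thresh = change_thresh   # deviation that counts as a change
        self.level = None                    # modeled ambient brightness

    def update(self, frame):
        """frame: 2-D list of grayscale pixel values. Returns True on change."""
        mean = sum(sum(row) for row in frame) / sum(len(row) for row in frame)
        if self.level is None:
            self.level = mean                # first frame seeds the model
            return False
        changed = abs(mean - self.level) > self.change_thresh
        # b2) promptly correct the model toward the observed level
        self.level = (1 - self.alpha) * self.level + self.alpha * mean
        return changed
```

Detection in step b3) can then normalize each frame against `level` before searching for people, so a switched-on lamp is not mistaken for motion.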
5. The fall and posture identifying method as claimed in claim 3, wherein the care-recipient analysis module comprises a geometric-analysis sub-module, a curvature-algorithm sub-module, a shape-algorithm sub-module and a human-body partial-configuration algorithm sub-module, and step c3) comprises a step c31) in which the geometric-analysis sub-module analyzes and identifies the shapes, sizes and/or size changes of the identified care recipients in the environments, the curvature-algorithm sub-module identifies and computes the posture changes of the specific curvature of each care recipient, the shape-algorithm sub-module computes and compares the specific body posture of each care recipient, and the human-body partial-configuration algorithm sub-module computes the individual body parts of each care recipient.
6. The fall and posture identifying method as claimed in claim 5, wherein the care-recipient analysis module further comprises an optical-flow algorithm sub-module, a bag-of-words algorithm sub-module, a human-skeleton-and-joint-point sub-module, a machine-learning sub-module, a fall-detection algorithm sub-module and a scale-invariant feature transform (SIFT) algorithm module, and step c3) comprises a step c32), after step c31), in which: the optical-flow algorithm sub-module performs motion detection and computes time-to-collision and object expansion on the images of each care recipient; the bag-of-words algorithm sub-module encodes and classifies multiple feature-vector groups corresponding to the postures of the fall process; the human-skeleton-and-joint-point sub-module collects the computation results of the curvature-algorithm, shape-algorithm and partial-configuration sub-modules to generate detailed state information of all activities and postures of each care recipient; the machine-learning sub-module performs deep learning with neural networks to train, adjust and optimize the judgment of the fall identification unit; the fall-detection algorithm sub-module collects the computation results of the curvature-algorithm, shape-algorithm, partial-configuration, optical-flow and bag-of-words sub-modules to build a fall detection and judgment model; and the SIFT algorithm module performs image compensation for the changing distance of each care recipient within the area.
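The bag-of-words sub-module of claim 6 encodes the posture features of a fall sequence against a codebook. A minimal sketch of that encoding, assuming a pre-built codebook of posture prototypes and squared Euclidean distance (both assumptions; the patent does not fix either):

```python
def bag_of_words_encode(features, codebook):
    """Encode a sequence's per-frame feature vectors as a normalized
    codeword histogram (the bag-of-words idea named in the claim).

    features: list of feature vectors, one per frame.
    codebook: list of prototype vectors ("visual words").
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hist = [0] * len(codebook)
    for f in features:
        # assign each frame's features to its nearest codeword
        nearest = min(range(len(codebook)), key=lambda i: dist2(f, codebook[i]))
        hist[nearest] += 1
    # normalize so sequences of different lengths are comparable
    total = len(features) or 1
    return [c / total for c in hist]
```

The resulting histogram is what a downstream classifier (the claimed fall detection and judgment model) would consume to separate fall sequences from ordinary movement.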
7. The fall and posture identifying method as claimed in claim 1, wherein the image processing unit comprises an intelligent visual electronic-fence module; the method further comprises a step a'), before step a), in which the intelligent visual electronic-fence module establishes an electronic fence block in each area captured by the image capture unit; and step c) comprises a step c1') of judging whether the care recipients are within the range of the electronic fence block or away from it; if a care recipient enters the range of the electronic fence block or leaves it, the care recipient is judged to show an abnormal phenomenon or behavior, and the instant notification message is issued.
8. The fall and posture identifying method as claimed in claim 1, wherein the image processing unit comprises an anti-visual-privacy-leakage algorithm module, and step b) comprises a step b1'') in which the anti-visual-privacy-leakage algorithm module performs feature-contour masking and dynamic-image-algorithm masking on the body image data of the care recipient, to generate a destructive human-image mask that corresponds to the care recipient and provides secure privacy shielding.
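The electronic-fence check of step c1') reduces to detecting inside/outside transitions of a tracked position against the fence block. A minimal sketch, assuming the block is an axis-aligned rectangle (the patent leaves the block's geometry unspecified):

```python
def fence_events(track, fence):
    """Step c1'): flag entry into / exit from an electronic fence block.

    track: list of (x, y) centroid positions of one care recipient.
    fence: (x_min, y_min, x_max, y_max) rectangle, an assumed shape.
    Returns a list of (frame_index, event) pairs, event in {"enter", "leave"};
    each event is where the instant notification message would be issued.
    """
    x0, y0, x1, y1 = fence
    inside = lambda p: x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    events = []
    for i in range(1, len(track)):
        was, now = inside(track[i - 1]), inside(track[i])
        if now and not was:
            events.append((i, "enter"))
        elif was and not now:
            events.append((i, "leave"))
    return events
```

Only the transition frames are reported, so a recipient who stays inside (or outside) the block produces no alerts.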
9. The fall and posture identifying method as claimed in claim 8, wherein step b1'') comprises a step b11'') of performing feature-contour masking and dynamic-image-algorithm masking on images of multiple objects in the area, to generate multiple destructive object-image masks that correspond to the objects and provide secure privacy shielding.
10. The fall and posture identifying method as claimed in claim 1, wherein the image processing unit 2 further comprises a night-time fall and posture model 24, and step a) comprises a step a1'') in which the night-time fall and posture model 24 strengthens the body image data through fuzzy-technique analysis and intelligent identification.
11. The fall and posture identifying method as claimed in claim 1, wherein the fall and posture identification system further comprises a communication unit signal-connected to the fall identification unit, and the method further comprises a step d) in which the communication unit sends a caring inquiry message to each care recipient, and, in an emergency, also allows the caregiver to learn the on-site condition of the care recipient and hold a two-way call.
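The "destructive mask" of claims 8 and 9 destroys identifying detail while leaving a coarse silhouette usable for posture analysis. A simple stand-in, assuming block pixelation of the person's region (block size and the function name are assumptions, not the patented feature-contour algorithm):

```python
def destructive_mask(frame, box, block=8):
    """Pixelate a region so identity cannot be recovered, while the coarse
    silhouette stays usable for posture analysis (claims 8/9, sketched).

    frame: 2-D list of grayscale values, modified in place.
    box: (x, y, w, h) region of the person or object to shield.
    """
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            cells = [(r, c)
                     for r in range(by, min(by + block, y + h, len(frame)))
                     for c in range(bx, min(bx + block, x + w, len(frame[0])))]
            if not cells:
                continue
            avg = sum(frame[r][c] for r, c in cells) // len(cells)
            for r, c in cells:
                frame[r][c] = avg    # every pixel in the block collapses
    return frame
```

Pixels outside the box are untouched, so the rest of the scene remains available to the identification pipeline.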
12. The fall and posture identifying method as claimed in claim 3, wherein the care-recipient analysis module 32 comprises an in-house occupancy anomaly detection unit 331, a prolonged-specific-state anomaly detection unit 332, a daily-routine statistics unit 333 and a history browsing unit 334, and step c3) comprises the following steps:
c31') the in-house occupancy anomaly detection unit 331 judges, at a first predetermined time, whether the care recipients are in the predetermined area;
c32') the prolonged-specific-state anomaly detection unit 332 judges whether the care recipients have remained motionless beyond a second predetermined time, or whether they have stayed in the area too long;
c33') the daily-routine statistics unit 333 continuously and automatically records the home daily-routine state of each care recipient; and
c34') the history browsing unit 334 allows browsing of the home daily-routine state of each care recipient.
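The prolonged-state check of step c32') is essentially a pair of dwell timers over the per-frame state stream. A minimal sketch, assuming boolean per-sample (moving, in_area) observations and illustrative thresholds:

```python
def prolonged_state_alerts(states, fps=1.0, max_still_s=1800, max_area_s=7200):
    """Step c32'): flag a care recipient who stays motionless, or stays in
    the monitored area, longer than allowed. Thresholds (30 min motionless,
    2 h in-area) are assumptions; the claim only names the two conditions.

    states: iterable of (moving: bool, in_area: bool) per-frame samples.
    Returns the set of alert labels raised over the sequence.
    """
    alerts = set()
    still = in_area = 0.0
    dt = 1.0 / fps
    for moving, inside in states:
        still = 0.0 if moving else still + dt      # reset on any movement
        in_area = in_area + dt if inside else 0.0  # reset on leaving
        if still > max_still_s:
            alerts.add("motionless_too_long")
        if in_area > max_area_s:
            alerts.add("in_area_too_long")
    return alerts
```

The same accumulated timers could also feed the daily-routine statistics unit 333, which records rather than alerts.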
TW111124986A 2022-07-04 2022-07-04 A fall and posture identifying method with safety caring and high identification handling TWI820784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111124986A TWI820784B (en) 2022-07-04 2022-07-04 A fall and posture identifying method with safety caring and high identification handling

Publications (2)

Publication Number Publication Date
TWI820784B TWI820784B (en) 2023-11-01
TW202403664A true TW202403664A (en) 2024-01-16

Family

ID=89722180

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111124986A TWI820784B (en) 2022-07-04 2022-07-04 A fall and posture identifying method with safety caring and high identification handling

Country Status (1)

Country Link
TW (1) TWI820784B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220139188A1 (en) * 2016-05-31 2022-05-05 Ellcie-Healthy Personal system for the detection of a fall or a fall prone situation
TWI662514B (en) * 2018-09-13 2019-06-11 緯創資通股份有限公司 Falling detection method and electronic system using the same
TW202127389A (en) * 2020-01-14 2021-07-16 龍華科技大學 Smart home surveillance system including at least one imaging device, at least one light-adjustable window module having a control unit and a light-adjustable glass window, one light sensor and one controlling device
CN113409348A (en) * 2020-03-16 2021-09-17 上海数迹智能科技有限公司 Fall detection system and fall detection method based on depth image
