WO2021192399A1 - Behavior recognition server, behavior recognition system, and behavior recognition method - Google Patents

Behavior recognition server, behavior recognition system, and behavior recognition method

Info

Publication number
WO2021192399A1
WO2021192399A1 (PCT/JP2020/042057)
Authority
WO
WIPO (PCT)
Prior art keywords
sensor information
behavior
unit
observed person
sensor
Prior art date
Application number
PCT/JP2020/042057
Other languages
French (fr)
Japanese (ja)
Inventor
健太郎 佐野
大平 昭義
佐知 田中
卓男 姚
浩平 京谷
優佑 円谷
Original Assignee
株式会社日立製作所 (Hitachi, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Priority to CN202080064882.1A (CN114402575B)
Publication of WO2021192399A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02: Alarms for ensuring the safety of persons
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium
    • G08B 25/04: Alarm systems in which the location of the alarm condition is signalled to a central station, using a single signalling line, e.g. in a closed loop
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 11/00: Telephonic communication systems specially adapted for combination with other electrical systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
    • H04W 4/38: Services specially adapted for particular environments, situations or purposes for collecting sensor information

Definitions

  • the present invention relates to a behavior recognition server, a behavior recognition system, and a behavior recognition method.
  • IoT (Internet of Things)
  • Patent Document 1 describes a method of smoothing those reactions and associating them with one action even when the sensor reacts a plurality of times within one second according to the sensor reaction definition prepared in advance.
  • Various sensors are subject to erroneous detection, in which a person to be detected who is actually present is measured as not detected. For example, the infrared light emitted by a motion sensor for detection may be blocked depending on the illumination or other conditions in the room. As a result, the motion sensor overlooks a stationary person, and even a relaxing person who is present may be mistakenly recognized as absent.
  • The main object of the present invention is to suppress a reduction in recognition accuracy when the sensor information includes erroneous detection data.
  • the behavior recognition server of the present invention has the following features.
  • the present invention includes a sensor information acquisition unit that acquires sensor information indicating a detection result for each sensor from a set of sensors that detect an observed person.
  • a sensor information conversion unit that, based on the reaction time at which the observed person is detected in the time-series sensor information, converts the sensor information into a probability density function in the time direction whose maximum value lies at the reaction time,
  • a behavior classification unit that classifies the behavior of the observed person at each time based on the converted sensor information, and a behavior output unit that converts the classified behavior of the observed person into data and outputs it. Other means will be described later.
  • FIG. 1 is a configuration diagram of an action recognition system.
  • The behavior recognition system is configured so that the observer 3u remotely watches over the living state of the observed person 2u living at home 2h, using the observer terminal 3.
  • the behavior recognition server 1 recognizes the living state of the observed person 2u based on the sensor information acquired from various sensors 2, and notifies the observer terminal 3 of the recognition result.
  • The observer 3u who sees the display screen of the observer terminal 3 can grasp the living state of the observed person 2u.
  • The observed person 2u is, for example, a care recipient, and the observer 3u is, for example, a family member of the care recipient.
  • a behavior recognition system may be introduced in a hospital or a long-term care facility instead of the home 2h, in which case the observer 3u becomes a doctor or a care manager.
  • the sensor 2 may be, for example, a sensor incorporated in a home electric appliance such as a refrigerator 2a or an autonomous mobile vacuum cleaner 2b, or a single sensor such as a motion sensor 2c. It is desirable that the sensor 2 such as the motion sensor 2c is installed in a direction in which the measurement area does not face the entrance of the room. By this installation, it is possible to prevent the motion sensor 2c from erroneously detecting a person different from the observed person 2u passing through the corridor outside the room.
  • FIG. 2 is a hardware configuration diagram of the behavior recognition system.
  • The sensor 2 has a communication unit 121 that notifies other devices of the sensor information detected by the detection unit 122, a detection unit 122 that detects the observed person 2u, and a notification unit 123 that notifies the observed person 2u of messages and the like from the observer 3u.
  • The action recognition server 1 has a communication unit 111 that receives the sensor information from the sensor 2 and notifies the observer terminal 3 of the recognition result derived from that sensor information, a control unit 112 that recognizes the living state of the observed person 2u, and a storage unit 113 that stores data used in the processing of the control unit 112.
  • The observer terminal 3 has a communication unit 131 that receives the recognition result for the observed person 2u, a notification unit 132 that notifies the observer 3u of the recognition result for the observed person 2u, and an input unit 133 for entering messages and the like addressed to the observed person 2u.
  • the action recognition server 1 is configured as a computer having a CPU (Central Processing Unit) as an arithmetic unit (control unit 112), a memory as a main storage device, and a hard disk as an external storage device (storage unit 113).
  • In this computer, the CPU operates a control unit (control means) composed of the respective processing units by executing a program (also called an application, or app for short) loaded into the memory.
  • FIG. 3 is a configuration diagram showing details of the action recognition server 1.
  • The control unit 112 (FIG. 2) of the action recognition server 1 has a sensor information acquisition unit 11, a sensor information conversion unit 11T, a time information acquisition unit 12, an image conversion unit 13, an action classification unit 14, an action correction unit 15, a current action storage unit 16, and an action output unit 17.
  • the storage unit 113 (FIG. 2) of the action recognition server 1 stores the layout data 13L and the classification model 14m.
  • FIG. 4 is a flowchart showing the processing of the action recognition server 1.
  • the sensor information acquisition unit 11 acquires sensor information from the sensors 2 (refrigerator 2a, vacuum cleaner 2b, motion sensor 2c) installed in the home 2h (S101).
  • the data format of the sensor information may differ depending on the type of the sensor 2.
  • The sensor information conversion unit 11T receives sensor information in a discrete-value data format of 0 or 1 from the sensor information acquisition unit 11, and converts that discrete-value sensor information into sensor information expressed as a probability density function (S102; see FIGS. 5 to 9 described later).
  • As the function values of the probability density function, the sensor information conversion unit 11T takes, for example, the input data with the discrete value "1" at the time t when the sensor reacted, sets the function value at time t to the maximum value (for example, "1"), and creates output data to which function values less than the maximum value are also added in the time direction before and after time t (FIG. 5). The function values less than the maximum value are calculated by the sensor information conversion unit 11T so that the function value becomes smaller as the time difference from time t becomes larger.
  • the sensor information conversion unit 11T uses the input data, which is a data format other than the discrete value, as the output data as it is without conversion.
  • the image conversion unit 13 images a set of sensor information at a predetermined time based on the sensor information for each sensor 2 which is the output data of the sensor information conversion unit 11T (S103).
  • In the layout data 13L, which the image conversion unit 13 refers to at the time of conversion, information about the layout within the image, such as in which part of the image the sensor information of which sensor 2 is to be arranged, is defined in advance (FIGS. 10 and 11).
  • In addition to the set of sensor information, the image conversion unit 13 may acquire, via the time information acquisition unit 12, time information indicating the predetermined time at which the sensor information was measured, and may include that time information in what is imaged. If the sensor 2 includes a time stamp in the sensor information, the time information acquisition unit 12 acquires that time; if there is no time stamp, the reception time of the sensor information is used instead. Note that the image conversion process of the sensor information by the image conversion unit 13 may be omitted, and the behavior classification unit 14 may accept sensor information and time information that have not been imaged.
  • The behavior classification unit 14 classifies the behavior of the observed person 2u at that time from the image data representing the sensor information (S104). For this classification process, a classification model 14m is prepared in advance which, when image data is input, converts the corresponding behavior into data and outputs it. The classification model 14m is trained by a machine learning algorithm such as deep learning.
  • The behavior correction unit 15 corrects unnatural behaviors that have occurred only momentarily by referring, for each individual behavior output by the behavior classification unit 14, to the behaviors before and after it in time (described later with FIG. 13). Therefore, when there is a local change between the behavior of current interest (the current behavior) and the behaviors before and after it (S111, Yes), the behavior correction unit 15 corrects the local behavior so that it is consistent with the surrounding behaviors, and then stores the corrected behavior in the current behavior storage unit 16 (S112). On the other hand, when there is no local change (S111, No), the natural behavior is stored in the current behavior storage unit 16 as it is (S113).
  • The action output unit 17 outputs the action recognition result accumulated in the current action storage unit 16 to the outside (observer terminal 3).
  • the output destination of the action recognition result is not limited to the customer environment (observer terminal 3), and may be output to another system such as a database system or a cloud system.
  • FIG. 5 shows a time series graph of sensor information in a state where there is no detection omission.
  • Graph 211 is the sensor information input from the sensor information acquisition unit 11 to the sensor information conversion unit 11T.
  • the discrete value “1” indicating the detection of the observed person 2u at the reaction times t1 to t5 is included in the graph 211.
  • the graph 212 is the result of the sensor information conversion unit 11T converting the discrete value sensor information into the probability density function using the graph 211 as input data.
  • the sensor information conversion unit 11T receives the discrete value “1” of the reaction time t1 and converts it into a probability density function of the curve m1 having the reaction time t1 as a peak. Similarly, the sensor information conversion unit 11T creates a curve m2 at the reaction time t2, a curve m3 at the reaction time t3, a curve m4 at the reaction time t4, and a curve m5 at the reaction time t5, respectively.
  • As the distribution used when converting the sensor information into a probability density function, the sensor information conversion unit 11T can apply, for example, a normal distribution, a Student's t distribution, a U (Universal) distribution, or an arbitrary distribution used in other statistical fields.
  • Graph 213 integrates the overlapping sections between the curves of Graph 212.
  • Here, when a plurality of curves exist at the same time, the sensor information conversion unit 11T adopts the maximum value of those curves, but the sum of the curves may be adopted instead.
  • the value of the probability density function at each time is uniquely obtained in the graph 213.
  • Since the function values at the reaction times t1 to t5 are not "0" even after the conversion by the sensor information conversion unit 11T, correct detection results are not deleted by the sensor information conversion unit 11T.
  • FIG. 6 shows time-series graphs in which some detection omissions have occurred, derived from the time-series graphs of FIG. 5.
  • Graph 221 is sensor information input from the sensor information acquisition unit 11 to the sensor information conversion unit 11T.
  • At times t2 and t4, the discrete value becomes "0" due to detection omission, even though the observed person 2u is actually at home 2h.
  • At the remaining reaction times t1, t3, and t5, the discrete value "1" is correctly detected, as in FIG. 5.
  • Graph 222 is the result of the sensor information conversion unit 11T converting the sensor information of discrete values into a probability density function using the graph 221 as input data.
  • In graph 222, the curve m2 at time t2 and the curve m4 at time t4 are missing, compared to graph 212 of FIG. 5.
  • Graph 223 is a combination of overlapping sections between the curves of Graph 222, similar to Graph 213 of FIG.
  • Focusing on time t2, the sensor information (function value) at time t2 is not "0" but is influenced by the probability density functions (curves m1 and m3) from the temporally nearby times t1 and t3.
  • Similarly, the function value at time t4 is influenced by the probability density function (curve m5) from the temporally nearby time t5. In this way, even if detection omissions occur at times t2 and t4, the omissions can be relieved by converting other temporally nearby signals into probability density functions.
  • FIG. 7 shows time-series graphs in which probability density functions other than a curve are applied to the same input data as the time-series graphs of FIG. 5. Like graph 211, graph 231 contains the discrete value "1" indicating detection of the observed person 2u at each of the times t1 to t5.
  • Graph 232 is the result of the sensor information conversion unit 11T converting graph 231, as input data, into linear-approximation probability density functions that peak at the times t1 to t5 of the discrete value "1". Linear approximation requires less computation. Besides linear approximation, the sensor information conversion unit 11T may also use the curve approximation shown in FIG. 5, a polynomial approximation (not shown), or the like.
  • Graph 233 is the result of the sensor information conversion unit 11T converting graph 231, as input data, into random values in predetermined ranges.
  • The range that the random value can take differs depending on whether the discrete value of the input data is "0" or "1": a discrete value "0" in the input data becomes output data "a random value in the range 0 to 0.3", and a discrete value "1" becomes output data "a random value in the range 0.7 to 1.0".
  • In this way, detection omissions may be relieved even during periods in which no temporally nearby discrete value "1" exists.
  • FIG. 8 is a graph when the probability density function is applied to the spatial axis.
  • In FIGS. 5 to 7, the sensor information conversion unit 11T artificially created detection signals around the time at which the discrete value "1" of the input data occurred, by applying the probability density function to the time axis.
  • Similarly, the sensor information conversion unit 11T may apply the probability density function to the spatial axis, thereby creating pseudo detection signals also for places (bedroom, kitchen) located around the place (living room) where the discrete value "1" of the input data occurred.
  • FIG. 9 is a plan view showing a specific example of the space to which the graph of FIG. 8 is applied.
  • FIG. 10 is an explanatory diagram showing an example of layout data 13L used by the image conversion unit 13 for imaging processing.
  • In the layout data 13L, the data contents to be written at each position in square image data of 12 cells vertically by 12 cells horizontally are arranged as in-figure symbols such as "T" and "ACC1".
  • A "cell" is the smallest unit into which the image area is subdivided, and at least one cell of writing area is assigned to each item of sensor information and time information.
  • FIG. 11 is an explanatory table of the layout data 13L of FIG.
  • The uppermost "T" in FIG. 10 corresponds to the in-figure symbol "T" in the first row, "time", of FIG. 11.
  • The image data arranged at the position of the uppermost "T" in FIG. 10 is the time data acquired by the time information acquisition unit 12. That is, the single image shown in FIG. 12 is the result of aggregating and visualizing, as one image, the set of sensor information measured at the same measurement time (the time data of "T") by the sensors 2 arranged at the respective locations.
  • The types of sensor 2 whose sensor information the sensor information conversion unit 11T converts into a probability density function in S102 include, for example, sensors that detect motions of the observed person 2u, such as acceleration sensors and (door) open/close sensors, and sensors that detect the presence of the observed person 2u, such as motion sensors.
  • The third column of the explanatory table, "number of cells", indicates the size of the writing area.
  • When the amount of data to be written is smaller than the amount of data the writing area can express, part of the writing area is left over. In that case, the image conversion unit 13 fills the cells in the image by copying and writing the same data content to a plurality of locations.
  • The number of cells in the layout data 13L indicates the weight among the pieces of information to be written; the more cells are allocated, the greater the influence on the classified behavior.
  • The distribution of the number of cells is determined, for example, by the following policy. Since human life has habitual actions that depend on the time of day, such as going out during the day and sleeping at night, the time information "T" is allocated more cells (24 cells) than the other sensor information. Since the actions a person can take are narrowed down to some extent by where the person is, the sensor information (location information) of the motion sensors "HM1 to HM5" is allocated more cells (12 cells) than the other sensor information.
  • Since human life also has a habit of taking the same actions depending on the day of the week, such as going to work on weekdays and resting at home on holidays, the day-of-week information "DoW" is allocated more cells (12 cells) than the sensor information that measures the environment of the home 2h. As sensor information that detects human motion, the acceleration sensors "ACC1 to ACC4" and the open/close sensors "OC1 to OC3" are allocated more cells (4 cells) than the sensor information that measures the environment of the home 2h.
  • the fourth column "value" of the explanatory table indicates the data content to be written in the writing area.
  • the value "0.31" of the time “T” indicates 7:40 am when 0:00 is the value "0.00” and 23:59 is the value "1.00".
  • The day of the week is selected from seven values, with Monday set to the value "0.00" and Sunday set to the value "1.00".
  • The above-mentioned "value" is a value in an arbitrary range based on the value of each piece of sensor information. In addition to referring to the color corresponding to the value of each piece of sensor information as described above, this also includes the case of using the value of each piece of sensor information as it is.
  • The "humidity" value "0.66, 0.57, 0.64, 0.58, 0.7" indicates, in order from the left, the value "0.66" of the first humidity sensor, the value "0.57" of the second humidity sensor, ..., and the value "0.7" of the fifth humidity sensor.
  • The layout data 13L described above is an example in which sensor information of the same type is arranged close together in the image. Alternatively, sensor information from sensors installed in the same location (room) may be arranged close together in the image.
  • FIG. 12 is an explanatory diagram of image data as a result of writing the “value” of FIG. 11 with respect to the layout data 13L of FIG.
  • symbols in the figure such as “T” and “ACC1” are also shown for the sake of clarity, but in reality, the symbols in the figure are omitted from the image.
  • the image conversion unit 13 writes black indicating the value “0” in the writing area of “ACC1”.
  • the image conversion unit 13 writes white indicating the value "1" in the writing area of "HM4". That is, the larger the value to be written, the closer to white.
  • The classification model 14m is defined by associating the image data created by the image conversion unit 13 with the behavior of the observed person 2u (here, "returning home") represented by the situation that the image data indicates.
  • The behavior classification unit 14 refers to the classification models 14m registered in the past, and when image data matching or similar to the image data of a classification model 14m is detected for the current observed person 2u, it outputs the corresponding behavior of that classification model 14m ("returning home") as the classification result (S104).
  • a person such as the observer 3u may teach a meaningful action label such as "going home” or "resting".
  • an action label such as "behavior A” or “behavior B” automatically classified by machine learning may be used, which is simply a group of similar actions that have no meaning.
  • FIG. 13 is a time series graph showing the processing contents of the action correction unit 15.
  • Graph 241 shows the output data of the behavior classification unit 14 before correction. In graph 241, the observed person 2u is basically detected as being out, but suppose that a bathing behavior of 5 minutes (ΔT1) is detected at 10:00 and a cleaning behavior of 3 minutes (ΔT2) is detected at 15:00.
  • Graph 242 shows the output data of the behavior correction unit 15 after correction. When a behavior different from the preceding and following behaviors is suddenly detected, the behavior correction unit 15 corrects the differing behavior so that it matches the preceding and following behaviors (S112).
  • When the period of such a differing behavior is shorter than a predetermined period Th, the behavior correction unit 15 determines that it is an unnatural behavior to be corrected. As a result, the bathing behavior at 10:00 and the cleaning behavior at 15:00 are each corrected to the same "going out" behavior as before and after them.
  • the action correction unit 15 may refer not only to the period of action but also to the type of action as a method of detecting an unnatural action to be corrected.
  • The behavior correction unit 15 may also treat as a correction target a behavior (going out) that would be unnatural to occur immediately after (one minute after) the preceding behavior (relaxing).
  • The behavior correction unit 15 may change the predetermined period Th used for comparison depending on the type of the differing behavior when deciding whether to correct a behavior that differs from those before and after it. For example, a bathing behavior is corrected as unnatural if it lasts less than 20 minutes (predetermined period Th1), while a cleaning behavior is corrected as unnatural if it lasts less than 5 minutes (predetermined period Th2).
  • There is also a method of improving the accuracy of behavior recognition by shortening the time interval of behavior detection, but such a method leads to complicated control.
  • In contrast, the sensor information conversion unit 11T can relieve detection omissions by using probability density functions based on sensor information that is nearby on the time axis or on the spatial axis. As a result, it is possible to suppress a decrease in recognition accuracy from sensor information that includes erroneous detection data.
  • the present invention is not limited to the above-described embodiment, and includes various modifications.
  • the above-described embodiment has been described in detail in order to explain the present invention in an easy-to-understand manner, and is not necessarily limited to those having all the described configurations.
  • it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations, functions, processing units, processing means and the like may be realized by hardware by designing a part or all of them by, for example, an integrated circuit.
  • each of the above configurations, functions, and the like may be realized by software by the processor interpreting and executing a program that realizes each function.
  • control lines and information lines indicate those that are considered necessary for explanation, and do not necessarily indicate all the control lines and information lines in the product. In practice, it can be considered that almost all configurations are interconnected.
  • the communication means for connecting each device is not limited to the wireless LAN, and may be changed to a wired LAN or other communication means.
  • 1 Behavior recognition server, 2 Sensor, 2u Observed person, 3 Observer terminal, 11 Sensor information acquisition unit, 11T Sensor information conversion unit, 12 Time information acquisition unit, 13 Image conversion unit, 13L Layout data, 14 Behavior classification unit, 14m Classification model, 15 Behavior correction unit, 16 Current behavior storage unit, 17 Behavior output unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Telephonic Communication Services (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Image Processing (AREA)

Abstract

A behavior recognition server (1) has: a sensor information acquisition unit (11) that acquires, from a collection of sensors (2) for detecting an observee (2u), sensor information of each of the sensors (2); a sensor information conversion unit (11T) that, on the basis of the reaction time at which the observee (2u) is detected in the chronological sensor information, converts the sensor information into a probability density function in the time direction whose maximum value lies at the reaction time; a behavior classification unit (14) that classifies the behavior of the observee (2u) at each time on the basis of the converted sensor information; and a behavior output unit (17) that converts the classified behavior of the observee (2u) into data and outputs the data.

Description

Behavior recognition server, behavior recognition system, and behavior recognition method
The present invention relates to a behavior recognition server, a behavior recognition system, and a behavior recognition method.
In recent years, high-performance sensors connected to the Internet have become widespread as IoT (Internet of Things) devices. Attempts are being made to collect large amounts of sensor information as big data from the many sensors installed in environments such as homes, and to extract useful information by analyzing that big data.
Sensor information can generally be measured at a much higher frequency than the duration of a human action. Therefore, if one sensor reaction is directly mapped to one human action, unrealistic actions such as "stood up three times in one second" may be recognized.
Patent Document 1 therefore describes a method in which, even when a sensor reacts a plurality of times within one second, those reactions are smoothed and associated with a single action according to a sensor reaction definition prepared in advance.
Japanese Unexamined Patent Publication No. 2004-145820
Various sensors are subject to erroneous detection, in which a person to be detected who is actually present is measured as not detected. For example, the infrared light emitted by a motion sensor for detection may be blocked depending on the illumination or other conditions in the room. As a result, the motion sensor may overlook a stationary person, so that even a relaxing person who is present may be mistakenly recognized as absent.
Therefore, when an elderly care recipient living at home is to be monitored, it is necessary to recognize that person's behavior accurately even if a sensor momentarily malfunctions. However, conventional techniques such as Patent Document 1 do not consider the influence of sensor malfunctions.
Accordingly, the main object of the present invention is to suppress a reduction in recognition accuracy when the sensor information includes erroneous detection data.
In order to solve the above problems, the behavior recognition server of the present invention has the following features.
The present invention comprises: a sensor information acquisition unit that acquires, from a set of sensors that detect an observed person, sensor information indicating the detection result of each sensor;
a sensor information conversion unit that, based on the reaction time at which the observed person is detected in the time-series sensor information, converts the sensor information into a probability density function in the time direction whose maximum value lies at the reaction time;
a behavior classification unit that classifies the behavior of the observed person at each time based on the converted sensor information; and
a behavior output unit that converts the classified behavior of the observed person into data and outputs it.
Other means will be described later.
According to the present invention, it is possible to suppress a decrease in recognition accuracy from sensor information that includes erroneous detection data.
FIG. 1 is a configuration diagram of a behavior recognition system according to an embodiment of the present invention.
FIG. 2 is a hardware configuration diagram of the behavior recognition system according to the embodiment.
FIG. 3 is a configuration diagram showing details of the behavior recognition server according to the embodiment.
FIG. 4 is a flowchart showing the processing of the behavior recognition server according to the embodiment.
FIG. 5 is a time-series graph of sensor information with no detection omissions, according to the embodiment.
FIG. 6 is a time-series graph in which some detection omissions have occurred, derived from the time-series graph of FIG. 5.
FIG. 7 is a time-series graph in which probability density functions other than a curve are applied, according to the embodiment.
FIG. 8 is a graph in which the probability density function is applied to the spatial axis, according to the embodiment.
FIG. 9 is a plan view showing a specific example of the space to which the graph of FIG. 8 is applied.
FIG. 10 is an explanatory diagram showing an example of the layout data according to the embodiment.
FIG. 11 is an explanatory table for the layout data according to the embodiment.
FIG. 12 is an explanatory diagram of the image data according to the embodiment.
FIG. 13 is a time-series graph showing the processing of the behavior correction unit according to the embodiment.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a configuration diagram of the behavior recognition system.
The behavior recognition system is configured so that an observer 3u remotely watches over the living state of an observed person 2u living at home 2h, using an observer terminal 3. The behavior recognition server 1 recognizes the living state of the observed person 2u based on sensor information acquired from various sensors 2, and notifies the observer terminal 3 of the recognition result. This allows the observer 3u, looking at the display screen of the observer terminal 3, to grasp the living state of the observed person 2u.
The observed person 2u is, for example, a care recipient, and the observer 3u is, for example, a family member of the care recipient. Alternatively, the behavior recognition system may be introduced in a hospital or a long-term care facility instead of the home 2h, in which case the observer 3u is a doctor or a care manager.
In the home 2h, various sensors 2 for monitoring the behavior of the observed person 2u are connected to a network. A sensor 2 may be, for example, a sensor incorporated in a home electric appliance such as a refrigerator 2a or an autonomous mobile vacuum cleaner 2b, or a stand-alone sensor such as a motion sensor 2c.
It is desirable that a sensor 2 such as the motion sensor 2c be installed so that its measurement area does not face the entrance of the room. This installation prevents the motion sensor 2c from erroneously detecting a person other than the observed person 2u passing along the corridor outside the room.
FIG. 2 is a hardware configuration diagram of the behavior recognition system.
The sensor 2 has a communication unit 121 that notifies other devices of the sensor information detected by the detection unit 122, a detection unit 122 that detects the observed person 2u, and a notification unit 123 that notifies the observed person 2u of messages and the like from the observer 3u.
The behavior recognition server 1 has a communication unit 111 that receives the sensor information from the sensor 2 and notifies the observer terminal 3 of the recognition result derived from that sensor information, a control unit 112 that recognizes the living state of the observed person 2u, and a storage unit 113 that stores data used in the processing of the control unit 112.
The observer terminal 3 has a communication unit 131 that receives the recognition result for the observed person 2u, a notification unit 132 that notifies the observer 3u of the recognition result for the observed person 2u, and an input unit 133 for entering messages and the like addressed to the observed person 2u.
The behavior recognition server 1 is configured as a computer having a CPU (Central Processing Unit) as an arithmetic device (control unit 112), a memory as a main storage device, and a hard disk as an external storage device (storage unit 113).
In this computer, the CPU operates a control unit (control means) composed of the respective processing units by executing a program (also called an application, or app for short) loaded into the memory.
FIG. 3 is a configuration diagram showing details of the behavior recognition server 1.
The control unit 112 (FIG. 2) of the behavior recognition server 1 has a sensor information acquisition unit 11, a sensor information conversion unit 11T, a time information acquisition unit 12, an image conversion unit 13, a behavior classification unit 14, a behavior correction unit 15, a current behavior storage unit 16, and a behavior output unit 17.
The storage unit 113 (FIG. 2) of the behavior recognition server 1 stores layout data 13L and a classification model 14m.
Hereinafter, the details of the components of FIG. 3 will be described along the flowchart of FIG. 4.
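As an editorial illustration of the component structure of FIG. 3 (not part of the patent text; the class and attribute names are assumptions), the processing units and stored data could be sketched in Python as follows.

```python
from dataclasses import dataclass, field

# Illustrative skeleton of the behavior recognition server 1 (FIG. 3).
# Each attribute corresponds to a processing unit or stored data item; the
# concrete unit implementations are placeholders, not code from the patent.
@dataclass
class BehaviorRecognitionServer:
    sensor_information_acquisition_unit: object = None      # 11
    sensor_information_conversion_unit: object = None       # 11T
    time_information_acquisition_unit: object = None        # 12
    image_conversion_unit: object = None                    # 13
    behavior_classification_unit: object = None             # 14
    behavior_correction_unit: object = None                 # 15
    current_behavior_storage_unit: list = field(default_factory=list)   # 16
    behavior_output_unit: object = None                     # 17
    layout_data: dict = field(default_factory=dict)         # 13L
    classification_model: list = field(default_factory=list)  # 14m

server = BehaviorRecognitionServer()
print(server.current_behavior_storage_unit)  # starts empty
```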
FIG. 4 is a flowchart showing the processing of the behavior recognition server 1.
The sensor information acquisition unit 11 acquires sensor information from the sensors 2 (refrigerator 2a, vacuum cleaner 2b, motion sensor 2c) installed in the home 2h (S101). The data format of the sensor information may differ depending on the type of the sensor 2.
The sensor information conversion unit 11T receives sensor information in a discrete-value data format of 0 or 1 from the sensor information acquisition unit 11, and converts that discrete-value sensor information into sensor information expressed as a probability density function (S102; see FIGS. 5 to 9 described later).
As the function values of the probability density function, the sensor information conversion unit 11T takes, for example, the input data with the discrete value "1" at the time t when the sensor reacted, sets the function value at time t to the maximum value (for example, "1"), and creates output data to which function values less than the maximum value are also added in the time direction before and after time t (FIG. 5). The function values less than the maximum value are calculated by the sensor information conversion unit 11T so that the function value becomes smaller as the time difference from time t becomes larger.
On the other hand, the sensor information conversion unit 11T passes input data whose data format is other than discrete values through to the output data as it is, without conversion.
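As a concrete illustration of this conversion in S102 (a minimal sketch added editorially, not code from the patent; the function name and the sigma parameter are assumptions), the following Python snippet places a bell-shaped kernel with peak value 1 at every time step where a 0/1 series contains a reaction, so that the value decreases as the time difference grows, and keeps the maximum where kernels overlap (the sum is also possible, as noted later for graph 213).

```python
import numpy as np

def to_probability_series(discrete, sigma=2.0, combine="max"):
    """Convert a 0/1 detection series into a probability-density-like series."""
    n = len(discrete)
    t = np.arange(n, dtype=float)
    out = np.zeros(n)
    for peak in np.flatnonzero(np.asarray(discrete) == 1):
        # Normal-distribution-shaped kernel whose value is 1 at the reaction time
        kernel = np.exp(-0.5 * ((t - peak) / sigma) ** 2)
        out = np.maximum(out, kernel) if combine == "max" else out + kernel
    return out

# Example: reactions at three times; the time steps between them still receive
# non-zero values from the neighboring kernels, which relieves detection omissions.
series = np.zeros(16, dtype=int)
series[[2, 6, 13]] = 1
print(np.round(to_probability_series(series), 2))
```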
The image conversion unit 13 converts the set of sensor information at a predetermined time into an image, based on the per-sensor sensor information that is the output data of the sensor information conversion unit 11T (S103). In the layout data 13L, which the image conversion unit 13 refers to during the conversion, information about the layout within the image, such as in which part of the image the sensor information of which sensor 2 is to be arranged, is defined in advance (FIGS. 10 and 11).
In addition to the set of sensor information, the image conversion unit 13 may acquire, via the time information acquisition unit 12, time information indicating the predetermined time at which that sensor information was measured, and may include that time information in what is converted into the image. If the sensor 2 includes a time stamp in the sensor information, the time information acquisition unit 12 acquires that time; if there is no time stamp, the reception time of the sensor information is used instead.
Note that the imaging process of the sensor information by the image conversion unit 13 may be omitted, and the behavior classification unit 14 may accept sensor information and time information that have not been converted into an image.
The behavior classification unit 14 classifies, from the image data representing the sensor information, the behavior of the observed person 2u at that time (S104). For this classification process, a classification model 14m is prepared in advance which, when image data is input, converts the corresponding behavior into data and outputs it. The classification model 14m is trained, for example, by a machine learning algorithm such as deep learning.
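As an editorial sketch of the classification step S104 (not the patent's implementation): the patent states that the classification model 14m can be trained by machine learning such as deep learning; the simpler view described later with FIG. 12, in which the current image is matched against registered image data and the associated behavior is returned, could look roughly like the following. The registered images, labels, and threshold below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical stand-in for the classification model 14m: registered
# (image, behavior label) pairs.  Random images are used purely as placeholders.
registered = [
    (np.random.rand(12, 12), "returning home"),
    (np.random.rand(12, 12), "sleeping"),
    (np.random.rand(12, 12), "going out"),
]

def classify(image, model, threshold=0.9):
    """Return the label of the most similar registered image (cosine similarity)."""
    v = image.ravel()
    best_label, best_sim = None, -1.0
    for ref, label in model:
        r = ref.ravel()
        sim = float(v @ r / (np.linalg.norm(v) * np.linalg.norm(r) + 1e-12))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else "unclassified"

print(classify(np.random.rand(12, 12), registered, threshold=0.0))
```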
The behavior correction unit 15 corrects unnatural behaviors that have occurred only momentarily by referring, for each individual behavior output by the behavior classification unit 14, to the behaviors before and after it in time (described later with FIG. 13).
To that end, when there is a local change between the behavior of current interest (the current behavior) and the behaviors before and after it (S111, Yes), the behavior correction unit 15 corrects the local behavior so that it is consistent with the surrounding behaviors, and then stores the corrected behavior in the current behavior storage unit 16 (S112). On the other hand, when there is no local change (S111, No), the natural behavior is stored in the current behavior storage unit 16 as it is (S113).
The behavior output unit 17 outputs the behavior recognition results accumulated in the current behavior storage unit 16 to the outside (the observer terminal 3). The output destination of the behavior recognition results is not limited to the customer environment (observer terminal 3); they may be output to other systems such as a database system or a cloud system.
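The correction in S111 to S112 can be illustrated with the following minimal Python sketch (an editorial assumption, not code from the patent): a behavior segment that differs from the identical segments before and after it, and whose duration is below a per-behavior threshold, is overwritten with the surrounding behavior. The thresholds follow the example values given later for FIG. 13; the default threshold is an assumption.

```python
# Per-behavior thresholds in minutes (example values from the FIG. 13 discussion).
THRESHOLDS_MIN = {"bathing": 20, "cleaning": 5}
DEFAULT_TH_MIN = 10  # assumed default for behaviors without a specific threshold

def correct(segments):
    """segments: list of (behavior, duration_minutes); returns the corrected list."""
    out = list(segments)
    for i in range(1, len(out) - 1):
        prev_b, (behavior, duration), next_b = out[i - 1][0], out[i], out[i + 1][0]
        th = THRESHOLDS_MIN.get(behavior, DEFAULT_TH_MIN)
        # Local change: a short behavior sandwiched between two identical behaviors
        if behavior != prev_b and prev_b == next_b and duration < th:
            out[i] = (prev_b, duration)
    return out

timeline = [("going out", 120), ("bathing", 5), ("going out", 180),
            ("cleaning", 3), ("going out", 60)]
print(correct(timeline))  # the 5-minute bathing and 3-minute cleaning become "going out"
```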
Hereinafter, specific examples of the processing (S102) of the sensor information conversion unit 11T will be described with reference to FIGS. 5 to 9.
FIG. 5 shows time-series graphs of sensor information with no detection omissions.
Graph 211 is the sensor information input from the sensor information acquisition unit 11 to the sensor information conversion unit 11T. Graph 211 contains the discrete value "1" indicating detection of the observed person 2u at each of the reaction times t1 to t5.
Graph 212 is the result of the sensor information conversion unit 11T converting the discrete-value sensor information into probability density functions, using graph 211 as input data. The sensor information conversion unit 11T receives the discrete value "1" at reaction time t1 and converts it into a probability density function given by curve m1, which peaks at reaction time t1. Similarly, the sensor information conversion unit 11T creates curve m2 for reaction time t2, curve m3 for reaction time t3, curve m4 for reaction time t4, and curve m5 for reaction time t5.
As the distribution used when converting the sensor information into a probability density function, the sensor information conversion unit 11T can apply, for example, a normal distribution, a Student's t distribution, a U (Universal) distribution, or an arbitrary distribution used in other statistical fields.
Graph 213 integrates the overlapping sections between the curves of graph 212. Here, when a plurality of curves exist at the same time, the sensor information conversion unit 11T adopts the maximum value of those curves, but the sum of the curves may be adopted instead. In this way, the value of the probability density function at each time is uniquely determined in graph 213.
Since the function values at the reaction times t1 to t5 are not "0" even after the conversion by the sensor information conversion unit 11T, correct detection results are not deleted by the sensor information conversion unit 11T.
FIG. 6 shows time-series graphs in which some detection omissions have occurred, derived from the time-series graphs of FIG. 5.
Graph 221 is the sensor information input from the sensor information acquisition unit 11 to the sensor information conversion unit 11T. At times t2 and t4, the discrete value becomes "0" due to detection omission, even though the observed person 2u is actually at home 2h. At the remaining reaction times t1, t3, and t5, the discrete value "1" is correctly detected, as in FIG. 5.
Graph 222 is the result of the sensor information conversion unit 11T converting the discrete-value sensor information into probability density functions, using graph 221 as input data. In graph 222, the curve m2 at time t2 and the curve m4 at time t4 are missing, compared to graph 212 of FIG. 5.
Graph 223 integrates the overlapping sections between the curves of graph 222, in the same way as graph 213 of FIG. 5. Focusing on time t2, the sensor information (function value) at time t2 is not "0" but is influenced by the probability density functions (curves m1 and m3) from the temporally nearby times t1 and t3. Similarly, the function value at time t4 is influenced by the probability density function (curve m5) from the temporally nearby time t5.
In this way, even if detection omissions occur at times t2 and t4, the omissions can be relieved by converting other temporally nearby signals into probability density functions.
FIG. 7 shows time-series graphs in which probability density functions other than a curve are applied to the same input data as the time-series graphs of FIG. 5.
Like graph 211, graph 231 contains the discrete value "1" indicating detection of the observed person 2u at each of the times t1 to t5.
Graph 232 is the result of the sensor information conversion unit 11T converting graph 231, as input data, into linear-approximation probability density functions that peak at the times t1 to t5 of the discrete value "1".
Linear approximation requires less computation. Besides linear approximation, the sensor information conversion unit 11T may also use the curve approximation shown in FIG. 5, a polynomial approximation (not shown), or the like.
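A minimal sketch of the linear-approximation variant (an editorial assumption; the half-width parameter is illustrative) replaces the bell-shaped kernel with a triangular one that falls off linearly from the peak.

```python
import numpy as np

def to_triangular_series(discrete, half_width=3):
    """Linear-approximation variant: each reaction time gets a triangular peak of
    height 1 that decreases linearly to 0 within half_width steps; overlapping
    triangles are combined by taking the maximum."""
    n = len(discrete)
    t = np.arange(n, dtype=float)
    out = np.zeros(n)
    for peak in np.flatnonzero(np.asarray(discrete) == 1):
        tri = np.clip(1.0 - np.abs(t - peak) / half_width, 0.0, 1.0)
        out = np.maximum(out, tri)
    return out

series = np.zeros(16, dtype=int)
series[[2, 6, 13]] = 1
print(np.round(to_triangular_series(series), 2))
```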
Graph 233 is the result of the sensor information conversion unit 11T converting graph 231, as input data, into random values in predetermined ranges. As shown below, the range that the random value can take differs depending on whether the discrete value of the input data is "0" or "1".
- Discrete value "0" in the input data → output data: a random value in the range 0 to 0.3
- Discrete value "1" in the input data → output data: a random value in the range 0.7 to 1.0
In this way, detection omissions may be relieved even during periods in which no temporally nearby discrete value "1" exists.
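The random-value conversion of graph 233 can be sketched as follows (an editorial illustration; the generator and function name are assumptions), mapping a 0 to a random value in [0, 0.3] and a 1 to a random value in [0.7, 1.0].

```python
import numpy as np

def to_random_series(discrete, rng=None):
    """Convert 0/1 readings into random values in fixed ranges, as in graph 233."""
    rng = rng or np.random.default_rng()
    d = np.asarray(discrete)
    low = rng.uniform(0.0, 0.3, size=d.shape)    # range for non-detection (0)
    high = rng.uniform(0.7, 1.0, size=d.shape)   # range for detection (1)
    return np.where(d == 1, high, low)

print(np.round(to_random_series([0, 1, 0, 0, 1]), 2))
```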
FIG. 8 is a graph in which the probability density function is applied to the spatial axis.
In FIGS. 5 to 7, the sensor information conversion unit 11T artificially created detection signals around the time at which the discrete value "1" of the input data occurred, by applying the probability density function to the time axis.
Similarly, in FIG. 8, the sensor information conversion unit 11T may apply the probability density function to the spatial axis, thereby creating pseudo detection signals also for places (bedroom, kitchen) located around the place (living room) where the discrete value "1" of the input data occurred.
FIG. 9 is a plan view showing a specific example of the space to which the graph of FIG. 8 is applied.
If the living room, where the discrete value "1" of the input data occurred, is given the existence probability "1 (100%)" for the observed person 2u, the sensor information conversion unit 11T also propagates an existence probability to the nearby rooms.
For example, in the kitchen and the bedroom, the discrete value "1" of the input data did not occur and the observed person 2u was not detected. Nevertheless, the sensor information conversion unit 11T propagates existence probabilities in order of proximity to the living room: kitchen (existence probability = 0.7) and bedroom (existence probability = 0.5).
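The spatial variant of FIGS. 8 and 9 can be sketched as follows (an editorial illustration; the adjacency weights other than the living-room example values 1.0/0.7/0.5 are assumptions): when a motion sensor fires in one room, an existence probability is also assigned to nearby rooms.

```python
# Room-to-room spread weights; the living-room row follows the FIG. 9 example,
# the other rows are illustrative assumptions.
SPREAD = {
    "living room": {"living room": 1.0, "kitchen": 0.7, "bedroom": 0.5},
    "kitchen":     {"kitchen": 1.0, "living room": 0.7, "bedroom": 0.4},
    "bedroom":     {"bedroom": 1.0, "living room": 0.5, "kitchen": 0.4},
}

def spatial_probabilities(detected_rooms):
    """detected_rooms: rooms whose sensor output the discrete value 1."""
    prob = {room: 0.0 for room in SPREAD}
    for src in detected_rooms:
        for dst, weight in SPREAD[src].items():
            prob[dst] = max(prob[dst], weight)  # keep the strongest contribution
    return prob

print(spatial_probabilities(["living room"]))
# {'living room': 1.0, 'kitchen': 0.7, 'bedroom': 0.5}
```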
FIG. 10 is an explanatory diagram showing an example of the layout data 13L used by the image conversion unit 13 for the imaging process. In the layout data 13L, the data contents to be written at each position in square image data of 12 cells vertically by 12 cells horizontally are arranged as in-figure symbols such as "T" and "ACC1". A "cell" is the smallest unit into which the image area is subdivided, and at least one cell of writing area is assigned to each item of sensor information and time information.
FIG. 11 is an explanatory table for the layout data 13L of FIG. 10. For example, the uppermost "T" in FIG. 10 corresponds to the in-figure symbol "T" in the first row, "time", of FIG. 11. The image data arranged at the position of the uppermost "T" in FIG. 10 is the time data acquired by the time information acquisition unit 12. That is, the single image shown in FIG. 12 is the result of aggregating and visualizing, as one image, the set of sensor information measured at the same measurement time (the time data of "T") by the sensors 2 arranged at the respective locations.
The types of sensor 2 whose sensor information the sensor information conversion unit 11T converts into a probability density function in S102 include, for example, sensors that detect motions of the observed person 2u, such as acceleration sensors and (door) open/close sensors, and sensors that detect the presence of the observed person 2u, such as motion sensors.
The third column of the explanatory table, "number of cells", indicates the size of the writing area. When the amount of data to be written is smaller than the amount of data the writing area can represent, part of the writing area would otherwise be left unused. In that case, the image conversion unit 13 fills the remaining cells in the image by copying the same data content to multiple locations.
The number of cells in the layout data 13L expresses the relative weight among the items of information to be written: the more cells an item is assigned, the greater its influence on the behavior classification. The allocation of cells is determined, for example, by the following policy (a sketch illustrating such an allocation follows the list).
- Because human behavior is habitual with respect to the time of day (going out during the day, sleeping at night, and so on), the time information "T" is allocated more cells (24 cells) than the other sensor information.
- Because the actions a person can take are narrowed down to some extent by where the person is, the sensor information (location information) of the motion sensors "HM1" to "HM5" is allocated more cells (12 cells) than the other sensor information.
- Because people also tend to repeat the same behavior depending on the day of the week (going to work on weekdays, resting at home on holidays, and so on), the day-of-week information "DoW" is allocated more cells (12 cells) than the sensor information that measures the environment of the home 2h.
- As sensor information that detects human movement, the acceleration sensors "ACC1" to "ACC4" and the open/close sensors "OC1" to "OC3" are allocated more cells (4 cells) than the sensor information that measures the environment of the home 2h.
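As a rough sketch, the allocation above could be held as a table of symbols and cell counts; the counts restate the policy as written, while the exact cell coordinates within the 12 x 12 grid, and whether the 4-cell allocation applies per sensor or per group, are not fixed here and are assumptions.

# Cell counts restating the allocation policy; placement coordinates within the
# 12 x 12 grid are omitted because they are defined by layout data 13L (FIG. 10).
LAYOUT_CELL_COUNTS = {
    "T":   24,  # time information
    "HM":  12,  # motion sensors HM1-HM5 (location information)
    "DoW": 12,  # day-of-week information
    "ACC": 4,   # acceleration sensors ACC1-ACC4
    "OC":  4,   # open/close sensors OC1-OC3
}

def fill_cells(value, n_cells):
    """Replicate one value across its assigned cells; copying the same data
    content fills any leftover cells, as described for the image conversion unit 13."""
    return [value] * n_cells

print(fill_cells(0.31, LAYOUT_CELL_COUNTS["T"]))  # the time value repeated over 24 cells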
The fourth column of the explanatory table, "value", indicates the data content written into the writing area. For example, when the color depth of the image data is 8-bit grayscale, the amount of data a writing area can represent is 2 to the 8th power = 256 distinct values. Since the color depth of the image data can be set arbitrarily, the representable amount of data is not limited to 256 values; for example, 8-bit grayscale and 16-bit color may convert the same sensor reading into different values and precisions. In this embodiment, the range 0.00 to 1.00 is described with a precision of 0.01.
For example, the value "0.31" for the time "T" indicates 7:40 a.m. when 0:00 corresponds to the value "0.00" and 23:59 corresponds to the value "1.00". The day of the week, on the other hand, is selected from seven values, with Monday as "0.00" and Sunday as "1.00".
The "value" mentioned above is a value in an arbitrary range based on the value of each item of sensor information. Besides a color corresponding to the value of each item of sensor information as described above, it also covers the case where the sensor value is used as-is.
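A minimal sketch of this normalization; keeping values at the 0.01 precision mentioned above by truncation reproduces the 0.31 example for 7:40, though whether the embodiment truncates or rounds is an assumption.

import math

def normalize_time(hour, minute):
    """Map a time of day to [0.00, 1.00]: 0:00 -> 0.00 and 23:59 -> 1.00,
    kept at the 0.01 precision used in this embodiment (truncation assumed)."""
    fraction = (hour * 60 + minute) / (23 * 60 + 59)
    return math.floor(fraction * 100) / 100

def normalize_day_of_week(day_index):
    """Map Monday..Sunday (0..6) to one of seven values, Monday -> 0.00, Sunday -> 1.00."""
    return round(day_index / 6, 2)

print(normalize_time(7, 40))       # 0.31, matching the example above
print(normalize_day_of_week(0))    # 0.0 (Monday)
print(normalize_day_of_week(6))    # 1.0 (Sunday)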
In the "humidity" row "HUM" of FIG. 11, the number of cells "5 (= 1 x 5)" means a total of five cells, since each humidity sensor has a writing area of one cell and there are five sensors. The "humidity" values "0.66, 0.57, 0.64, 0.58, 0.7" indicate, from left to right, the value "0.66" of the first humidity sensor, the value "0.57" of the second humidity sensor, ..., and the value "0.7" of the fifth humidity sensor.
The layout data 13L described above is an example in which sensor information of the same type is grouped close together in the image. Alternatively, sensor information from sensors installed in the same location (room) may be grouped close together in the image.
FIG. 12 is an explanatory diagram of the image data obtained by writing the "values" of FIG. 11 into the layout data 13L of FIG. 10. In FIG. 12, symbols such as "T" and "ACC1" are shown for ease of explanation, but in practice these symbols are omitted from the image.
For example, the image conversion unit 13 writes black, representing the value "0", into the writing area of "ACC1", and writes white, representing the value "1", into the writing area of "HM4". In other words, the larger the value to be written, the closer the color is to white.
Further, the classification model 14m is defined by associating the image data created by the image conversion unit 13 with the behavior of the observed person 2u, here "returning home", that represents the situation indicated by that image data.
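A minimal sketch of writing such values as grayscale pixels; the cell coordinates used below are illustrative assumptions, since the real positions are defined by the layout data 13L.

def to_gray_level(value, bit_depth=8):
    """Convert a normalized value in [0, 1] to a pixel intensity:
    0 -> black (0), 1 -> white (255 for 8-bit grayscale)."""
    return round(value * (2 ** bit_depth - 1))

# Hypothetical cell positions for two symbols; real positions come from layout data 13L.
image = [[0] * 12 for _ in range(12)]            # 12 x 12 grayscale image, initially black
cells = {"ACC1": [(4, 0)], "HM4": [(2, 3)]}      # (row, col) writing areas, assumptions
values = {"ACC1": 0.0, "HM4": 1.0}

for symbol, positions in cells.items():
    for row, col in positions:
        image[row][col] = to_gray_level(values[symbol])  # ACC1 -> 0 (black), HM4 -> 255 (white)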
By referring to the previously registered classification model 14m, the behavior classification unit 14 outputs the corresponding behavior "returning home" of the classification model 14m as the classification result when image data that matches or resembles the image data of the classification model 14m is detected for the current observed person 2u (S104).
In defining the classification model 14m, a human such as the observer 3u may supply meaningful behavior labels such as "returning home" or "resting". Alternatively, behavior labels such as "behavior A" and "behavior B", which carry no particular meaning but simply group similar behaviors classified automatically by machine learning, may be used.
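As one rough illustration of looking up "matching or similar" image data, the sketch below uses a simple nearest-neighbor comparison on pixel differences; this similarity measure is an assumption, and the actual classification model 14m may instead be any learned classifier.

def classify_behavior(current_image, classification_model):
    """Return the behavior label of the registered image most similar to the current one.

    classification_model: list of (image, behavior_label) pairs registered in the past,
    where each image is a 2-D list of pixel intensities.
    """
    def distance(a, b):
        return sum(abs(pa - pb)
                   for row_a, row_b in zip(a, b)
                   for pa, pb in zip(row_a, row_b))

    _, best_label = min(classification_model,
                        key=lambda entry: distance(current_image, entry[0]))
    return best_label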
FIG. 13 is a time-series graph showing the processing performed by the behavior correction unit 15.
Graph 241 shows the output data of the behavior classification unit 14 before correction. In graph 241, the observed person 2u is basically detected as being out of the home, but suppose that a bathing behavior of 5 minutes (ΔT1) is detected at 10:00 and a cleaning behavior of 3 minutes (ΔT2) is detected at 15:00.
Graph 242 shows the output data of the behavior correction unit 15 after correction. When a behavior that differs from the behaviors immediately before and after it is detected abruptly, the behavior correction unit 15 corrects the differing behavior so that it becomes the same as the surrounding behavior (S112).
To that end, the behavior correction unit 15 determines that a behavior is an unnatural behavior to be corrected when its duration (ΔT1, ΔT2) is shorter than a predetermined period Th = 10 minutes. As a result, the bathing behavior at 10:00 and the cleaning behavior at 15:00 are each corrected to the same behavior as the surrounding "out of home" behavior.
As a method of detecting unnatural behaviors to be corrected, the behavior correction unit 15 may also refer to the type of behavior, not only its duration. For example, the behavior correction unit 15 may treat as a correction target a behavior (going out) whose occurrence immediately after (one minute after) the preceding behavior (relaxing) is itself unnatural.
In deciding whether to correct a behavior that differs from the surrounding behaviors, the behavior correction unit 15 may also change the predetermined period Th used for comparison depending on the type of the differing behavior. For example, a bathing behavior is corrected as unnatural when it lasts less than 20 minutes (predetermined period Th1), whereas a cleaning behavior is corrected as unnatural when it lasts less than 5 minutes (predetermined period Th2).
As a comparative example, there is also a method of improving the accuracy of behavior recognition by shortening the time interval of behavior detection, but this method makes the control complicated.
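A minimal sketch of this correction step (S112), combining the fixed threshold Th = 10 minutes with the per-behavior thresholds Th1 and Th2 mentioned above; the segment representation and function names are assumptions.

# Th1 / Th2 from the description above; any behavior not listed falls back to Th = 10 minutes.
BEHAVIOR_THRESHOLDS = {"bathing": 20, "cleaning": 5}

def correct_behaviors(segments, thresholds=BEHAVIOR_THRESHOLDS, default_th=10):
    """Replace abrupt, short-lived behaviors with the surrounding behavior.

    segments: list of (behavior_label, duration_minutes) in time order. A segment
    shorter than its threshold whose neighbors on both sides share the same label
    is treated as unnatural and overwritten with that label.
    """
    corrected = [list(s) for s in segments]
    for i in range(1, len(segments) - 1):
        label, duration = segments[i]
        prev_label, next_label = segments[i - 1][0], segments[i + 1][0]
        if duration < thresholds.get(label, default_th) and prev_label == next_label != label:
            corrected[i][0] = prev_label
    return [tuple(s) for s in corrected]

# The 5-minute bathing and 3-minute cleaning bursts from graph 241 become "out".
day = [("out", 120), ("bathing", 5), ("out", 290), ("cleaning", 3), ("out", 60)]
print(correct_behaviors(day))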
In the present embodiment described above, even when the sensor information acquired by the sensor information acquisition unit 11 misses the observed person 2u because of a momentary malfunction, the sensor information conversion unit 11T converts the sensor information into a probability density function based on neighboring sensor information on the time axis or the spatial axis, so that the missed detection can be remedied. This suppresses a decrease in recognition accuracy even when the sensor information contains erroneous detection data.
The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments described above have been explained in detail in order to make the present invention easy to understand, and the invention is not necessarily limited to one having all of the described configurations.
It is also possible to replace part of the configuration of one embodiment with the configuration of another embodiment, and to add the configuration of another embodiment to the configuration of one embodiment.
Furthermore, other configurations may be added to, deleted from, or substituted for part of the configuration of each embodiment. Each of the above configurations, functions, processing units, processing means, and the like may be realized partly or entirely in hardware, for example by designing them as integrated circuits.
Each of the above configurations, functions, and the like may also be realized in software, with a processor interpreting and executing a program that implements each function.
Information such as the programs, tables, and files that implement each function can be stored in memory, in a recording device such as a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC (Integrated Circuit) card, an SD card, or a DVD (Digital Versatile Disc).
The control lines and information lines shown are those considered necessary for explanation; not all control lines and information lines in a product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.
Furthermore, the communication means connecting the devices is not limited to a wireless LAN and may be changed to a wired LAN or other communication means.
1   Behavior recognition server
2   Sensor
2u  Observed person
3   Observer terminal
11  Sensor information acquisition unit
11T Sensor information conversion unit
12  Time information acquisition unit
13  Image conversion unit
13L Layout data
14  Behavior classification unit
14m Classification model
15  Behavior correction unit
16  Current behavior accumulation unit
17  Behavior output unit

Claims (6)

1.  A behavior recognition server comprising:
    a sensor information acquisition unit that acquires, from a set of sensors that detect an observed person, sensor information indicating the detection result of each sensor;
    a sensor information conversion unit that converts the time-series sensor information into a probability density function in the time direction whose maximum value lies at the reaction time at which the observed person was detected in the sensor information;
    a behavior classification unit that classifies the behavior of the observed person at each time based on the converted sensor information; and
    a behavior output unit that converts the classified behavior of the observed person into data and outputs the data.
2.  The behavior recognition server according to claim 1, wherein
    the sensor information conversion unit converts the sensor information into a probability density function in the spatial direction whose maximum value lies at the reaction location at which the observed person was detected in the sensor information.
3.  The behavior recognition server according to claim 1, further comprising an image conversion unit, wherein
    the image conversion unit converts each item of the sensor information into an image by writing, according to the arrangement of each item of the sensor information in the image defined by layout data, a value in an arbitrary range based on the value of that sensor information, and the image is used as the sensor information input to the behavior classification unit.
4.  The behavior recognition server according to claim 1, further comprising a behavior correction unit, wherein,
    with respect to the behavior of the observed person classified by the behavior classification unit, when a behavior that differs from the behaviors immediately before and after it is detected abruptly, the behavior correction unit corrects the differing behavior so that it becomes the same as the surrounding behavior.
5.  A behavior recognition system comprising:
    a set of sensors that detect an observed person, including a sensor installed in a direction not facing the entrance of the room in which the observed person lives; and
    a behavior recognition server that recognizes the behavior of the observed person, wherein
    the behavior recognition server includes:
    a sensor information acquisition unit that acquires, from the set of sensors that detect the observed person, sensor information indicating the detection result of each sensor;
    a sensor information conversion unit that converts the time-series sensor information into a probability density function in the time direction whose maximum value lies at the reaction time at which the observed person was detected in the sensor information;
    a behavior classification unit that classifies the behavior of the observed person at each time based on the converted sensor information; and
    a behavior output unit that converts the classified behavior of the observed person into data and outputs the data.
6.  A behavior recognition method performed by a behavior recognition server having a sensor information acquisition unit, a sensor information conversion unit, a behavior classification unit, and a behavior output unit, wherein:
    the sensor information acquisition unit acquires, from a set of sensors that detect an observed person, sensor information indicating the detection result of each sensor;
    the sensor information conversion unit converts the time-series sensor information into a probability density function in the time direction whose maximum value lies at the reaction time at which the observed person was detected in the sensor information;
    the behavior classification unit classifies the behavior of the observed person at each time based on the converted sensor information; and
    the behavior output unit converts the classified behavior of the observed person into data and outputs the data.
PCT/JP2020/042057 2020-03-25 2020-11-11 Behavior recognition server, behavior recognition system, and behavior recognition method WO2021192399A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080064882.1A CN114402575B (en) 2020-03-25 2020-11-11 Action recognition server, action recognition system, and action recognition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-054435 2020-03-25
JP2020054435A JP7436257B2 (en) 2020-03-25 2020-03-25 Behavior recognition server, behavior recognition system, and behavior recognition method

Publications (1)

Publication Number Publication Date
WO2021192399A1 true WO2021192399A1 (en) 2021-09-30

Family

ID=77891272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/042057 WO2021192399A1 (en) 2020-03-25 2020-11-11 Behavior recognition server, behavior recognition system, and behavior recognition method

Country Status (3)

Country Link
JP (1) JP7436257B2 (en)
CN (1) CN114402575B (en)
WO (1) WO2021192399A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030229471A1 (en) * 2002-01-22 2003-12-11 Honeywell International Inc. System and method for learning patterns of behavior and operating a monitoring and response system based thereon
JP2004145820A (en) * 2002-10-28 2004-05-20 Nippon Telegr & Teleph Corp <Ntt> Living motion detection method, device and program, and storage medium storing the program
JP2019087179A (en) * 2017-11-10 2019-06-06 富士通株式会社 Analyzer, analysis method and program

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3996428B2 (en) 2001-12-25 2007-10-24 松下電器産業株式会社 Abnormality detection device and abnormality detection system
CN1322465C (en) * 2005-08-15 2007-06-20 阜阳师范学院 Image segmentation and fingerprint line distance getting technique in automatic fingerprint identification method
JP2011232871A (en) * 2010-04-26 2011-11-17 Sony Corp Information processor, text selection method and program
JP2012058780A (en) * 2010-09-03 2012-03-22 Toyota Motor Corp Device and method for creating environment map and device and method for action prediction
JP5593486B2 (en) * 2012-10-18 2014-09-24 独立行政法人産業技術総合研究所 Sensor network system
JP2016006611A (en) 2014-06-20 2016-01-14 ソニー株式会社 Information processing device, information processing method, and program
US20170312574A1 (en) * 2015-01-05 2017-11-02 Sony Corporation Information processing device, information processing method, and program
DE102015207415A1 (en) * 2015-04-23 2016-10-27 Adidas Ag Method and apparatus for associating images in a video of a person's activity with an event
KR20170084445A (en) * 2016-01-12 2017-07-20 삼성에스디에스 주식회사 Method and apparatus for detecting abnormality using time-series data
JP2017224174A (en) * 2016-06-15 2017-12-21 シャープ株式会社 Information acquisition terminal, information collection device, behavior observation system, control method of information acquisition terminal, and control method of information collection device
JP6890813B2 (en) 2016-08-22 2021-06-18 学校法人慶應義塾 Behavior detection system, information processing device, program
CN106644436B (en) * 2016-12-16 2019-02-01 中国西电电气股份有限公司 A kind of assessment method of breaker mechanic property
JP6795093B2 (en) * 2017-06-02 2020-12-02 富士通株式会社 Judgment device, judgment method and judgment program
JP2019054333A (en) * 2017-09-13 2019-04-04 株式会社東芝 Wireless terminal, wireless communication system, wireless communication method and wireless communication program
CN108764059B (en) * 2018-05-04 2021-01-01 南京邮电大学 Human behavior recognition method and system based on neural network
JP2019213030A (en) * 2018-06-04 2019-12-12 凸版印刷株式会社 Monitoring system
JP7085750B2 (en) 2018-07-18 2022-06-17 株式会社Z-Works Lifestyle analysis system, lifestyle analysis method and program
CN109362066B (en) * 2018-11-01 2021-06-25 山东大学 Real-time behavior recognition system based on low-power-consumption wide-area Internet of things and capsule network and working method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030229471A1 (en) * 2002-01-22 2003-12-11 Honeywell International Inc. System and method for learning patterns of behavior and operating a monitoring and response system based thereon
JP2004145820A (en) * 2002-10-28 2004-05-20 Nippon Telegr & Teleph Corp <Ntt> Living motion detection method, device and program, and storage medium storing the program
JP2019087179A (en) * 2017-11-10 2019-06-06 富士通株式会社 Analyzer, analysis method and program

Also Published As

Publication number Publication date
CN114402575B (en) 2023-12-12
JP2021157275A (en) 2021-10-07
JP7436257B2 (en) 2024-02-21
CN114402575A (en) 2022-04-26

Similar Documents

Publication Publication Date Title
Ghayvat et al. Smart aging system: uncovering the hidden wellness parameter for well-being monitoring and anomaly detection
Monekosso et al. Behavior analysis for assisted living
CN108348160B (en) Monitoring a person's activities of daily living
Aran et al. Anomaly detection in elderly daily behavior in ambient sensing environments
Dahmen et al. Smart secure homes: a survey of smart home technologies that sense, assess, and respond to security threats
Sunder et al. Incidence, characteristics, and mortality of infective endocarditis in France in 2011
US20180174671A1 (en) Cognitive adaptations for well-being management
US20210241923A1 (en) Sensor-based machine learning in a health prediction environment
JP2005509218A (en) Patient data mining to maintain quality
EP3163545A1 (en) Abnormal activity detection for elderly and handicapped individuals
Howedi et al. An entropy-based approach for anomaly detection in activities of daily living in the presence of a visitor
JP2019155071A (en) Event prediction system, sensor signal processing system, event prediction method, and program
Bijlani et al. An unsupervised data-driven anomaly detection approach for adverse health conditions in people living with dementia: Cohort study
WO2021192399A1 (en) Behavior recognition server, behavior recognition system, and behavior recognition method
WO2021192398A1 (en) Behavior recognition server and behavior recognition method
Gargees et al. Early illness recognition in older adults using transfer learning
Ou et al. Identifying Elderlies at Risk of Becoming More Depressed with Internet-of-Things
EP3163546A1 (en) Method and device for detecting anomalous behavior of a user
JP6830298B1 (en) Information processing systems, information processing devices, information processing methods, and programs
Jiang et al. Recognising activities at home: Digital and human sensors
Akbarzadeh et al. Smart aging system
Zhao et al. Resident activity recognition based on binary infrared sensors and soft computing
Abir et al. The Association of Inpatient Occupancy with Hospital‐Acquired Clostridium difficile Infection.
US20240170117A1 (en) Homeowner Health Alerts and Mitigation Based on Home Sensor Data
KR20220102737A (en) Method And Computer Program for Predicting Recurrence Risk of Acute Coronary Syndrome

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927577

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927577

Country of ref document: EP

Kind code of ref document: A1