WO2022196213A1 - Monitoring System - Google Patents
Monitoring System (見守りシステム)
- Publication number
- WO2022196213A1 · PCT/JP2022/005693 (JP2022005693W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- unit
- contact
- imaging
- image
- Prior art date
Links
- 238000003384 imaging method Methods 0.000 claims abstract description 78
- 238000012544 monitoring process Methods 0.000 claims description 26
- 230000033001 locomotion Effects 0.000 claims description 9
- 238000012545 processing Methods 0.000 description 55
- 238000001514 detection method Methods 0.000 description 26
- 230000006870 function Effects 0.000 description 25
- 238000000034 method Methods 0.000 description 19
- 238000004891 communication Methods 0.000 description 18
- 230000010365 information processing Effects 0.000 description 13
- 238000009434 installation Methods 0.000 description 13
- 238000010801 machine learning Methods 0.000 description 9
- 238000010586 diagram Methods 0.000 description 7
- 238000013178 mathematical model Methods 0.000 description 7
- 210000003128 head Anatomy 0.000 description 4
- 210000003127 knee Anatomy 0.000 description 4
- 238000000605 extraction Methods 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000007477 logistic regression Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000007637 random forest analysis Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/70—Determining position or orientation of objects or cameras
  - G08—SIGNALLING
    - G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
      - G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
        - G08B21/02—Alarms for ensuring the safety of persons
      - G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
        - G08B25/01—characterised by the transmission medium
          - G08B25/04—using a single signalling line, e.g. in a closed loop
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to a monitoring system.
- Patent Literature 1 discloses a monitored-person posture detection device that includes an imaging device, a skeleton information extraction unit, a skeleton information sample storage unit, a posture detection unit, and a posture determination unit.
- the imaging device acquires image data of an imaging area for monitoring a person to be monitored.
- the skeleton information extraction unit extracts the skeleton information of the monitored person from the image data captured by the imaging device.
- the skeleton information sample storage unit stores posture information samples made up of skeleton information.
- the posture detection unit detects the posture of the monitored person based on the skeleton information extracted by the skeleton information extraction unit and the posture information samples stored in the skeleton information sample storage unit.
- a posture determination unit determines whether or not there is a fall based on the posture detected by the posture detection unit.
- the posture detection device as described above has room for further improvement, for example, in terms of grasping the presence or absence of contact between people.
- the present invention has been made in view of the above circumstances, and aims to provide a monitoring system that can properly grasp the contact situation of a person.
- the monitoring system of the present invention includes: an imaging unit that captures an image of a monitored space; a skeleton model generator that generates a skeleton model representing a person included in the image captured by the imaging unit; a position detector capable of detecting the position, with respect to the imaging depth direction of the imaging unit, of the person corresponding to the skeleton model; and a determination unit that determines contact between a first person and a second person based on the presence or absence of overlap between a first skeleton model representing the first person and a second skeleton model representing the second person, the position of the first person in the imaging depth direction detected by the position detector, and the position of the second person in the imaging depth direction detected by the position detector.
- the monitoring system according to the present invention has the effect of being able to properly grasp the contact situation of a person.
- FIG. 1 is a block diagram showing a schematic configuration of the watching system according to the embodiment.
- FIG. 2 is a schematic diagram showing an installation example of the watching system according to the embodiment.
- FIG. 3 is a schematic diagram illustrating an example of state determination based on a skeleton model in the watching system according to the embodiment.
- FIG. 4 is a schematic diagram illustrating an example of determination in the watching system according to the embodiment.
- FIG. 5 is a schematic diagram illustrating an example of determination in the watching system according to the embodiment.
- FIG. 6 is a schematic diagram illustrating an example of determination in the watching system according to the embodiment.
- FIG. 7 is a schematic diagram illustrating an example of determination in the watching system according to the embodiment.
- FIG. 8 is a flowchart explaining an example of processing in the watching system according to the embodiment.
- the watching system 1 of this embodiment shown in FIGS. 1 and 2 is a system that monitors and watches the state of a person P existing in the monitored space SP.
- the watching system 1 of the present embodiment is applied, for example, to welfare facilities such as care facilities such as outpatient care (day service) and facilities for the elderly.
- the monitored space SP is, for example, a living room space, a corridor space, or the like of the facility.
- the monitoring system 1 of the present embodiment includes an installation device 10, a detection device 20, and a terminal device 30, which constitute a cooperation system in which information is mutually transmitted and received and cooperated.
- the installation device 10 and the detection device 20 constitute a detection system 1A that detects contact (interference) between persons P in the monitored space SP and records the situation at the time of contact.
- the detection system 1A determines the state of the person P existing in the monitored space SP based on the skeleton model MDL (see FIG. 3) representing the person P, thereby realizing a configuration for appropriately grasping the situation at the time of contact between persons P.
- the connection method between the components for the transmission and reception of power supply, control signals, various information, and the like is either wired connection or wireless connection unless otherwise specified.
- a wired connection is, for example, a connection via a wiring material such as an electric wire or an optical fiber.
- Wireless connection is, for example, connection by wireless communication, contactless power supply, or the like.
- the installation device 10 is a device that is installed in the monitored space SP and captures an image of the monitored space SP.
- the installation device 10 includes an imaging unit 11 , a position detector 12 , a display 13 , a speaker 14 and a microphone 15 .
- the installation device 10 constitutes an indoor monitoring module that integrates various functions, for example by assembling these components into a housing or the like to form a unit and installing the unit on the ceiling or the like of the monitored space SP. Alternatively, these components may be provided individually in the monitored space SP. For example, a plurality of installation devices 10 are provided in a facility to which the watching system 1 is applied.
- the imaging unit 11 captures an image I (for example, see FIG. 3) of the monitored space SP.
- the imaging unit 11 may be, for example, a monocular camera capable of capturing a two-dimensional image, or a stereo camera capable of capturing a three-dimensional image. Also, the imaging unit 11 may be a so-called TOF (Time of Flight) camera or the like.
- the imaging unit 11 is typically provided at a position where all persons P present in the monitored space SP can be imaged.
- the imaging unit 11 is arranged, for example, above the monitored space SP, here, on the ceiling, and the angle of view is set so that the imaging range includes the entire area of the monitored space SP. If one imaging unit 11 cannot cover the entire monitored space SP, a plurality of imaging units 11 may be provided so as to cover the entire monitored space SP.
- hereinafter, the direction along the optical axis of the imaging unit 11 may be referred to as the "imaging depth direction X", a direction intersecting the imaging depth direction X and along the horizontal direction may be referred to as the "imaging width direction Y", and a direction intersecting the imaging depth direction X and along the vertical direction may be referred to as the "imaging vertical direction Z".
- imaging depth direction X typically corresponds to a direction along the optical axis direction of the imaging unit 11 .
- the position detector 12 is a detector capable of detecting the position of the person P with respect to the imaging depth direction X by the imaging unit 11 .
- the position detector 12 detects the position, in the imaging depth direction X, of the person P imaged by the imaging unit 11.
- the position of the person P imaged by the imaging unit 11 with respect to the imaging depth direction X typically corresponds to the position, in the imaging depth direction X, of the person P corresponding to the skeleton model MDL generated by the detection device 20, as will be described later.
- as the position detector 12, various radars, sonars, LiDAR (light detection and ranging) sensors, and the like that detect distance using lasers, infrared rays, millimeter waves, ultrasonic waves, or the like can be used.
- the position detector 12 can detect the position of the person P in the imaging depth direction X by measuring the distance between the person P and the position detector 12 along the imaging depth direction X, for example.
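- a minimal sketch of this measurement, assuming a ranging sensor co-located with the imaging unit 11 that reports a straight-line distance and the angle of the measurement ray off the optical axis; the function name, signature, and sample values are illustrative assumptions, not taken from the publication:

```python
import math

def depth_position(range_m: float, off_axis_angle_rad: float) -> float:
    """Project a raw range reading onto the imaging depth direction X.

    Assumes the ranging sensor (radar/sonar/LiDAR/TOF) sits at the
    imaging unit 11 and reports the straight-line distance to the
    person P plus the angle between the measurement ray and the
    optical axis; the component along the optical axis is the
    position in the imaging depth direction X.
    """
    return range_m * math.cos(off_axis_angle_rad)

# A person 4.2 m away, 15 degrees off the optical axis, sits at
# roughly 4.06 m in the imaging depth direction X.
print(round(depth_position(4.2, math.radians(15)), 2))
```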
- the display 13 displays (outputs) image information (visual information) toward the monitored space SP.
- the display 13 is configured by, for example, a thin display such as a liquid crystal display, a plasma display, or an organic EL display.
- the display 13 displays image information at a position visible from the person P within the monitored space SP.
- the display 13 provides various guidance (announcements) by outputting image information, for example.
- the speaker 14 outputs sound information (auditory information) toward the monitored space SP.
- the speaker 14 performs various guidance (announcements) by outputting sound information, for example.
- the microphone 15 is a sound collecting device that converts sounds generated in the monitored space SP into electric signals.
- the microphone 15 can be used, for example, for exchanging voices with persons outside the monitored space SP (for example, facility staff, etc.).
- the detection device 20 is a device that detects the state of the person P existing in the monitored space SP based on the skeleton model MDL generated from the image I captured by the installed device 10 .
- the detection device 20 includes an interface section 21, a storage section 22, and a processing section 23, which are connected so as to be able to communicate with each other.
- the detection device 20 may constitute a so-called cloud service type device (cloud server) implemented on a network, or may constitute a so-called stand-alone type device separated from the network.
- the detection device 20 can also be configured by installing an application for realizing various processes in various computer devices such as personal computers, workstations, and tablet terminals, for example.
- the detection device 20 is provided, for example, in a facility management center or the like to which the watching system 1 is applied, but is not limited to this.
- the interface unit 21 is an interface for transmitting and receiving various information to and from other devices other than the detecting device 20.
- the interface section 21 has a function of wired communication of information with each section via an electric wire or the like, a function of wireless communication of information with each section via a wireless communication unit or the like, and the like.
- the interface unit 21 transmits and receives information to and from a plurality of installed devices 10 and a plurality of terminal devices 30 as devices other than the detected device 20 .
- in the illustrated example, the interface unit 21 is directly communicably connected to the plurality of installed devices 10, and is communicably connected to the plurality of terminal devices 30 via the communication unit 21a and the network N; however, the configuration is not limited to this.
- a plurality of installed devices 10 may also be connected to the interface section 21 via the communication section 21a and the network N.
- the communication unit 21a is a communication module (Data Communication Module) connected to the network N for communication.
- the network N can use any communication network, whether wired or wireless.
- the storage unit 22 is a storage circuit that stores various information.
- the storage unit 22 may be a relatively large-capacity storage device such as a hard disk, an SSD (Solid State Drive), or an optical disc, or a data-rewritable semiconductor memory such as a RAM, a flash memory, or an NVSRAM (Non-Volatile Static Random Access Memory).
- the storage unit 22 stores, for example, programs for the detecting device 20 to implement various functions.
- the programs stored in the detecting device 20 include a program that causes the interface section 21 to function, a program that causes the communication section 21a to function, a program that causes the processing section 23 to function, and the like.
- the storage unit 22 stores, for example, a learned mathematical model used for determining the state of the person P in the monitored space SP.
- the storage unit 22 also stores various data necessary for various processes in the processing unit 23 . These various data are read from the storage unit 22 by the processing unit 23 and the like as necessary. Note that the storage unit 22 may be implemented by a cloud server or the like connected to the detecting device 20 via the network N.
- the processing unit 23 is a processing circuit that implements various processing functions in the detection device 20 .
- the processing unit 23 is realized by, for example, a processor.
- a processor means a circuit such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), an ASIC (Application Specific Integrated Circuit), and an FPGA (Field Programmable Gate Array).
- the processing unit 23 implements each processing function by executing a program read from the storage unit 22, for example.
- the processing unit 23 can execute processing of inputting image data representing the image I captured by the imaging unit 11 of the installation device 10 to the detection device 20 via the interface unit 21 .
- the terminal device 30 is a device that is communicably connected to the detecting device 20 .
- the terminal device 30 includes an interface section 31, a storage section 32, a processing section 33, a display 34, a speaker 35, and a microphone 36, which are connected so as to be able to communicate with each other.
- the terminal device 30 can also be configured by installing applications for realizing various processes in various computer devices such as personal computers, workstations, and tablet terminals, for example.
- the terminal device 30 may constitute, for example, a portable terminal device that can be carried by a staff member or the like of the facility to which the watching system 1 is applied, or may constitute a stationary management terminal device.
- the interface unit 31, storage unit 32, processing unit 33, display 34, speaker 35, and microphone 36 have substantially the same configurations as the interface unit 21, storage unit 22, processing unit 23, display 13, speaker 14, and microphone 15 described above, respectively.
- the interface unit 31 is an interface for transmitting and receiving various information to and from other devices other than the terminal device 30 .
- the interface unit 31 is communicably connected to the detecting device 20 via the communication unit 31a and the network N.
- the communication unit 31a is a communication module, like the communication unit 21a described above.
- the storage unit 32 stores, for example, programs for the terminal device 30 to implement various functions.
- the processing unit 33 is a processing circuit that implements various processing functions in the terminal device 30 .
- the processing unit 33 implements each processing function by executing a program read from the storage unit 32, for example.
- the display 34 displays image information.
- the speaker 35 outputs sound information.
- a microphone 36 is a sound collecting device that converts sound into an electric signal.
- with this configuration, the processing unit 23 according to the present embodiment has functions for performing various processes of determining the state of the person P existing in the monitored space SP based on the skeleton model MDL representing the person P, as shown in FIGS. 3 to 7, and of appropriately grasping and recording the situation at the time of contact.
- in order to realize the various processing functions described above, the processing unit 23 of the present embodiment functionally and conceptually includes an information processing unit 23a, a skeleton model generation unit 23b, a determination unit 23c, and a motion processing unit 23d. The processing unit 23 implements the processing functions of these units by executing programs read from the storage unit 22, for example.
- the information processing unit 23a is a part having a function capable of executing processing related to various information used in the monitoring system 1.
- the information processing section 23 a can execute processing for transmitting and receiving various information to and from the installed device 10 and the terminal device 30 .
- the monitoring system 1 can exchange information (for example, audio information, image information, etc.) with the installation device 10 and the terminal device 30 through processing by the information processing section 23a.
- the information processing section 23a can execute a process of acquiring image data representing the image I of the monitored space SP captured by the imaging section 11 from the installation device 10 and temporarily storing the image data in the storage section 22.
- the skeletal model generation unit 23b is a part having a function capable of executing a process of generating a skeletal model MDL (see FIG. 3) representing the person P included in the image I of the monitored space SP captured by the imaging unit 11.
- the skeletal model MDL is a human body model that represents the human body skeleton including the head, eyes, nose, mouth, shoulders, hips, feet, knees, elbows, hands, joints, etc. of the person P in three dimensions.
- the skeletal model generation unit 23b can generate the skeleton model MDL representing the person P included in the image I by, for example, top-down skeleton estimation, in which the person P is first detected and the skeleton of the person P is then estimated.
- in this case, the skeletal model generation unit 23b recognizes the person P in the image I using various known object recognition techniques, and executes a process of enclosing the area of the image I in which the recognized person P exists with a bounding box BB.
- the bounding box BB is a rectangular frame having a size necessary to enclose the person P recognized in the image I.
- the skeletal model generation unit 23b then detects, within the bounding box BB, the positions of skeletal parts of the human body such as the head, eyes, nose, mouth, shoulders, waist, feet, knees, elbows, hands, and joints, and generates the skeleton model MDL of the person P by combining them.
- in the skeleton model MDL illustrated in FIG. 3, the skeletal parts of the human body such as the head, eyes, nose, mouth, shoulders, waist, feet, knees, elbows, and hands of the person P are symbolically represented by "points", and the model is generated by connecting these points with "lines".
- when a plurality of persons P are included in the image I, the skeletal model generation unit 23b generates a plurality of skeleton models MDL according to the number of persons P.
- the skeletal model generation unit 23b stores the generated skeletal model MDL in the storage unit 22.
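- for illustration only, a minimal Python sketch of this top-down flow; the data layout, the part names, and the `detect_people` / `estimate_joints` callables are assumptions standing in for whatever detector and pose estimator are actually used:

```python
from dataclasses import dataclass, field

@dataclass
class SkeletonModel:
    """One possible layout for the skeleton model MDL: skeletal parts
    ("points") with 3D coordinates, joined by "lines" (bones)."""
    person_id: str                                  # identification information, if known
    joints: dict[str, tuple[float, float, float]]   # part name -> (x, y, z)
    bones: list[tuple[str, str]] = field(default_factory=list)

# Hypothetical "lines" connecting the detected "points".
BONES = [
    ("head", "shoulder_l"), ("head", "shoulder_r"),
    ("shoulder_l", "elbow_l"), ("elbow_l", "hand_l"),
    ("shoulder_r", "elbow_r"), ("elbow_r", "hand_r"),
    ("shoulder_l", "waist"), ("shoulder_r", "waist"),
    ("waist", "knee_l"), ("knee_l", "foot_l"),
    ("waist", "knee_r"), ("knee_r", "foot_r"),
]

def generate_skeletons_top_down(image, detect_people, estimate_joints):
    """Top-down estimation: detect each person P first (bounding box BB),
    then estimate that person's skeletal parts inside the box."""
    skeletons = []
    for person_id, bounding_box in detect_people(image):
        joints = estimate_joints(image, bounding_box)  # part name -> (x, y, z)
        skeletons.append(SkeletonModel(person_id, joints, BONES))
    return skeletons
```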
- note that the skeletal model generation unit 23b may instead generate the skeleton model MDL representing each person P included in the image I by first detecting all the skeletal parts of the human body in the image I, without using the bounding box BB or the like, and then estimating the skeleton of each person P (so-called bottom-up skeleton estimation).
- in this case, the skeletal model generation unit 23b first uses various known object recognition techniques to detect all the three-dimensional position coordinates of each skeletal part of the human body, such as the head, eyes, nose, mouth, shoulders, waist, feet, knees, elbows, hands, and joints, in the image I. After that, the skeletal model generation unit 23b generates the skeleton model MDL of each person P by matching the detected skeletal parts to each person P and joining them together.
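- the bottom-up variant, sketched under the same assumptions (reusing `SkeletonModel` and `BONES` from the sketch above; `detect_all_joints` and `group_by_person` stand in for the part detector and the person-matching step):

```python
def generate_skeletons_bottom_up(image, detect_all_joints, group_by_person):
    """Bottom-up estimation: detect every skeletal part in the whole
    image I first (no bounding box BB), then match the detected parts
    person by person and join them into one skeleton model each."""
    part_candidates = detect_all_joints(image)   # list of (part_name, (x, y, z))
    skeletons = []
    for person_id, joints in group_by_person(part_candidates):
        skeletons.append(SkeletonModel(person_id, joints, BONES))
    return skeletons
```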
- this monitoring system 1 can use object recognition technology using various types of machine learning as an object recognition technology for recognizing the person P in the image I.
- the watching system 1 learns the person P in advance by various types of machine learning, for example, using an image I including the person P as data for learning.
- the watching system 1 can register users of the facility in advance, and can learn each user in the image I so that each user can be identified as an individual.
- for example, the monitoring system 1 uses pre-collected data related to "an image I including a person P who is a user of the facility" as explanatory variables, uses "identification information of the person P corresponding to the image (for example, a user ID)" as an objective variable, and performs machine learning in advance.
- the watching system 1 pre-stores in the storage unit 22 a learned mathematical model for object recognition (person recognition) obtained by this machine learning.
- the skeletal model generation unit 23b recognizes the person P in the image I by classification/regression based on the learned mathematical model for object recognition (person recognition) stored in the storage unit 22 as described above, in a state in which the individual is specified by the identification information, and then generates the skeleton model MDL representing the person P. More specifically, the skeleton model generation unit 23b inputs the image I captured by the imaging unit 11 into the mathematical model for object recognition. As a result, the skeleton model generation unit 23b recognizes the person P in the image I, acquires the identification information specifying the person P, and generates the skeleton model MDL representing the person P.
- the skeletal model generation unit 23b can generate the skeletal model MDL of the person P whose individual is specified by the identification information.
- the skeletal model generating unit 23b stores the generated skeletal model MDL in the storage unit 22 together with the identification information of the person P whose individual is specified.
- after generating the skeleton model MDL, the skeleton model generation unit 23b executes a process of deleting the image I used for the generation, so that no image data representing the image I remains in temporary storage, copies, or the like.
- in subsequent processes, the watching system 1 executes various processes using the skeleton model MDL generated from the image I, not the image I itself (see FIGS. 4, 5, etc.).
- the monitoring system 1 can perform various types of monitoring while ensuring the privacy of the facility user without using an image showing the individual face of the facility user.
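- this privacy-preserving order of operations (skeletonize first, then discard the frame) could look like the following sketch; `skeletonize` and `store` are hypothetical stand-ins:

```python
def process_frame(image, skeletonize, store):
    """Extract skeleton models from a captured frame, then drop the
    frame so no image data showing the user's face is kept; only the
    skeleton models MDL flow into the later determination steps."""
    skeletons = skeletonize(image)  # object recognition + skeleton estimation
    del image                       # drop this function's reference to the pixels
    store(skeletons)                # persist skeleton models, never the image
    return skeletons
```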
- the determining unit 23c is a part having a function capable of executing processing for determining the state of the person P corresponding to the skeletal model MDL generated by the skeletal model generating unit 23b, based on the skeletal model MDL.
- the determining unit 23c determines the state of the person P corresponding to the skeletal model MDL generated by the skeletal model generating unit 23b by distinguishing between a standing state, a sitting state, a falling state, and the like. If a person P is included in the image I of the monitored space SP captured by the imaging unit 11, the determination unit 23c performs these state determinations on the person P.
- for example, the watching system 1 learns the state of the person P in advance by various types of machine learning, using the relative positional relationships and relative distances of the skeletal parts in the skeleton model MDL, the size of the bounding box BB, and the like as parameters.
- for example, the watching system 1 uses pre-collected data related to "the relative positional relationships and relative distances of the skeletal parts in the skeleton model MDL and the size of the bounding box BB" as explanatory variables, uses "the state of the person P corresponding to the skeleton model MDL (a standing state, a sitting state, a falling state, etc.)" as an objective variable, and performs machine learning in advance.
- as the machine learning algorithm, various algorithms applicable to the present embodiment can be used, such as logistic regression, support vector machines, neural networks, and random forests.
- the watching system 1 stores in the storage unit 22 in advance a learned mathematical model for state determination obtained by the machine learning.
- the determination unit 23c determines the state of the person P corresponding to the skeleton model MDL by classification/regression based on the learned mathematical model for state determination stored in the storage unit 22 as described above. More specifically, the determination unit 23c inputs the relative positional relationships and relative distances of the skeletal parts and the size of the bounding box BB, obtained from the skeleton model MDL of the person P included in the actually captured image I, into the mathematical model for state determination. Thereby, the determination unit 23c distinguishes and determines the state of the person P corresponding to the skeleton model MDL (a standing state, a sitting state, a falling state, etc.).
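- a sketch of this classification step, assuming scikit-learn's logistic regression (one of the algorithms named above) as the learned mathematical model and pairwise joint distances plus the bounding-box size as the explanatory variables; the feature layout and labels are illustrative assumptions:

```python
import itertools
import math

from sklearn.linear_model import LogisticRegression  # one of the algorithms named above

def state_features(skeleton, bbox_width, bbox_height):
    """Explanatory variables: pairwise distances between skeletal parts
    (relative positional relationship / relative distance) plus the
    size of the bounding box BB."""
    names = sorted(skeleton.joints)  # fixed order -> stable feature layout
    distances = [math.dist(skeleton.joints[a], skeleton.joints[b])
                 for a, b in itertools.combinations(names, 2)]
    return distances + [bbox_width, bbox_height]

# Offline, in advance: X_train holds feature rows from labelled skeleton
# models, y_train holds state labels such as "standing" / "sitting" / "fallen".
#   model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Online: state = model.predict([state_features(mdl, w, h)])[0]
```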
- in this manner, the determination unit 23c of the present embodiment determines the state of the person P corresponding to the skeleton model MDL based on the skeleton model MDL generated by the skeleton model generation unit 23b, without using the image I used to generate the skeleton model MDL.
- the determination unit 23c of the present embodiment can also execute a process of determining contact between a plurality of persons P based on whether or not the skeleton models MDL representing the persons P overlap and on the position of each person P in the imaging depth direction X detected by the position detector 12.
- specifically, the determination unit 23c determines the contact between the first person P1 and the second person P2 based on the presence or absence of overlap between the first skeleton model MDL1 representing the first person P1 and the second skeleton model MDL2 representing the second person P2, the position of the first person P1 in the imaging depth direction X detected by the position detector 12, and the position of the second person P2 in the imaging depth direction X detected by the position detector 12.
- the determination unit 23c determines that the first person P1 and the second person P2 are not in contact with each other when there is no overlap between the skeleton model MDL1 representing the person P1 and the skeleton model MDL2 representing the person P2.
- when there is an overlap between the skeleton model MDL1 representing the person P1 and the skeleton model MDL2 representing the person P2, the determination unit 23c determines the actual overlap between the person P1 and the person P2, in other words, the contact (interference) between the person P1 and the person P2, based on the positions of the persons P1 and P2 in the imaging depth direction X detected by the position detector 12.
- the contact range is a position range set in advance for determining contact (interference) between persons P, with the position of one person P in the imaging depth direction X as a reference. When the position of the other person P in the imaging depth direction X is within this contact range, the persons P are regarded as being in contact with each other.
- when there is an overlap between the skeleton model MDL1 and the skeleton model MDL2 but the position of the person P1 in the imaging depth direction X and the position of the person P2 in the imaging depth direction X are outside the contact range, the determination unit 23c can determine that the person P1 and the person P2 are not in contact.
- when there is an overlap between the skeleton model MDL1 representing the person P1 and the skeleton model MDL2 representing the person P2, and the position of the person P1 in the imaging depth direction X and the position of the person P2 in the imaging depth direction X are within the contact range, the determination unit 23c determines that the first person P1 and the second person P2 are in contact with each other.
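- the two-stage determination described above reduces to a short predicate; in this sketch the skeleton-model overlap check is approximated by an axis-aligned bounding-box test, and the 0.5 m contact range is a made-up placeholder (the publication fixes no concrete value):

```python
CONTACT_RANGE_M = 0.5  # placeholder for the pre-set contact range along X

def boxes_overlap(bb1, bb2):
    """Axis-aligned overlap test between two boxes (x, y, w, h) in the
    image plane, standing in for the skeleton-model overlap check."""
    x1, y1, w1, h1 = bb1
    x2, y2, w2, h2 = bb2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

def in_contact(bb1, depth1, bb2, depth2):
    """No overlap -> not in contact; overlap but depth positions outside
    the contact range -> not in contact; otherwise -> in contact."""
    if not boxes_overlap(bb1, bb2):
        return False
    return abs(depth1 - depth2) <= CONTACT_RANGE_M
```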
- when the determination unit 23c determines that the first person P1 and the second person P2 are in contact, the information processing unit 23a of the present embodiment may store situation data representing the situation at the time of contact in the storage unit 22 as a record.
- the situation data representing the situation at the time of contact typically includes data representing the movement of the skeletal model MDL until the first person P1 and the second person P2 come into contact with each other.
- the skeletal model MDL until the contact between the first person P1 and the second person P2 includes the skeletal model MDL1 of the first person P1 and the skeletal model MDL2 of the second person P2.
- the information processing unit 23a may also store, in the storage unit 22, identification information representing the persons P1 and P2 involved in the contact in association with the above data, as part of the situation data representing the situation at the time of contact.
- the information processing section 23a can prevent the image I used for generating the skeleton model MDL from being stored in the storage section 22 as a record, as described above.
- the watching system 1 can save the movement of the skeleton model MDL until the persons P come into contact with each other as a record in the storage unit 22 while ensuring the privacy of the person P.
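- recording "the movement of the skeleton models until contact" suggests a rolling buffer that is flushed to storage when the determination fires; a sketch, with the buffer length and file format chosen arbitrarily:

```python
import json
import time
from collections import deque

HISTORY_FRAMES = 150  # arbitrary: a few seconds of skeleton motion
history = deque(maxlen=HISTORY_FRAMES)  # rolling pre-contact motion

def record_frame(skeletons):
    """Append one frame of skeleton models to the rolling history."""
    history.append([{"person": s.person_id, "joints": s.joints}
                    for s in skeletons])

def save_contact_record(p1_id, p2_id, path="contact_record.json"):
    """On a contact determination, persist the identification information
    of the persons involved together with the skeleton-model motion
    leading up to the contact; the camera images are never stored."""
    record = {"time": time.time(),
              "persons": [p1_id, p2_id],
              "skeleton_motion": list(history)}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f)
```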
- the operation processing unit 23d is a part having a function capable of executing processing for controlling the operation of each unit based on the determination result by the detection device 20.
- the operation processing unit 23d of the present embodiment can execute a process of transferring the situation data to another device other than the detecting device 20 based on the determination result by the determination unit 23c.
- for example, the operation processing unit 23d controls the communication unit 21a based on the determination result by the determination unit 23c and transfers the situation data corresponding to the determination result to the terminal device 30, thereby notifying facility staff or the like of the contact between the persons P.
- the terminal device 30 may store the received situation data at the time of contact in the storage unit 32 and save it as a record.
- the terminal device 30 may, for example, display the movement of the skeletal model MDL until the persons P come into contact with each other via the display 34, and allow facility personnel or the like to confirm the situation at the time of contact.
- the processing function of transferring the situation data to another device by the operation processing section 23d may be realized by the information processing section 23a described above.
- next, an example of control in the watching system 1 will be described with reference to the flowchart of FIG. 8. First, the information processing section 23a of the detecting device 20 controls the imaging section 11 to capture an image I of the monitored space SP, and stores the captured image information in the storage section 22 (step S1).
- the skeleton model generation unit 23b of the detection device 20 detects the position of each skeleton part of the person P by object recognition and skeleton estimation based on the image I of the monitored space SP stored in the storage unit 22. (Step S2), a skeletal model MDL of the person P is generated (Step S3).
- the skeletal model generation unit 23b erases the image used to generate the skeletal model MDL (step S4) so that no image data representing the image remains.
- the processing unit 23 of the detecting device 20 performs various processes using the skeleton model MDL generated from the image I without using the image I used to generate the skeleton model MDL in each subsequent process.
- next, the determining unit 23c of the detecting device 20 determines, based on the skeleton models MDL generated in the process of step S3, whether the skeleton models MDL representing a plurality of persons P overlap each other (step S5).
- when determining that the skeleton models MDL representing the plurality of persons P do not overlap each other (step S5: No), the determination unit 23c determines that the plurality of persons P are not in contact with each other (step S6), ends the current control cycle, and shifts to the next control cycle.
- when determining in step S5 that the skeleton models MDL representing the plurality of persons P overlap each other (step S5: Yes), the determination unit 23c determines, based on the detection result of the position detector 12, whether the positions of the plurality of persons P in the imaging depth direction X are within the contact range (step S7).
- when determining in step S7 that the positions of the plurality of persons P in the imaging depth direction X are outside the contact range (step S7: No), the determination unit 23c determines that the plurality of persons P are not in contact (step S6), ends the current control cycle, and shifts to the next control cycle. When determining that the positions are within the contact range (step S7: Yes), the determination unit 23c determines that the plurality of persons P are in contact with each other (step S8).
- in this case, the information processing unit 23a associates data representing the movement of the skeleton models MDL until the persons P came into contact with identification information representing the persons P involved in the contact, stores them in the storage unit 22 as situation data representing the situation at the time of contact, and saves them as a record.
- the operation processing unit 23d of the detection device 20 then controls the communication unit 21a to transfer the contact situation data to the terminal device 30, notifies facility staff or the like via the terminal device 30 that the persons P have come into contact (step S9), ends the current control cycle, and shifts to the next control cycle.
- the terminal device 30 causes the storage unit 32 to store the received situation data at the time of contact and saves it as a record.
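- pulling the pieces together, one control cycle of FIG. 8 could be sketched as below, reusing the helpers from the earlier sketches; every callable argument is a stand-in, and the step numbers in the comments map to the flowchart:

```python
import itertools

def control_cycle(capture, skeletonize, bbox_of, depth_of,
                  save_record, notify):
    """One control cycle of FIG. 8 (all arguments are stand-ins)."""
    image = capture()                  # S1: capture image I
    skeletons = skeletonize(image)     # S2-S3: parts -> skeleton models MDL
    del image                          # S4: erase the image data
    record_frame(skeletons)            # keep the rolling motion history
    for a, b in itertools.combinations(skeletons, 2):
        if not boxes_overlap(bbox_of(a), bbox_of(b)):         # S5
            continue                                          # S6: no contact
        if abs(depth_of(a) - depth_of(b)) > CONTACT_RANGE_M:  # S7
            continue                                          # S6: no contact
        save_record(a.person_id, b.person_id)  # S8: contact -> record
        notify(a.person_id, b.person_id)       # S9: notify via terminal device 30
```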
- in the watching system 1 described above, the imaging unit 11 captures an image I of the monitored space SP, and the skeleton model generation unit 23b generates skeleton models MDL representing the persons P included in the image I. Then, based on the skeleton models MDL generated by the skeleton model generation unit 23b and the positions of the persons P detected by the position detector 12, the determination unit 23c determines whether the persons P corresponding to the skeleton models MDL are in contact. As a result, the watching system 1 can properly grasp the contact situation of the persons P. For example, when a contact or interference accident between users occurs in a welfare facility, the watching system 1 can use the determination result by the determination unit 23c as material for determining the cause.
- since the monitoring system 1 can grasp the state of the person P based on the skeleton model MDL generated from the image I rather than on the image I itself captured by the imaging unit 11, it can properly grasp the situation at the time of contact with a reduced amount of data and a reduced calculation load.
- moreover, since the monitoring system 1 can grasp the situation at the time of contact based on the skeleton model MDL generated from the image I rather than on the image I itself captured by the imaging unit 11, it can properly grasp the situation at the time of contact while ensuring the privacy of the facility users, as described above. As a result, the watching system 1 can, for example, reduce the psychological pressure associated with installation of the installation device 10, and can be a system for which consent for installation is easy to obtain.
- the monitoring system 1 described above includes storage units 22 and 32 that store movements of the skeletal models MDL1 and MDL2 until the first person P1 and the second person P2 come into contact with each other.
- thereby, the watching system 1 can save the movement of the skeleton models MDL1 and MDL2 until the person P1 and the person P2 came into contact as a record while ensuring the privacy of the persons P.
- the monitoring system 1 described above determines, by the determination unit 23c, that a plurality of persons P are not in contact when the skeleton models MDL representing the persons P do not overlap. Thereby, the watching system 1 can easily determine, from the overlap of the skeleton models MDL, that the persons P have not made contact. When the determination unit 23c determines that the skeleton models MDL representing the plurality of persons P overlap but the positions of the persons P in the imaging depth direction X are outside the contact range, the monitoring system 1 determines that the persons P are not in contact.
- conversely, when the skeleton models MDL overlap and the positions of the persons P in the imaging depth direction X are within the contact range, the watching system 1 determines that the persons P are in contact. As a result, the watching system 1 can accurately determine contact and interference between persons P, and can appropriately grasp the contact situation of the persons P as described above.
- the storage units 22 and 32 may also store, for example, audio data collected by the microphone 15 as situation data, so that the situation before and after the contact can be complemented by audio.
- each processing function of the processing units 23 and 33 may be realized by combining a plurality of independent processors and having each processor execute a program. Moreover, the processing functions of the processing units 23 and 33 may be appropriately distributed or integrated in a single or a plurality of processing circuits and implemented. Further, the processing functions of the processing units 23 and 33 may be realized entirely or in part by a program, or may be realized by hardware such as wired logic.
- the monitoring system according to this embodiment may be configured by appropriately combining the constituent elements of the embodiments and modifications described above.
- Reference Signs List: 1 monitoring system; 1A detection system; 10 installation device; 11 imaging unit; 12 position detector; 20 detection device; 21, 31 interface unit; 21a, 31a communication unit; 22, 32 storage unit; 23, 33 processing unit; 23a information processing unit; 23b skeleton model generation unit; 23c determination unit; 23d operation processing unit; 30 terminal device; BB bounding box; I image; MDL, MDL1, MDL2 skeleton model; N network; P, P1, P2 person; SP monitored space; X imaging depth direction; Y imaging width direction; Z imaging vertical direction
Landscapes
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Emergency Management (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Alarm Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Emergency Alarm Devices (AREA)
- Image Analysis (AREA)
Claims (3)
- A monitoring system comprising: an imaging unit that captures an image of a monitored space; a skeleton model generation unit that generates a skeleton model representing a person included in the image captured by the imaging unit; a position detector capable of detecting a position, with respect to an imaging depth direction of the imaging unit, of the person corresponding to the skeleton model; and a determination unit that determines contact between a first person and a second person based on presence or absence of overlap between a first skeleton model representing the first person and a second skeleton model representing the second person, a position of the first person with respect to the imaging depth direction detected by the position detector, and a position of the second person with respect to the imaging depth direction detected by the position detector.
- The monitoring system according to claim 1, further comprising a storage unit that stores movement of the skeleton models until the first person and the second person come into contact.
- The monitoring system according to claim 1 or claim 2, wherein the determination unit: determines that the first person and the second person are not in contact when there is no overlap between the first skeleton model and the second skeleton model; determines that the first person and the second person are not in contact when there is an overlap between the first skeleton model and the second skeleton model and the position of the first person with respect to the imaging depth direction and the position of the second person with respect to the imaging depth direction are outside a predetermined contact range; and determines that the first person and the second person are in contact when there is an overlap between the first skeleton model and the second skeleton model and the position of the first person with respect to the imaging depth direction and the position of the second person with respect to the imaging depth direction are within the contact range.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280020914.7A CN116997942A (zh) | 2021-03-18 | 2022-02-14 | Monitoring system (看护系统) |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021044212A JP7326363B2 (ja) | 2021-03-18 | 2021-03-18 | Monitoring system (見守りシステム) |
JP2021-044212 | 2021-03-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022196213A1 true WO2022196213A1 (ja) | 2022-09-22 |
Family
ID=83322242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/005693 WO2022196213A1 (ja) | Monitoring system (見守りシステム) | 2021-03-18 | 2022-02-14 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7326363B2 (ja) |
CN (1) | CN116997942A (ja) |
WO (1) | WO2022196213A1 (ja) |
- 2021-03-18: JP application JP2021044212A, patent JP7326363B2 (ja), status Active
- 2022-02-14: CN application CN202280020914.7A, publication CN116997942A (zh), status Pending
- 2022-02-14: WO application PCT/JP2022/005693, publication WO2022196213A1 (ja), status Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013232181A (ja) * | 2012-04-06 | 2013-11-14 | Canon Inc | Image processing apparatus and image processing method (画像処理装置、画像処理方法) |
JP2018151693A (ja) * | 2017-03-09 | 2018-09-27 | 株式会社デンソーテン | Driving support apparatus and driving support method (運転支援装置および運転支援方法) |
Also Published As
Publication number | Publication date |
---|---|
JP2022143604A (ja) | 2022-10-03 |
CN116997942A (zh) | 2023-11-03 |
JP7326363B2 (ja) | 2023-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110212451A (zh) | 一种电力ar智能巡检装置 | |
JP2018523424A (ja) | モニタリング | |
US10847004B1 (en) | Security surveillance device | |
CN105373784A (zh) | 智能机器人数据处理方法及装置、智能机器人系统 | |
CN107378971A (zh) | 一种智能机器人控制系统 | |
EP3889887A1 (en) | Image generation device, robot training system, image generation method, and image generation program | |
Gomez-Donoso et al. | Enhancing the ambient assisted living capabilities with a mobile robot | |
Sarfraz et al. | A multimodal assistive system for helping visually impaired in social interactions | |
CN107111363B (zh) | 用于监视的方法、装置和系统 | |
Mettel et al. | Designing and evaluating safety services using depth cameras | |
WO2022196214A1 (ja) | 見守りシステム | |
Wengefeld et al. | The morphia project: first results of a long-term user study in an elderly care scenario from robotic point of view | |
Ghidoni et al. | A distributed perception infrastructure for robot assisted living | |
WO2022196213A1 (ja) | 見守りシステム | |
WO2022196212A1 (ja) | 見守りシステム | |
Christian et al. | Hand gesture recognition and infrared information system | |
Balasubramani et al. | Design IoT-Based Blind Stick for Visually Disabled Persons | |
Ismail et al. | Multimodal indoor tracking of a single elder in an AAL environment | |
Galatas et al. | Multi-modal person localization and emergency detection using the kinect | |
US10891755B2 (en) | Apparatus, system, and method for controlling an imaging device | |
US10638092B2 (en) | Hybrid camera network for a scalable observation system | |
Ardiyanto et al. | Autonomous monitoring framework with fallen person pose estimation and vital sign detection | |
Dias et al. | First Aid and Emergency Assistance Robot for Individuals at Home using IoT and Deep Learning | |
Yun et al. | Distributed sensor networks for multiple human recognition in indoor environments | |
Rossi et al. | A Framework for Personalized and Adaptive Socially Assistive Robotics. |
Legal Events
- 121 (EP: the EPO has been informed by WIPO that EP was designated in this application): Ref document number: 22770972; Country of ref document: EP; Kind code of ref document: A1
- WWE (WIPO information: entry into national phase): Ref document number: 202347061232, Country of ref document: IN; Ref document number: 202280020914.7, Country of ref document: CN
- NENP (Non-entry into the national phase): Ref country code: DE
- 122 (EP: PCT application non-entry in European phase): Ref document number: 22770972; Country of ref document: EP; Kind code of ref document: A1