CN111241874A - Behavior monitoring method and device and computer readable storage medium - Google Patents
Behavior monitoring method and device and computer readable storage medium
- Publication number
- CN111241874A (application CN201811436518.4A)
- Authority
- CN
- China
- Prior art keywords
- behavior
- target object
- target
- key point
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
An embodiment of the present application discloses a behavior monitoring method. The behavior monitoring method includes: determining target key points of a target object in a collected real-time video stream; determining behavior information of the target object according to the target key points; and determining a behavior state of the target object according to the behavior information. Embodiments of the present application also disclose a behavior monitoring device and a computer-readable storage medium.
Description
Technical Field
The embodiments of the present application relate to the technical field of computer vision, and relate to, but are not limited to, a behavior monitoring method, a behavior monitoring device, and a computer-readable storage medium.
Background
Related-art technologies for monitoring user behavior include carrier-based monitoring, contour detection, Gaussian background modeling, and somatosensory Kinect sensor technology.
Carrier-based monitoring requires the user to wear a carrier device, which is inconvenient and may endanger special groups. Contour detection can miss all of the fitted coordinates and lose tracking. Gaussian background modeling requires collecting behavior images of the user in advance. Kinect-based monitoring requires, on the one hand, recording behavior videos in advance and, on the other hand, purchasing a Kinect sensor.
Summary of the Invention
Embodiments of the present application provide a behavior monitoring method, a behavior monitoring device, and a computer-readable storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
The present application provides a behavior monitoring method. The method includes: determining target key points of a target object in a collected real-time video stream; determining behavior information of the target object according to the target key points; and determining a behavior state of the target object according to the behavior information.
In the above solution, determining the target key points of the target object in the collected real-time video stream includes: identifying the target object in the video stream and the key points of the target object; and determining the target key points from among the key points according to the application scenario by means of a deep network model.
In the above solution, determining the target key points of the target object in the collected real-time video stream includes: identifying the target object in the video stream and the key points of the target object; partitioning the key points of the target object to obtain at least one partition, each partition containing at least one key point; and selecting the key points of one of the partitions as the target key points.
In the above solution, determining the behavior information of the target object according to the target key points includes: determining coordinate information of the target key points; and determining the behavior information of the target object from the coordinate information by means of a deep network model.
In the above solution, the method further includes: acquiring sample data of different behavior states of an object in different application scenarios; determining behavior information of the object according to the behavior states of the sample data; determining target key points of the object according to the behavior information; determining a correspondence among application scenario, behavior state, behavior information, and target key points; and building the deep network model from the correspondence.
In the above solution, the method further includes: setting an application scenario; correspondingly, determining the behavior state of the target object according to the behavior information includes: determining the behavior state of the target object according to the behavior information and the application scenario.
In the above solution, the method further includes: acquiring a user identifier corresponding to the behavior state; and outputting the behavior state of the target object with the user identifier as the target address.
The present application also provides a behavior monitoring device. The device includes a first determination module, a second determination module, and a third determination module, wherein the first determination module is configured to determine target key points of a target object in a collected real-time video stream; the second determination module is configured to determine behavior information of the target object according to the target key points; and the third determination module is configured to determine a behavior state of the target object according to the behavior information.
The present application also provides a behavior monitoring device, including a processor and a memory for storing a computer program runnable on the processor, wherein the processor, when running the computer program, performs the steps of the behavior monitoring method described in the above solutions as applied to a terminal device.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the behavior monitoring method described in the above solutions as applied to a terminal device.
The behavior monitoring method, device, and computer-readable storage medium provided by the embodiments of the present application determine target key points of a target object in a collected real-time video stream, determine behavior information of the target object according to the target key points, and determine a behavior state of the target object according to the behavior information. In this way, the collected real-time video stream can be used to recognize the user's behavior by determining and monitoring key points of the human body, improving the user experience.
Brief Description of the Drawings
FIG. 1 is a first schematic flowchart of the behavior monitoring method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of human body key points according to an embodiment of the present application;
FIG. 3 is a second schematic flowchart of the behavior monitoring method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of single-person key point monitoring according to an embodiment of the present application;
FIG. 5 is a schematic diagram of multi-person key point monitoring according to an embodiment of the present application;
FIG. 6 is a first schematic structural diagram of the behavior monitoring device according to an embodiment of the present application;
FIG. 7 is a second schematic structural diagram of the behavior monitoring device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic flowchart of the behavior monitoring method in an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step 101: determine target key points of a target object in a collected real-time video stream.
The target key points represent different parts of the target object, for example the neck or the left wrist. The target object and its target key points are determined in the acquired real-time video stream.
In practical applications, a camera can be used to collect the video stream; the collected video stream is processed in real time to determine the target object in the video stream, locate the target object, and determine its target key points.
When determining the target object, the contour information of the objects in the video stream can be determined and compared with preset contour information; an object whose contour information matches the preset contour information is determined as the target object.
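As a minimal sketch of this contour comparison, assuming OpenCV 4.x and Python, the following code matches detected contours against a preset reference contour; the binarization step and the matching threshold are illustrative assumptions rather than part of the original disclosure:

```python
import cv2

def find_target(frame, reference_contour, threshold=0.1):
    # Binarize the frame and extract the outer contours of the objects in it.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # matchShapes returns 0 for identical shapes; smaller is more similar.
        score = cv2.matchShapes(contour, reference_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < threshold:
            return contour  # contour information matches the preset
    return None
```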
In an embodiment, determining the target key points of the target object in the collected real-time video stream includes: identifying the target object in the video stream and the key points of the target object; and determining the target key points from among the key points according to the application scenario by means of a deep network model.
After the camera collects images and the collected real-time video stream is processed to determine the target object, all key points of the target object are obtained; the deep network model then selects, from all key points of the target object, the target key points corresponding to the application scenario. Application scenarios may include elderly monitoring, patient monitoring, and the like; the deep network model selects the corresponding target key points from the key points according to the scenario.
For example, when the target object in the video stream is an elderly person, the key points are the points corresponding to 18 body parts, e.g. key point 1, key point 2, ..., key point 18, where different key points represent different parts. When the application scenario is elderly monitoring, according to the correspondence in the deep network model between the elderly monitoring scenario and target key points, the target key points for this scenario are determined to be 6 of the key points: key points 1, 3, 7, 11, 14, and 16.
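A hedged sketch of this scene-dependent selection follows; the dictionary stands in for the correspondence that the deep network model would supply, and the index lists are taken from the examples in this description:

```python
# Hypothetical scene-to-keypoint correspondence standing in for the model.
SCENE_TARGET_KEYPOINTS = {
    "elderly_monitoring": [1, 3, 7, 11, 14, 16],
    "patient_monitoring": [1, 4, 7, 8, 10, 11],
}

def select_target_keypoints(all_keypoints, scene):
    """all_keypoints: dict mapping keypoint index -> (x, y) coordinate."""
    return {i: all_keypoints[i]
            for i in SCENE_TARGET_KEYPOINTS[scene] if i in all_keypoints}
```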
In an embodiment, determining the target key points of the target object in the collected real-time video stream includes: identifying the target object in the video stream and the key points of the target object; partitioning the key points of the target object to obtain at least one partition, each partition containing at least one key point; and selecting the key points of one of the partitions as the target key points.
After the camera collects images and the collected real-time video stream is processed to determine the target object, all key points of the target object are obtained; all key points of the target object are partitioned into multiple partitions, each containing multiple key points, and the key points of one of these partitions are selected as the target key points.
For example, the key points of the target object determined from the camera include points corresponding to 18 parts: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, left eye, right eye, left ear, and right ear. All key points are partitioned into two partitions: a facial key point partition and a leg key point partition. The key points of the facial partition may include the points corresponding to the nose, left eye, right eye, left ear, and right ear; the key points of the leg partition may include the points corresponding to the right hip, right knee, right ankle, left hip, left knee, and left ankle. When the facial behavior of the target object needs to be determined, the key points of the facial partition can be selected as the target key points; when the leg behavior of the target object needs to be determined, the key points of the leg partition can be selected as the target key points, as in the sketch below.
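The sketch below illustrates the partitioning; the part names mirror the example above, and the data layout is an assumption:

```python
# Partition the 18 keypoints into the facial and leg groups from the example.
PARTITIONS = {
    "face": ["nose", "left_eye", "right_eye", "left_ear", "right_ear"],
    "legs": ["right_hip", "right_knee", "right_ankle",
             "left_hip", "left_knee", "left_ankle"],
}

def keypoints_for_partition(all_keypoints, partition):
    """all_keypoints: dict mapping part name -> (x, y) coordinate."""
    return {name: all_keypoints[name]
            for name in PARTITIONS[partition] if name in all_keypoints}
```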
Step 102: determine behavior information of the target object according to the target key points.
The behavior information may include the slope, the angle, and other information characterizing the behavior state of the target object.
After the target key points are selected from the key points of the target object, the behavior information of the target object is determined from the target key point information by means of the deep network model.
In an embodiment, determining the behavior information of the target object according to the target key points includes: determining coordinate information of the target key points; and determining the behavior information of the target object from the coordinate information by means of a deep network model.
According to the target key points, the coordinate information of the target key points is determined from the acquired real-time video stream; the coordinate information of the target key points is input into the deep network model, which substitutes it into the fitting equation of a linear regression model and solves it to determine the behavior information of the target object.
The fitting equation is shown in formula (1):

$$y_i = \beta_0 + \beta_1 x_i + e_i \tag{1}$$

where $x_i$ and $y_i$ are the coordinate information of a target key point, $e_i$ is the error between the determined coordinate information of the target key point and the actual coordinate information, and $\beta_0$ and $\beta_1$ are the regression coefficients.
Here, the coordinates of the target key points of the target object are substituted for $x_i$ and $y_i$ in the above fitting equation, $\beta_1$ is solved for by the least squares method, and $\beta_1$ is taken as the behavior information of the target object.
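A minimal sketch of this least-squares fit, assuming NumPy; numpy.polyfit with degree 1 performs exactly the ordinary least-squares fit of formula (1):

```python
import numpy as np

def behavior_slope(keypoints):
    """keypoints: iterable of (x, y) target keypoint coordinates."""
    xs = np.array([p[0] for p in keypoints], dtype=float)
    ys = np.array([p[1] for p in keypoints], dtype=float)
    beta1, beta0 = np.polyfit(xs, ys, deg=1)  # fits y ~ beta1 * x + beta0
    return beta1  # the slope is taken as the behavior information
```

At least two distinct x-coordinates are needed for the fit to be well defined.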
Step 103: determine the behavior state of the target object according to the behavior information.
The behavior state of the target object is determined from the acquired behavior information, where the behavior state may include user behaviors such as leaning, falling, and standing.
For example, if the acquired behavior information of the target object is the slope, and the angle corresponding to the slope equals 45 degrees, the behavior state of the target object can be determined to be leaning.
In an embodiment, the method further includes: acquiring sample data of different behavior states of an object in different application scenarios; determining behavior information of the object according to the behavior states of the sample data; determining target key points of the object according to the behavior information; determining the correspondence among application scenario, behavior state, behavior information, and target key points; and building the deep network model from the correspondence.
Before the target key points of the target object in the acquired video stream are determined, the method further includes modeling based on the object's behavior. The behavior information may include the slope, the angle, and other information characterizing the behavior state of the target object; the behavior state may include user behaviors such as leaning, falling, and standing.
In practical applications, sample data of different behavior states of the object in different application scenarios are acquired, the behavior states of the sample data are analyzed to determine the behavior information of the object, and the target key points of the object are determined from the behavior information using a linear regression model; the correspondence among application scenario, behavior state, behavior information, and target key points can thereby be obtained and the deep network model determined. Application scenarios may include elderly monitoring, patient monitoring, and other monitoring scenarios.
For example, sample data of an elderly person's leaning body state in the elderly monitoring scenario are acquired and analyzed to determine the slope of the body in the leaning state, i.e., the behavior information; a fitting equation is established using the linear regression model, and the slope is substituted into the fitting equation and solved to obtain the coordinates of the target key points, thereby establishing the correspondence among application scenario, behavior state, behavior information, and target key points, from which the deep network model is built, as sketched below.
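One plausible reading of this sample-driven modeling step, sketched below: fit a slope for each labelled sample and store the per-state inclination-angle range as the correspondence entry. All names here are assumptions:

```python
import math
import numpy as np

def build_correspondence(samples):
    """samples: list of (scene, state, keypoints), keypoints as (x, y) pairs."""
    table = {}
    for scene, state, keypoints in samples:
        xs, ys = zip(*keypoints)
        slope, _ = np.polyfit(xs, ys, deg=1)
        angle = math.degrees(math.atan(abs(slope)))
        lo, hi = table.get((scene, state), (angle, angle))
        table[(scene, state)] = (min(lo, angle), max(hi, angle))
    return table  # (scene, state) -> observed inclination-angle range
```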
In an embodiment, the method further includes: setting an application scenario; correspondingly, determining the behavior state of the target object according to the behavior information includes: determining the behavior state of the target object according to the behavior information and the application scenario.
An application scenario is set; when the behavior state of the target object is determined from the behavior information, the behavior state can also be determined jointly from the behavior information and the application scenario.
For example, the set application scenario is patient monitoring, the acquired behavior information of the patient is the slope, and the angle corresponding to the slope is less than 45 degrees; in this patient monitoring scenario, an angle corresponding to the slope of less than 45 degrees indicates that the patient's behavior state is a fall.
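The following sketch combines the behavior information with the configured application scenario; the 45-degree rules come from the examples in this description, while the extra band and the function name are illustrative assumptions:

```python
import math

def behavior_state(slope, scene="patient_monitoring"):
    angle = math.degrees(math.atan(abs(slope)))  # inclination of fitted line
    if scene == "patient_monitoring" and angle < 45:
        return "fallen"    # rule from the patient monitoring example above
    if angle < 60:
        return "leaning"   # illustrative band around the 45-degree example
    return "standing"
```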
In an embodiment, the method further includes: acquiring a user identifier corresponding to the behavior state; and outputting the behavior state of the target object with the user identifier as the target address.
The user identifier may include a mobile phone number, a WeChat account, and the like. User identifiers are preset; after the behavior state of the target object is determined, the user identifier corresponding to that behavior state is acquired. For example, when the behavior state of the target object is leaning, the corresponding user identifier is that of a family member; when the behavior state of the target object is a fall, the corresponding user identifier is a rescue number.
With the user identifier as the target address, the behavior state of the target object is sent to the user identifier associated with the target object, or a call between the user identifier and the target object is initiated according to the acquired user identifier.
For example, when it is determined that the behavior state of the target object is a fall, a call between the user identifier and the target object is initiated according to the user identifier, or, with the user identifier as the target address, the behavior state of the target object is sent to the user identifier associated with the target object.
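An illustrative sketch of the notification step follows; send_sms and place_call are hypothetical stand-ins for whatever messaging and telephony interfaces are actually available, and the contact mapping mirrors the example above:

```python
# Hypothetical mapping from behavior state to the preset user identifier.
STATE_CONTACTS = {
    "leaning": "family_phone_number",
    "fallen": "rescue_phone_number",
}

def notify(state, send_sms, place_call):
    target = STATE_CONTACTS.get(state)
    if target is None:
        return  # no contact configured for this state
    send_sms(target, f"Monitored person state: {state}")
    if state == "fallen":
        place_call(target)  # also initiate a call for the more urgent state
```

Passing the messaging functions in as arguments keeps the sketch independent of any particular SMS or telephony provider.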
While sending the behavior state of the target object to the user identifier, suggestions for handling that behavior state can also be given.
In the embodiments of the present application, target key points of a target object in a collected real-time video stream are determined, behavior information of the target object is determined according to the target key points, and a behavior state of the target object is determined according to the behavior information. In this way, the collected real-time video stream can be used to recognize the user's behavior by determining and monitoring key points of the human body, and the behavior of a specific part of the target object can be monitored in detail; when abnormal user behavior is detected, an early warning can be issued for that behavior and suggestions given, further improving the user experience.
In this embodiment, taking a patient as the user, the behavior monitoring method provided by the embodiments of the present application is described through a scenario. The main key points in this embodiment correspond to the target key points of the above embodiments.
For monitoring abnormal patient behavior, related technologies mainly include carrier-based monitoring, contour detection, Gaussian background modeling, and Kinect sensor technology.
Carrier-based monitoring requires wearing upper-body carriers, finger carriers, single-node hand carriers, and similar products for limb recognition; it cannot identify behavior in real time directly through a camera, wearing the carrier degrades the user experience, and it may also endanger certain special groups.
When monitoring abnormal behavior with contour detection, contour detection must be performed on the collected monitoring images, and the frame difference method is used to compute the change in the number of pixels in the contour region of, for example, an elderly person to judge the person's state. Moreover, during monitoring there are cases where all possible fitted coordinates are missed and tracking is lost.
When monitoring abnormal behavior with Gaussian background modeling, both motion-sequence techniques and Radio Frequency Identification (RFID) tag techniques require extracting a segment of video, performing Gaussian background modeling on it, and then extracting the contour and matrix feature information of the target in the foreground; the foreground features are compared against a library of abnormal human action features to judge whether abnormal human behavior is present in the foreground. RFID tag techniques additionally require fitting all monitored elderly people with RFID tags, and template-matching methods require collecting behavior images of each monitored person in advance and extracting features to form feature templates.
Kinect sensor technology obtains skeleton information, turns the angular rotation and movement of bone pairs into matrix information, and compares it against recorded dangerous-action information to judge the difference. On the one hand, this technology requires recording dangerous-action information in advance, is not very flexible in mechanism, and requires purchasing a Kinect sensor, increasing hardware cost; on the other hand, Kinect can only track 20 key points on the body, with the head represented by a single point, so its use against complex backgrounds is limited. Using the above methods in practice involves many inconveniences and degrades the user experience.
Related technologies face many problems when applied to monitoring abnormal behavior of patients in the healthcare field: a smart camera alone cannot effectively monitor a patient's limbs and behavior and immediately warn and notify family members and medical staff when danger occurs. The technical shortcomings of the related art are as follows:
1) Some existing techniques rely on wearing upper-body carriers, finger carriers, single-node hand carriers, and similar products for key point detection and subsequent action recognition. The hardware cost is high and the solutions are very limited in real scenarios. Moreover, during remote care, patients may be unsuited to wearing such carriers for their own reasons, and the carriers may themselves cause them harm.
2) Some methods require recording abnormal behavior information in advance as a reference and cannot achieve real-time identification. Some abnormal behaviors of many patients therefore cannot be detected in real time, which is likely to cause serious consequences and prevent timely rescue.
By contrast, the embodiments of the present application require neither that the patient wear a carrier nor that dangerous actions be recorded in advance as a reference. An algorithm capable of recognizing limb behavior is applied in the camera monitoring device; by monitoring the key points of the patient's body, the patient's abnormal behavior can be automatically monitored and identified, and when the patient is endangered by abnormal behavior, an alarm is issued and rescue is contacted in time. Moreover, the embodiments of the present application adopt a new human body detection algorithm that can detect 130 human body key points in real time, including 2×21 basic hand key points and 70 basic facial key points, presented in the form of 3D colored stick figures. It integrates multiple computer vision capabilities such as real-time multi-person 2D pose estimation, dynamic 3D reconstruction, and hand key point detection, and copes well with complex backgrounds, making it more competitive than traditional monitoring applications. As shown in FIG. 2, the human body key points are represented in the form of 3D colored stick figures.
The embodiments of the present application use this new body-language recognition technology to monitor abnormal patient behavior, remedying the shortcoming of traditional image processing, which recognizes only a few patient postures such as lying, sitting, and standing. In traditional monitoring of abnormal patient behavior it is difficult to capture the patient's facial expressions; in the new monitoring application, key points such as the eyes and mouth are depicted, so body language and expressions can be recognized simultaneously. The application can be used in elderly care, assisted rehabilitation, adjunctive treatment of psychiatric disorders, and the adjunctive treatment and care of other mental illnesses.
The embodiments of the present application read the incoming video source from the camera in real time; by identifying limb key points and evaluating abnormal-behavior parameters for a specific scenario, the patient's abnormal behavior is determined and intelligent voice communication with the patient is carried out in time. If the patient's abnormal behavior shows no sign of being resolved within a short time, a warning can be issued immediately and the patient's family and medical staff notified in time by phone, text message, and other means. Furthermore, through limb and facial key point recognition, the user's state of mind can be inferred from the circumstances, bringing the system closer to real human communication and enabling more personalized solutions.
A schematic flowchart of an embodiment of the present application is shown in FIG. 3 and includes:
Step 301: acquire a real-time video stream.
An existing high-definition network camera is used to acquire the real-time video stream.
Step 302: determine the main key points.
Using the deep network model, the human body is located in the acquired video stream, key points are identified, and the main key points are selected from among them. For example, 18 key points are identified: 0 nose, 1 neck, 2 right shoulder, 3 right elbow, 4 right wrist, 5 left shoulder, 6 left elbow, 7 left wrist, 8 right hip, 9 right knee, 10 right ankle, 11 left hip, 12 left knee, 13 left ankle, 14 left eye, 15 right eye, 16 left ear, and 17 right ear, each key point having coordinates $a_i(x_i, y_i)$. The main key points are selected from these key points. If the selected main key points are 1 neck, 4 right wrist, 7 left wrist, 8 right hip, 10 right ankle, and 11 left hip, they can be written as $a_i(x_i, y_i)$, $i \in \{1, 4, 7, 8, 10, 11\}$, where neither the x-coordinate nor the y-coordinate is 0.
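A minimal sketch of this selection step, assuming the detector returns the 18 keypoints as a list of (x, y) pairs with (0, 0) marking an undetected point (an assumption, not stated above):

```python
# Indices follow the 18-point numbering above:
# 1 neck, 4 right wrist, 7 left wrist, 8 right hip, 10 right ankle, 11 left hip.
MAIN_KEYPOINT_IDS = [1, 4, 7, 8, 10, 11]

def select_main_keypoints(keypoints):
    """keypoints: list of 18 (x, y) pairs; (0, 0) marks an undetected point."""
    return {i: keypoints[i] for i in MAIN_KEYPOINT_IDS
            if keypoints[i][0] != 0 and keypoints[i][1] != 0}
```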
Step 303: detect the main key points.
The main key points are detected and their coordinate information is obtained.
Step 304: fit the main key points to obtain the slope.
Linear regression is performed on the main key points according to the linear regression model, and the slope of the regression line is obtained by the least squares method.
The linear regression model is shown in formula (1). Solving formula (1) by the least squares method gives the regression coefficients: $\hat{\beta}_1$ is computed as in formula (2) and $\hat{\beta}_0$ as in formula (3), where $\hat{\beta}_1$ is the computed slope.

$$\hat{\beta}_1 = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i}(x_i - \bar{x})^2} \tag{2}$$

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} \tag{3}$$

Here $x_i$ and $y_i$ are the coordinate information of the target key points, $\bar{x}$ and $\bar{y}$ are their means, $e_i$ is the error between the determined coordinate information of a target key point and the actual coordinate information, and $\beta_0$ and $\beta_1$ are the regression coefficients.
Step 305: judge whether the behavior is abnormal according to the determined slope.
Whether the monitored person's behavior is abnormal is judged from the slope; if the behavior is abnormal, go to step 306; if the behavior is normal, return to step 303.
For example, when the slope $\hat{\beta}_1$ is less than 1, the monitored person's inclination angle is less than 45°, the monitored person is considered to be exhibiting dangerous behavior and may have fainted or fallen, and the behavior is judged abnormal at this point.
Step 306: conduct a voice inquiry in response to the abnormal behavior.
If a voice reply from the monitored person is received within the set time, return to step 303.
If positive-feedback voice information from the monitored person is received and recognized, timing begins and monitoring continues; if within a continuous period the abnormal behavior shows signs of being resolved and the monitored person behaves normally, no danger warning is issued this time and the system continues to monitor the patient's behavior.
Further, if after the positive-feedback voice information has been received and recognized, the abnormal behavior shows no sign of being resolved within the set time, the patient's family and medical staff are notified immediately, and rescue is undertaken at the first opportunity after confirmation.
If no voice reply from the monitored person is received within the set time, go to step 307.
Step 307: acquire the preset user identifier and output the abnormal behavior of the target object to the terminal corresponding to the user identifier.
If no positive feedback from the monitored person is received through speech recognition, the monitored person's family, medical staff, and other relevant personnel are immediately notified by text message and phone; after viewing the live video and confirming the danger, the family and medical staff take rescue measures immediately. A sketch tying steps 303 to 307 together follows below.
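A hedged end-to-end sketch of the loop across steps 303 to 307; read_keypoints, fit_slope, ask_voice, and notify_contacts are hypothetical hooks for the detection, fitting, voice inquiry, and notification steps, and the slope threshold of 1 follows the step 305 example:

```python
import time

def monitor(read_keypoints, fit_slope, ask_voice, notify_contacts,
            reply_timeout=30, frame_interval=0.2):
    while True:
        slope = fit_slope(read_keypoints())        # steps 303-304
        if abs(slope) >= 1:                        # step 305: behavior normal
            time.sleep(frame_interval)
            continue
        if ask_voice(timeout=reply_timeout):       # step 306: positive reply
            continue                               # keep monitoring
        notify_contacts("possible fall detected")  # step 307: escalate
        time.sleep(reply_timeout)                  # avoid re-alerting each frame
```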
As shown in FIG. 4, a schematic diagram of single-person key point monitoring according to an embodiment of the present application, the key points of a human body in the fallen state are: 1 neck, 4 right wrist, 7 left wrist, 8 right hip, 10 right ankle, and 11 left hip. As shown in FIG. 5, a schematic diagram of multi-person key point monitoring according to an embodiment of the present application, different key points are determined according to the different monitoring needs of different people.
The features of the embodiments of the present application in use include:
(1) Human body key points can be applied to different scenarios.
In the new abnormal-patient-behavior monitoring application, after the camera captures a 2D image, the key point detector identifies and marks the characteristic parts of the body, helping the body tracking algorithm understand how each pose appears from different angles, presented in the form of 3D colored stick figures. Applying these human body key points through the relevant rules to different dangerous scenarios makes practical deployment more flexible.
(2) Key point tracking is more fine-grained.
Compared with traditional methods, which can only track 20 key points, the tracking in the new abnormal-patient-behavior monitoring application is much more detailed. For the same action, a traditional method perceives that a person is raising a hand, while the new application, based on the new-generation recognition system, can observe that the person is actually pointing a finger at something.
(3) Facial tracking is added to recognize expressions.
For facial tracking, the entire head is just one point in traditional methods; in the new abnormal-patient-behavior monitoring application, the eyebrows, eyes, nose, mouth, and so on can be depicted by dozens of key points, so both body language and expressions can be recognized.
(4) A database and model are built from abnormal patient behavior.
A database is built from observed patient behavior, and big-data modeling techniques are used to build models for the specific behaviors of a given disease and to identify the deeper meaning the behavior represents. When a warning is issued, the model can directly identify the behavior, assess the danger level, and give recommended measures.
The embodiments of the present application do not require the patient to wear carriers or tags in use, nor that the monitored person's abnormal behaviors be collected in advance; they do not depend on special camera hardware and do not require purchasing devices such as a Kinect sensor: monitoring can be carried out simply through key point acquisition and key point fitting. The approach is more broadly applicable and improves the experience of patients, family members, and medical staff; it allows real-time monitoring without delay when a patient's abnormal behavior creates an emergency, and it suits multi-scenario applications against complex backgrounds.
On the basis of identifying the coordinates of the key points of the patient's limbs, the embodiments of the present application further discriminate the patient's abnormal behavior through targeted definitions of abnormal behavior for different scenarios, for example the elderly or psychiatric patients. The embodiments apply this new real-time method of recognizing human limb actions from human body key points to elderly care, assisted rehabilitation, and other areas of the healthcare field, combined with related technologies such as intelligent speech recognition, which has positive and novel significance for industry applications.
This embodiment provides a behavior monitoring device. As shown in FIG. 6, the behavior monitoring device 60 includes a first determination module 601, a second determination module 602, and a third determination module 603, wherein:
the first determination module 601 is configured to determine target key points of a target object in a collected real-time video stream;
the second determination module 602 is configured to determine behavior information of the target object according to the target key points;
the third determination module 603 is configured to determine a behavior state of the target object according to the behavior information.
In an embodiment, the first determination module 601 is configured to: identify the target object in the video stream and the key points of the target object; and determine the target key points from among the key points according to the application scenario by means of a deep network model.
In an embodiment, the first determination module 601 is configured to: identify the target object in the video stream and the key points of the target object; partition the key points of the target object to obtain at least one partition, each partition containing at least one key point; and select the key points of one of the partitions as the target key points.
In an embodiment, the second determination module 602 is configured to: determine coordinate information of the target key points; and determine the behavior information of the target object from the coordinate information by means of a deep network model.
In an embodiment, the behavior monitoring device 60 further includes a modeling module 604 configured to: acquire sample data of different behavior states of an object in different application scenarios; determine behavior information of the object according to the behavior states of the sample data; determine target key points of the object according to the behavior information; determine the correspondence among application scenario, behavior state, behavior information, and target key points; and build the deep network model from the correspondence.
In an embodiment, the behavior monitoring device 60 further includes a setting module 605 configured to set the application scenario; correspondingly, the third determination module 603 is configured to determine the behavior state of the target object according to the behavior information and the application scenario.
In an embodiment, the behavior monitoring device 60 further includes a notification module 606 configured to: acquire a user identifier corresponding to the behavior state; and output the behavior state of the target object with the user identifier as the target address.
It should be noted that when the behavior monitoring device provided by the above embodiment performs behavior monitoring, the division into the above program modules is only used as an example; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above. In addition, the behavior monitoring device provided by the above embodiment and the behavior monitoring method embodiments belong to the same concept; see the method embodiments for the specific implementation process, which is not repeated here.
Based on the foregoing embodiments, an embodiment of the present application provides a behavior monitoring device. As shown in FIG. 7, the device includes a processor 702 and a memory 701 for storing a computer program that can run on the processor 702, wherein the processor 702, when running the computer program, implements:
determining target key points of a target object in a collected real-time video stream;
determining behavior information of the target object according to the target key points;
determining a behavior state of the target object according to the behavior information.
The methods disclosed in the above embodiments of the present application may be applied in, or implemented by, the processor 702. The processor 702 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 702 or by instructions in the form of software. The processor 702 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 702 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory 701, and the processor 702 reads the information in the memory 701 and completes the steps of the foregoing methods in combination with its hardware.
It can be understood that the memory (memory 701) of the embodiments of the present application may be volatile memory or non-volatile memory, and may include both volatile and non-volatile memory. The non-volatile memory may be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), ferromagnetic random access memory (FRAM), Flash Memory, magnetic surface memory, optical disc, or Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be magnetic disk storage or magnetic tape storage. The volatile memory may be Random Access Memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memories described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of memory.
It should be pointed out here that the above description of the terminal embodiment is similar to the above method description and has the same beneficial effects as the method embodiments, so it is not repeated. For technical details not disclosed in the terminal embodiments of the present application, those skilled in the art should refer to the description of the method embodiments of the present application; to save space, details are not repeated here.
In an exemplary embodiment, an embodiment of the present application further provides a computer-readable storage medium, for example a memory 701 storing a computer program; the computer program can be processed by the processor 702 to complete the steps of the foregoing method. The computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, CD-ROM, or other memory.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when processed by a processor, the computer program implements:
determining target key points of a target object in a collected real-time video stream;
determining behavior information of the target object according to the target key points;
determining a behavior state of the target object according to the behavior information.
It should be pointed out here that the above description of the computer-medium embodiment is similar to the above method description and has the same beneficial effects as the method embodiments, so it is not repeated. For technical details not disclosed in the terminal embodiments of the present application, those skilled in the art should refer to the description of the method embodiments of the present application; to save space, details are not repeated here.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811436518.4A CN111241874A (en) | 2018-11-28 | 2018-11-28 | Behavior monitoring method and device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811436518.4A CN111241874A (en) | 2018-11-28 | 2018-11-28 | Behavior monitoring method and device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111241874A (en) | 2020-06-05 |
Family
ID=70863614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811436518.4A | Behavior monitoring method and device and computer readable storage medium (Pending) | 2018-11-28 | 2018-11-28 |
Country Status (1)
Country | Link |
---|---|
CN | CN111241874A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170345181A1 (en) * | 2016-05-27 | 2017-11-30 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method and video monitoring system |
CN107320090A (en) * | 2017-06-28 | 2017-11-07 | 广东数相智能科技有限公司 | A kind of burst disease monitor system and method |
CN108830240A (en) * | 2018-06-22 | 2018-11-16 | 广州通达汽车电气股份有限公司 | Fatigue driving state detection method, device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12094607B2 (en) | Systems and methods to identify persons and/or identify and quantify pain, fatigue, mood, and intent with protection of privacy | |
JP7229174B2 (en) | Person identification system and method | |
US20200205697A1 (en) | Video-based fall risk assessment system | |
US10262196B2 (en) | System and method for predicting neurological disorders | |
Lucey et al. | Automatically detecting pain in video through facial action units | |
US20150320343A1 (en) | Motion information processing apparatus and method | |
Banerjee et al. | Day or night activity recognition from video using fuzzy clustering techniques | |
CN111524608B (en) | Intelligent detection and epidemic prevention system and method | |
WO2022022551A1 (en) | Method and device for analyzing video for evaluating movement disorder having privacy protection function | |
Withanage et al. | Fall recovery subactivity recognition with RGB-D cameras | |
Chiu et al. | Emotion recognition through gait on mobile devices | |
CN113823376A (en) | Intelligent medicine taking reminding method, device, equipment and storage medium | |
Seredin et al. | The study of skeleton description reduction in the human fall-detection task | |
Romaissa et al. | Fall detection using body geometry in video sequences | |
CN109784179A (en) | Intelligent monitor method, apparatus, equipment and medium based on micro- Expression Recognition | |
Nouisser et al. | Deep learning and kinect skeleton-based approach for fall prediction of elderly physically disabled | |
CN116013548B (en) | Intelligent ward monitoring method and device based on computer vision | |
Zhukova et al. | Smart room for patient monitoring based on IoT technologies | |
CN111241874A (en) | Behavior monitoring method and device and computer readable storage medium | |
Lin et al. | An ensemble model using face and pose tracking for engagement detection in game-based rehabilitation | |
Wang et al. | Video-based inpatient fall risk assessment: A case study | |
Jolly et al. | Posture Correction and Detection using 3-D Image Classification | |
Bačić et al. | Towards real-time drowsiness detection for elderly care | |
Hsu et al. | Extraction of visual facial features for health management | |
US20240065596A1 (en) | Method and system for detecting short-term stress and generating alerts inside the indoor environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200605 |