WO2021114766A1 - Method and system for analyzing behavior pattern of person on the basis of depth data - Google Patents

Method and system for analyzing behavior pattern of person on the basis of depth data Download PDF

Info

Publication number
WO2021114766A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
data
behavior
depth data
category
Prior art date
Application number
PCT/CN2020/113952
Other languages
French (fr)
Chinese (zh)
Inventor
邵肖伟
许永伟
Original Assignee
深圳市鸿逸达科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市鸿逸达科技有限公司 filed Critical 深圳市鸿逸达科技有限公司
Publication of WO2021114766A1 publication Critical patent/WO2021114766A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction

Definitions

  • the present invention relates to pedestrian detection and intelligent point cloud analysis technology, and more specifically to a method and system for analyzing personnel behavior patterns based on depth data.
  • the present invention proposes a method for analyzing personnel behavior patterns based on depth data, which includes: S1, deploying multiple distance sensors to collect depth data in a detection area; S2, processing the collected depth data to form structured data; S3, analyzing the behavior pattern of a target, determining abnormal behavior of the target, and raising an alarm if the behavior is abnormal.
  • the present invention also proposes a personnel behavior pattern analysis system based on depth data, which includes a memory storing a computer program and a processor capable of executing the computer program, the computer program, when executed, implementing the following steps: S1, deploy multiple distance sensors to collect depth data in the detection area; S2, process the collected depth data to form structured data; S3, analyze the behavior pattern of the target, determine abnormal behavior of the target, and raise an alarm if the behavior is abnormal.
  • the invention monitors personnel with behavior pattern analysis technology based on depth data. Distance sensors are installed indoors to collect depth data of personnel, intelligent algorithms monitor personnel behavior automatically, and alarms are raised automatically for abnormal behavior.
  • Automatic detection: the current posture of personnel is monitored in real time to detect abnormal behavior; when abnormal behavior occurs, an alarm is raised automatically.
  • Behavior analysis: personnel behavior is monitored continuously to analyze living habits and routines.
  • the beneficial effects of the present invention include: 1. High-precision spatial data collection: millimeter-level recognition accuracy. 2. Strong data resolving ability: accurate object recognition and accurate identification of behaviors and actions. 3. No external lighting is required; the system is not affected by external conditions such as night or fog. 4. Privacy protection: no facial information is recognized. 5. Active intelligent judgment and early warning: structured data, rule-based data checking, and active early warning. 6. Intelligent linkage: traditional security equipment such as video surveillance and audible/visual alarms is driven by the rule-judgment results in coordinated linkage.
  • Fig. 1 is a flowchart of an embodiment of the method of the present invention.
  • Fig. 2 is a flowchart of an embodiment of the method of the present invention.
  • the method of one embodiment of the present invention is as follows.
  • S1 deploy multiple distance sensors to collect depth data in the detection area.
  • the distance sensors are mounted overhead.
  • the sensors raise the existing application level from two dimensions to three-dimensional space, present managed objects from a spatial perspective, strengthen security management with structured spatial data, and realize active security supported by accurate data.
  • Each distance sensor acts as a client and communicates over a wireless LAN, and the data collected by each sensor is returned to the host server.
  • the host server can view the working status of every sensor and, using a data assimilation algorithm, collect and integrate the data of all sensors to form a bird's-eye view of the flow of people in the entire area in real time.
  • the distance sensor emits and receives invisible light beams through the area scanning method, and obtains the scanning data of each frame.
  • the data includes the distance from the scanned object to the sensor, scanning time and scanning frequency.
  • before data collection, the distance sensors need to be processed as follows:
  • Synchronization: before data collection starts, each client must synchronize its time. Even if the devices are time-synchronized before acquisition, the acquisition PC and sensors still accumulate slight time errors during long-term operation; these errors need to be removed manually afterwards so that the data timestamps are fully consistent.
  • Registration: the point clouds from multiple sensors are placed into one world coordinate frame to form a bird's-eye view; planar data points, such as ground points, are selected from the raw point cloud space and mapped onto a common plane, the least-squares method minimizes the distance between the ground points and this plane to obtain a transformation parameter, and the relative coordinates of each sensor are then calculated.
  • the step S2 includes:
  • the depth data is preprocessed, including: 1) Data denoising: because of hardware limitations and the actual environment, the depth data collected by the depth sensor contains some noise. Based on the neighborhood information of each data point, a filter operator of a certain size is applied as a two-dimensional convolution to remove invalid data points, which improves target detection accuracy. 2) Data normalization: because of the actual environment, the depth data collected by the depth sensor contains some invalid distance values. An effective detection range is selected and the distance values are mapped linearly into [0, 1]; this normalized depth data removes the influence of differing target distances on detection and improves target detection accuracy.
  • S3: Analyze the behavior pattern of the target, determine whether the target's behavior is abnormal, and raise an alarm if it is. Specifically, the depth data is compared against an established background model library to derive the behavior pattern information of the target, thereby determining whether the target is in a sitting, standing, or lying posture.
  • S31: Collect and label the behavior patterns of the target. Deep-learning detection and analysis is performed on the current depth data, the acquired structured data is labeled manually according to the target task, the label category is sitting, standing, or lying, and the label information is the boundary of the target in the depth image.
  • the deep neural network can adopt the SSD network structure (see Liu W, Anguelov D, Erhan D, et al., SSD: Single Shot MultiBox Detector), with the network parameters trained continuously on the gap between the network output and the labeled ground truth.
  • the purpose of the present invention is to detect the location and category information of a person, and the deep learning SSD network structure combines convolution operation and regression analysis to obtain the location and category of the target.
  • the SSD network structure uses convolution operation to obtain the feature information of the target, regression analysis to obtain the location and category of the target, and then combines the multi-scale feature information to precisely determine the target location.
  • S33 Use the obtained behavior pattern classifier to predict the behavior category of the target in the depth data, judge the abnormal behavior according to the detected category result, and send an alarm when the abnormal behavior is detected.
  • once the spatial coordinate position and posture category of the target are obtained, the position and posture of personnel in the detection area can be determined, and personnel behavior can then be judged from the time-sequential spatial position and posture information. Finally, the behavior is checked, and when non-compliant or prohibited behavior occurs, the alarm mechanism is triggered in real time.
  • for example, in a monitored room, if a person attempts hanging, the background detection software detects that the person remains at an abnormal height at a certain position for a sustained period and triggers an attempted-hanging alarm in real time; if a person in a monitored room does not rest within the prescribed time, the background detection software detects that the person has not lain down to rest at the prescribed time and triggers a non-compliant-behavior alarm in real time.
  • the detection results of multiple distance sensors are fused and the target's living habits are analyzed from long-term behavior data. After the target's spatial position and posture category are obtained, the state of the target person can be monitored in real time and the target's state information over a long period can also be accumulated. Analysis of the target's long-term state information includes motion trajectory analysis, rest habit analysis, and communication habit analysis.
  • the present invention can intelligently recognize the posture of indoor personnel, and indoor monitoring can be applied in the following scenarios: 1. Indoor monitoring and early warning in schools, kindergartens, judicial institutions, and government agencies: real-time detection and early warning of crowd gathering, abnormal crowd behavior, abnormal individual behavior, and so on. 2. Early warning of mass incidents and abnormal events in traffic hubs and public places: real-time awareness of passenger distribution, regional density, and movement speed, planning of scientific routes for emergency crowd guidance, and boundary-crossing, illegal-intrusion, and gathering alarms linked with the broadcasting system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Alarm Systems (AREA)

Abstract

A method for analyzing a behavior pattern of a person on the basis of depth data, comprising: deploying a plurality of distance sensors, and collecting depth data in a detection region (S1); processing the collected depth data, and forming structured data (S2); and analyzing a behavior pattern of a target, determining whether or not the target has displayed abnormal behavior, and issuing an alert if so (S3). The method can achieve the following objectives: (1) automatic detection: monitoring the current posture of a person in real time to detect abnormal behavior, and automatically issuing an alert upon detection of abnormal behavior; and (2) behavior analysis: continuously monitoring behavior of a person to analyze lifestyle and habits of said person.

Description

Method and system for analyzing personnel behavior patterns based on depth data
Technical Field
The present invention relates to pedestrian detection and intelligent point cloud analysis technology, and more specifically to a method and system for analyzing personnel behavior patterns based on depth data.
Background Art
For video surveillance of people in indoor environments, conventional color cameras are generally used, with personnel behavior monitored manually; some systems apply machine vision to color video to monitor personnel behavior automatically. Using video to monitor personnel and detect abnormal behavior plays an important role in the security field. The existing problems are: 1. In some judicial and security settings, conventional color cameras are unsuitable, and the privacy of monitored persons needs to be protected. 2. Machine vision applied to color video can monitor certain behaviors but cannot locate the spatial position of the monitored person. 3. At night, conventional color cameras cannot monitor personnel behavior. 4. Alarming on abnormal events in video surveillance.
Summary of the Invention
To solve the problems in the prior art, the present invention proposes a method for analyzing personnel behavior patterns based on depth data, comprising: S1, deploying multiple distance sensors to collect depth data in a detection area; S2, processing the collected depth data to form structured data; S3, analyzing the behavior pattern of a target and determining abnormal behavior of the target, and raising an alarm if the behavior is abnormal.
The present invention also proposes a system for analyzing personnel behavior patterns based on depth data, comprising a memory storing a computer program and a processor capable of executing the computer program, the computer program, when executed, implementing the following steps: S1, deploying multiple distance sensors to collect depth data in a detection area; S2, processing the collected depth data to form structured data; S3, analyzing the behavior pattern of a target and determining abnormal behavior of the target, and raising an alarm if the behavior is abnormal.
The invention monitors personnel using behavior pattern analysis based on depth data. Distance sensors are installed indoors to collect depth data of personnel, intelligent algorithms monitor personnel behavior automatically, and alarms are raised automatically for abnormal behavior. The following problems are solved: 1. Automatic detection: the current posture of personnel is monitored in real time to detect abnormal behavior, and an alarm is raised automatically when abnormal behavior occurs. 2. Behavior analysis: personnel behavior is monitored continuously to analyze living habits and routines.
The beneficial effects of the present invention include: 1. High-precision spatial data collection: millimeter-level recognition accuracy. 2. Strong data resolving ability: accurate object recognition and accurate identification of behaviors and actions. 3. No external lighting is required; the system is not affected by external conditions such as night or fog. 4. Privacy protection: no facial information is recognized. 5. Active intelligent judgment and early warning: structured data, rule-based data checking, and active early warning. 6. Intelligent linkage: traditional security equipment such as video surveillance and audible/visual alarms is driven by the rule-judgment results in coordinated linkage.
Brief Description of the Drawings
Fig. 1 is a flowchart of an embodiment of the method of the present invention.
Fig. 2 is a flowchart of an embodiment of the method of the present invention.
Detailed Description
Embodiments of the present invention are described below with reference to the drawings.
As shown in Fig. 1, the method of one embodiment of the present invention is as follows.
S1, deploy multiple distance sensors to collect depth data in the detection area.
The distance sensors are mounted overhead. They raise the existing application level from two dimensions to three-dimensional space, present managed objects from a spatial perspective, strengthen security management with structured spatial data, and realize active security supported by accurate data.
Each distance sensor acts as a client and communicates over a wireless LAN, and the data collected by each sensor is returned to the host server. The host server can view the working status of every sensor and, using a data assimilation algorithm, collect and integrate the data of all sensors to form a bird's-eye view of the flow of people in the entire area in real time.
The distance sensor emits and receives non-visible light beams by area scanning and obtains the scan data of each frame. The data include the distance from the scanned object to the sensor, the scan time, and the scan frequency.
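As a concrete illustration of what one decoded scan frame might contain, the following minimal Python sketch defines a hypothetical `Frame` record; the field names and the example values are illustrative assumptions, not part of any specific sensor's interface.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    """One scan frame returned by a distance sensor (illustrative fields only)."""
    sensor_id: str        # which sensor produced the frame
    timestamp: float      # scan time, seconds since epoch
    scan_rate_hz: float   # scanning frequency of the sensor
    depth: np.ndarray     # H x W array of distances (metres) to the scanned objects

# Example: a 240 x 320 frame filled with a constant 3.5 m range reading
frame = Frame(sensor_id="sensor-01", timestamp=1_600_000_000.0,
              scan_rate_hz=10.0, depth=np.full((240, 320), 3.5))
```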
Preferably, before data collection, the distance sensors are processed as follows:
1. Synchronization: before data collection starts, each client must synchronize its time. Even if the devices are time-synchronized before acquisition, the acquisition PC and sensors still accumulate slight time errors during long-term operation; these errors need to be removed manually afterwards so that the data timestamps are fully consistent.
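One simple way to remove the residual clock error described above is to estimate a per-sensor offset against the host server's clock and shift every timestamp accordingly. The sketch below assumes the offsets have already been measured (for example from a shared synchronization event); the numbers and names are illustrative, not taken from the patent.

```python
# Per-sensor clock offsets (seconds) measured against the host server's clock.
# A positive offset means the sensor's clock runs ahead of the reference.
measured_offsets = {"sensor-01": 0.012, "sensor-02": -0.008}

def align_timestamp(sensor_id: str, timestamp: float) -> float:
    """Shift a sensor timestamp onto the common reference time base."""
    return timestamp - measured_offsets.get(sensor_id, 0.0)

# Usage: frames from different sensors can now be matched by corrected time.
t_aligned = align_timestamp("sensor-01", 1_600_000_000.500)  # -> ...000.488
```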
2. Registration: the point clouds from multiple sensors are placed into one world coordinate frame to form a bird's-eye view. Planar data points, such as ground points, are selected from the raw point cloud space and mapped onto a common plane; the least-squares method minimizes the distance between the ground points and this plane, yielding a transformation parameter from which the relative coordinates of each sensor are calculated.
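A minimal sketch of the least-squares idea behind the registration step: fit a plane to hand-picked ground points, then build a rotation that maps the fitted ground normal onto the world z-axis so each sensor's cloud can be expressed in a common, ground-aligned frame. This illustrates only the plane-fitting portion under those assumptions; the full extrinsic calibration between sensors is not reproduced here.

```python
import numpy as np

def fit_ground_plane(points: np.ndarray):
    """Least-squares fit of z = a*x + b*y + c to N x 3 ground points.
    Returns the unit normal of the fitted plane and the offset c."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    return normal / np.linalg.norm(normal), c

def rotation_to_z(normal: np.ndarray) -> np.ndarray:
    """Rotation matrix that maps the fitted plane normal onto the world z-axis."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    s, cth = np.linalg.norm(v), np.dot(normal, z)
    if s < 1e-9:                       # already aligned
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - cth) / s**2)   # Rodrigues' formula

# ground_pts: hand-selected ground points from one sensor's raw point cloud
ground_pts = np.random.rand(100, 3) * [5.0, 5.0, 0.02]   # nearly flat demo data
n, _ = fit_ground_plane(ground_pts)
R = rotation_to_z(n)                  # apply R to that sensor's whole cloud
```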
S2, process the collected depth data to form structured data. Step S2 includes:
S21, preprocess the depth data, including: 1) Data denoising: because of hardware limitations and the actual environment, the depth data collected by the depth sensor contains some noise. Based on the neighborhood information of each data point, a filter operator of a certain size is applied as a two-dimensional convolution to remove invalid data points, which improves target detection accuracy. 2) Data normalization: because of the actual environment, the depth data collected by the depth sensor contains some invalid distance values. An effective detection range is selected and the distance values are mapped linearly into [0, 1]; this normalized depth data removes the influence of differing target distances on detection and improves target detection accuracy.
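A minimal sketch of the two preprocessing steps: a small 2-D neighborhood filter stands in for the convolutional denoising operator described above, and the frame is clipped to an assumed effective range before scaling to [0, 1]. The 5-metre effective distance and the 3x3 kernel size are illustrative choices, not values given in the patent.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_depth(depth: np.ndarray, max_range_m: float = 5.0) -> np.ndarray:
    """Denoise a depth frame with a 3x3 neighborhood filter, then normalize to [0, 1]."""
    # 1) Data denoising: replace each pixel by the median of its 3x3 neighborhood,
    #    which suppresses isolated invalid points produced by the sensor.
    denoised = median_filter(depth, size=3)
    # 2) Data normalization: keep only the effective detection range and map it
    #    linearly to [0, 1] so that target distance does not bias detection.
    clipped = np.clip(denoised, 0.0, max_range_m)
    return clipped / max_range_m

depth_frame = np.random.uniform(0.0, 8.0, size=(240, 320))   # synthetic raw frame
normalized = preprocess_depth(depth_frame)                    # values now in [0, 1]
```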
S22, for the preprocessed depth data, build a depth data database as needed according to the collection time, data size, depth data, point cloud data, and other information, for use in tasks such as target detection, real-time alarming, and behavior analysis.
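One possible way to persist such structured records is a simple relational table keyed by sensor and collection time; the SQLite schema below is only an illustrative sketch, since the patent does not prescribe any particular storage format.

```python
import sqlite3

conn = sqlite3.connect("depth_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS depth_frames (
        sensor_id     TEXT NOT NULL,     -- which sensor produced the record
        collected_at  REAL NOT NULL,     -- collection time (seconds since epoch)
        data_size     INTEGER NOT NULL,  -- size of the stored depth payload in bytes
        depth_blob    BLOB NOT NULL,     -- preprocessed depth frame
        pointcloud    BLOB               -- optional registered point cloud
    )
""")
conn.commit()
```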
S3, analyze the behavior pattern of the target, determine whether the target's behavior is abnormal, and raise an alarm if it is. Specifically, the depth data is compared against an established background model library to derive the behavior pattern information of the target, thereby determining whether the target is in a sitting, standing, or lying posture.
Analyzing the behavior pattern of the target specifically includes the following steps:
S31, collect and label the behavior patterns of the target. Deep-learning detection and analysis is performed on the current depth data, and the acquired structured data is labeled manually according to the target task; the label category is sitting, standing, or lying, and the label information is the boundary of the target in the depth image.
S32, train a classifier for target position and category detection with a deep neural network. The current depth data is preprocessed, the trained target detection model is applied, and a forward pass produces the target detection result for the current frame, i.e., whether the target is in a sitting, standing, or lying posture; the coordinate position of the target is obtained by combining the point cloud spatial information of the corresponding target; finally, the spatial position information and behavior pattern category of the target are obtained.
The deep neural network can adopt the SSD architecture (see Liu W, Anguelov D, Erhan D, et al., SSD: Single Shot MultiBox Detector), with the network parameters trained continuously on the gap between the network output and the labeled ground truth. The purpose of the present invention is to detect the position and category information of a person; the SSD architecture combines convolution operations and regression to obtain the position and category of the target. Convolution operations extract the feature information of the target, regression yields the position and category of the target, and multi-scale feature information is then combined to locate the target precisely.
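A minimal sketch of how the forward pass and the registered point cloud might be combined. It assumes a hypothetical `detector` callable wrapping a trained SSD-style model that returns boxes, class indices, and scores for a preprocessed depth frame, a point cloud aligned pixel-for-pixel with that frame, and an illustrative 0.5 confidence threshold; none of these interface details come from the patent itself.

```python
import numpy as np

POSTURES = ("sitting", "standing", "lying")   # label set used in the annotation step

def detect_pose_and_position(depth_frame, point_cloud, detector):
    """Run a trained SSD-style detector on one normalized depth frame and attach
    a 3-D position taken from the registered point cloud.

    detector(depth_frame) is assumed to return (boxes, labels, scores), where each
    box is (x1, y1, x2, y2) in pixel coordinates, and point_cloud is an H x W x 3
    array of world coordinates aligned with the depth image."""
    boxes, labels, scores = detector(depth_frame)
    results = []
    for (x1, y1, x2, y2), label, score in zip(boxes, labels, scores):
        if score < 0.5:                        # illustrative confidence threshold
            continue
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        position = point_cloud[cy, cx]         # world (x, y, z) at the box centre
        results.append({"posture": POSTURES[label],
                        "position": position,
                        "score": float(score)})
    return results
```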
S33, use the obtained behavior pattern classifier to predict the behavior category of the target in the depth data, judge abnormal behavior from the detected category result, and raise an alarm when abnormal behavior is detected. Once the spatial coordinate position and posture category of the target are obtained, the position and posture of personnel in the detection area can be determined, and personnel behavior can then be judged from the time-sequential spatial position and posture information. Finally, the behavior is checked, and when non-compliant or prohibited behavior occurs, the alarm mechanism is triggered in real time. For example, in a monitored room, if a person attempts hanging, the background detection software detects that the person remains at an abnormal height at a certain position for a sustained period and triggers an attempted-hanging alarm in real time; if a person in a monitored room does not rest within the prescribed time, the background detection software detects that the person has not lain down to rest at the prescribed time and triggers a non-compliant-behavior alarm in real time.
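The two alarm rules mentioned above can be expressed as simple checks over the time-ordered detection results. The thresholds below (head height, minimum duration, rest window) are illustrative assumptions only, not values specified in the patent.

```python
from datetime import time

def check_height_anomaly(track, height_limit_m=1.9, min_duration_s=20.0):
    """Alarm if a person's detected height stays above a limit for a sustained
    period, the cue used for the attempted-hanging rule described above.
    track: list of (timestamp_s, z_height_m) samples ordered by time."""
    start = None
    for ts, z in track:
        if z > height_limit_m:
            start = ts if start is None else start
            if ts - start >= min_duration_s:
                return True
        else:
            start = None
    return False

def check_rest_violation(samples, rest_start=time(22, 0), rest_end=time(6, 0)):
    """Alarm if no 'lying' posture is ever observed during the prescribed rest window.
    samples: list of (datetime, posture) pairs for one person."""
    def in_rest_window(dt):
        t = dt.time()
        return t >= rest_start or t < rest_end      # window spans midnight
    rest_samples = [p for dt, p in samples if in_rest_window(dt)]
    return bool(rest_samples) and "lying" not in rest_samples
```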
S4, analyze the target's behavior.
The detection results of multiple distance sensors are fused, and the target's living habits are analyzed from long-term behavior data. After the target's spatial position and posture category are obtained, the state of the target person can be monitored in real time, and the target's state information over a long period can also be accumulated. Analysis of the target's long-term state information includes:
1) Motion trajectory analysis: from the position and posture of the target person, obtain the target person's motion and action trajectory.
2) Rest habit analysis: from the position and posture of the target person, obtain information such as the start and end of sleep, sleep duration, and sleep quality, and thereby derive the target person's rest habits (a sketch of this estimate follows this list).
3) Communication habit analysis: from the position and posture of the target person, obtain information such as the frequency and duration of interaction with other people and the time spent alone, and thereby derive the target person's communication habits, which can support psychological analysis and therapy of the target person.
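A minimal sketch of the rest-habit analysis in item 2): given a day's time-ordered (timestamp, posture) samples for one person, estimate sleep onset, wake time, and total sleep duration from runs of the "lying" posture. The 30-minute minimum run length used to count a run as sleep is an illustrative assumption.

```python
def estimate_sleep(samples, min_run_s=1800):
    """Estimate sleep episodes from time-ordered (timestamp_s, posture) samples.
    A contiguous run of 'lying' samples lasting at least min_run_s seconds is
    counted as one sleep episode. Returns (first_onset, last_wake, total_seconds)."""
    episodes, run_start, prev_ts = [], None, None
    for ts, posture in samples:
        if posture == "lying":
            run_start = ts if run_start is None else run_start
            prev_ts = ts
        else:
            if run_start is not None and prev_ts - run_start >= min_run_s:
                episodes.append((run_start, prev_ts))
            run_start, prev_ts = None, None
    if run_start is not None and prev_ts - run_start >= min_run_s:
        episodes.append((run_start, prev_ts))
    if not episodes:
        return None
    total = sum(end - start for start, end in episodes)
    return episodes[0][0], episodes[-1][1], total

# Usage: samples could come from one night of fused detection results.
night = [(0, "standing"), (600, "lying"), (30000, "lying"), (30600, "sitting")]
print(estimate_sleep(night))   # -> (600, 30000, 29400)
```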
The present invention can intelligently recognize the posture of indoor personnel, and indoor monitoring can be applied in the following scenarios: 1. Indoor monitoring and early warning in schools, kindergartens, judicial institutions, and government agencies: real-time detection and early warning of crowd gathering, abnormal crowd behavior, abnormal individual behavior, and so on. 2. Early warning of mass incidents and abnormal events in traffic hubs and public places: real-time awareness of passenger distribution, regional density, and movement speed, planning of scientific routes for emergency crowd guidance, and boundary-crossing, illegal-intrusion, and gathering alarms linked with the broadcasting system.
The above are only preferred embodiments of the present invention. It should be noted that ordinary changes and substitutions made by those skilled in the art within the scope of the technical solution of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A method for analyzing personnel behavior patterns based on depth data, characterized in that it comprises:
    S1, deploying multiple distance sensors to collect depth data in a detection area;
    S2, processing the collected depth data to form structured data;
    S3, analyzing the behavior pattern of a target and determining abnormal behavior of the target, and raising an alarm if the behavior is abnormal.
  2. The method according to claim 1, characterized in that step S1 further comprises:
    1) before data collection starts, time-synchronizing each distance sensor;
    2) configuring each distance sensor and calculating the relative coordinates of each sensor.
  3. The method according to claim 1, characterized in that step S2 further comprises:
    S21, preprocessing the depth data, including data denoising and data normalization;
    S22, for the preprocessed depth data, building a depth data database as needed according to the collection time, data size, depth data, and point cloud data information.
  4. The method according to claim 1, characterized in that step S3 further comprises:
    S31, collecting and labeling the behavior patterns of the target: performing deep-learning detection and analysis on the current depth data, and manually labeling the acquired structured data according to the target task;
    S32, training a classifier for target position and category detection with a deep neural network;
    S33, using the obtained behavior pattern classifier to predict the behavior category of the target in the depth data, judging abnormal behavior from the detected category result, and raising an alarm when abnormal behavior is detected.
  5. The method according to claim 4, characterized in that step S32 further comprises:
    1) preprocessing the current depth data, applying the trained target detection model, obtaining the target detection result of the current frame through a forward pass, and detecting whether the target is in a sitting, standing, or lying posture;
    2) obtaining the coordinate position of the target by combining the point cloud spatial information of the corresponding target;
    3) obtaining the spatial position information and behavior pattern category of the target.
  6. The method according to claim 5, characterized in that step S33 further comprises:
    using an SSD network structure for training, and continuously training the network parameters on the gap between the network output and the labeled ground truth.
  7. The method according to claim 4, characterized in that step S33 further comprises:
    after the spatial coordinate position and posture category of the target are obtained, determining the position and posture of personnel in the detection area, and judging personnel behavior from the time-sequential spatial position and posture information of the personnel.
  8. A system for analyzing personnel behavior patterns based on depth data, characterized in that it comprises a memory storing a computer program and a processor capable of executing the computer program, the computer program, when executed, implementing the following steps:
    S1, deploying multiple distance sensors to collect depth data in a detection area;
    S2, processing the collected depth data to form structured data;
    S3, analyzing the behavior pattern of a target and determining abnormal behavior of the target, and raising an alarm if the behavior is abnormal.
  9. The system according to claim 8, characterized in that:
    step S1 further comprises: 1) before data collection starts, time-synchronizing each distance sensor; 2) configuring each distance sensor and calculating the relative coordinates of each sensor;
    step S2 further comprises: S21, preprocessing the depth data, including data denoising and data normalization; S22, for the preprocessed depth data, building a depth data database as needed according to the collection time, data size, depth data, and point cloud data information;
    step S3 further comprises: S31, collecting and labeling the behavior patterns of the target: performing deep-learning detection and analysis on the current depth data, and manually labeling the acquired structured data according to the target task; S32, training a classifier for target position and category detection with a deep neural network; S33, using the obtained behavior pattern classifier to predict the behavior category of the target in the depth data, judging abnormal behavior from the detected category result, and raising an alarm when abnormal behavior is detected.
  10. The system according to claim 9, characterized in that:
    step S32 further comprises: 1) preprocessing the current depth data, applying the trained target detection model, obtaining the target detection result of the current frame through a forward pass, and detecting whether the target is in a sitting, standing, or lying posture; 2) obtaining the coordinate position of the target by combining the point cloud spatial information of the corresponding target; 3) obtaining the spatial position information and behavior pattern category of the target;
    step S33 further comprises: using an SSD network structure for training, and continuously training the network parameters on the gap between the network output and the labeled ground truth.
PCT/CN2020/113952 2019-12-09 2020-09-08 Method and system for analyzing behavior pattern of person on the basis of depth data WO2021114766A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911249544.0A CN111723633B (en) 2019-12-09 2019-12-09 Personnel behavior pattern analysis method and system based on depth data
CN201911249544.0 2019-12-09

Publications (1)

Publication Number Publication Date
WO2021114766A1 true WO2021114766A1 (en) 2021-06-17

Family

ID=72563974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113952 WO2021114766A1 (en) 2019-12-09 2020-09-08 Method and system for analyzing behavior pattern of person on the basis of depth data

Country Status (2)

Country Link
CN (1) CN111723633B (en)
WO (1) WO2021114766A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688740A (en) * 2021-08-26 2021-11-23 燕山大学 Indoor posture detection method based on multi-sensor fusion vision
CN114898257A (en) * 2022-04-28 2022-08-12 西安电子科技大学 Airport security check behavior pattern analysis and abnormal state detection method and system
CN115883779A (en) * 2022-10-13 2023-03-31 湖北公众信息产业有限责任公司 Smart park information safety management system based on big data
CN116580424A (en) * 2023-05-05 2023-08-11 深圳市领天智杰科技有限公司 Myopia prevention method and device based on neural network
CN117478843A (en) * 2023-11-15 2024-01-30 廊坊博联科技发展有限公司 Intelligent patrol control system and method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436426A (en) * 2021-06-22 2021-09-24 北京时空数智科技有限公司 Personnel behavior warning system based on video AI analysis
CN113433598A (en) * 2021-08-26 2021-09-24 泰豪信息技术有限公司 Self-constriction behavior monitoring method, device and system applied to prison
CN114418903A (en) * 2022-01-21 2022-04-29 支付宝(杭州)信息技术有限公司 Man-machine interaction method and man-machine interaction device based on privacy protection
CN116665402A (en) * 2023-04-27 2023-08-29 无锡稚慧启蒙教育科技有限公司 Anti-lost system and method based on Internet of things

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963653A (en) * 1997-06-19 1999-10-05 Raytheon Company Hierarchical information fusion object recognition system and method
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN109003303A (en) * 2018-06-15 2018-12-14 四川长虹电器股份有限公司 Apparatus control method and device based on voice and space object identification and positioning
US20190095716A1 (en) * 2017-09-26 2019-03-28 Ambient AI, Inc Systems and methods for intelligent and interpretive analysis of video image data using machine learning
CN110516720A (en) * 2019-08-13 2019-11-29 北京三快在线科技有限公司 Safety monitoring equipment and method for safety monitoring

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850846B (en) * 2015-06-02 2018-08-24 深圳大学 A kind of Human bodys' response method and identifying system based on deep neural network
CN106447698B (en) * 2016-09-28 2019-07-02 深圳市鸿逸达科技有限公司 A kind of more pedestrian tracting methods and system based on range sensor
US10977818B2 (en) * 2017-05-19 2021-04-13 Manor Financial, Inc. Machine learning based model localization system
CN107423679A (en) * 2017-05-31 2017-12-01 深圳市鸿逸达科技有限公司 A kind of pedestrian is intended to detection method and system
CN109697830B (en) * 2018-12-21 2020-10-20 山东大学 Method for detecting abnormal behaviors of people based on target distribution rule

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963653A (en) * 1997-06-19 1999-10-05 Raytheon Company Hierarchical information fusion object recognition system and method
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
US20190095716A1 (en) * 2017-09-26 2019-03-28 Ambient AI, Inc Systems and methods for intelligent and interpretive analysis of video image data using machine learning
CN109003303A (en) * 2018-06-15 2018-12-14 四川长虹电器股份有限公司 Apparatus control method and device based on voice and space object identification and positioning
CN110516720A (en) * 2019-08-13 2019-11-29 北京三快在线科技有限公司 Safety monitoring equipment and method for safety monitoring

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688740A (en) * 2021-08-26 2021-11-23 燕山大学 Indoor posture detection method based on multi-sensor fusion vision
CN113688740B (en) * 2021-08-26 2024-02-27 燕山大学 Indoor gesture detection method based on multi-sensor fusion vision
CN114898257A (en) * 2022-04-28 2022-08-12 西安电子科技大学 Airport security check behavior pattern analysis and abnormal state detection method and system
CN115883779A (en) * 2022-10-13 2023-03-31 湖北公众信息产业有限责任公司 Smart park information safety management system based on big data
CN116580424A (en) * 2023-05-05 2023-08-11 深圳市领天智杰科技有限公司 Myopia prevention method and device based on neural network
CN117478843A (en) * 2023-11-15 2024-01-30 廊坊博联科技发展有限公司 Intelligent patrol control system and method

Also Published As

Publication number Publication date
CN111723633A (en) 2020-09-29
CN111723633B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
WO2021114766A1 (en) Method and system for analyzing behavior pattern of person on the basis of depth data
CN110044486B (en) Method, device and equipment for avoiding repeated alarm of human body inspection and quarantine system
CN109684916B (en) Method, system, equipment and storage medium for detecting data abnormity based on path track
US9408561B2 (en) Activity analysis, fall detection and risk assessment systems and methods
CN105574501B (en) A kind of stream of people's video detecting analysis system
EP3745713A1 (en) Cloud server-based rodent outbreak smart monitoring system and method
US9208678B2 (en) Predicting adverse behaviors of others within an environment based on a 3D captured image stream
CN106128022B (en) A kind of wisdom gold eyeball identification violent action alarm method
CN110210461B (en) Multi-view collaborative abnormal behavior detection method based on camera grid
CN111653368A (en) Artificial intelligence epidemic situation big data prevention and control early warning system
CN104821056A (en) Intelligent guarding method based on radar and video integration
CN113269142A (en) Method for identifying sleeping behaviors of person on duty in field of inspection
CN106341661A (en) Patrol robot
CN108802758A (en) A kind of Intelligent security monitoring device, method and system based on laser radar
CN106796746A (en) Activity monitoring approach and system
AU2015203771A1 (en) A method and apparatus for surveillance
CN112235537A (en) Transformer substation field operation safety early warning method
CN112270807A (en) Old man early warning system that tumbles
CN110275042A (en) A kind of throwing object in high sky detection method based on computer vision and radio signal analysis
CN111612815A (en) Infrared thermal imaging behavior intention analysis method and system
CN106210633A (en) Line detection alarm method and device are got in a kind of wisdom gold eyeball identification
CN109831634A (en) The density information of target object determines method and device
CN110807345A (en) Building evacuation method and building evacuation system
Pramerdorfer et al. Fall detection based on depth-data in practice
CN115496640A (en) Intelligent safety system of thermal power plant

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20900192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20900192

Country of ref document: EP

Kind code of ref document: A1