WO2022022368A1 - Deep learning-based apparatus and method for monitoring behavioral norms in a prison - Google Patents


Publication number
WO2022022368A1
Authority
WO
WIPO (PCT)
Prior art keywords: detection, behavior, classifier, human, behavioral
Prior art date
Application number
PCT/CN2021/107746
Other languages
English (en)
Chinese (zh)
Inventor
杨景翔
许根
黄业鹏
吕立
王菊
徐刚
肖江剑
Original Assignee
宁波环视信息科技有限公司
中国科学院宁波材料技术与工程研究所
Priority date
Filing date
Publication date
Priority claimed from CN202010736024.9A (external priority; published as CN114092846A)
Application filed by 宁波环视信息科技有限公司 and 中国科学院宁波材料技术与工程研究所
Publication of WO2022022368A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition

Definitions

  • the present application relates to the field of machine learning research, and in particular, to a deep learning-based device and method for detecting behavioral norms in prisons.
  • the behavior analysis method based on video-stream feature points and single-frame image features has achieved remarkable results in the traditional single-view or single-person setting, but it performs poorly in areas with relatively heavy pedestrian traffic, such as streets, airports, and stations, or in scenes with human occlusion.
  • the present application proposes a deep learning-based detection device and method for behavioral norms in prisons and similar institutions, which adopts a deep learning network to analyze human behavior and improves the robustness of the classification model; a deep learning network is especially suited to training and learning on big data, where its advantages can be brought into full play.
  • the embodiment of the present application provides a deep learning-based detection device for behavioral norms in prisons, in which the behavior detection algorithms are designed entirely in accordance with the code of conduct for detainees.
  • the detection process includes setting a trigger time period and a detection area for each standardized behavior detection; behavior detection is triggered only within the set time period and the set detection area, and the corresponding recognition algorithm is not run in other time periods or other areas, which reduces the execution complexity of the system and improves its stability.
  • the detection time period and detection area are completely user-defined and set in accordance with the standard code of conduct, which can well meet the needs of code of conduct detection.
  • this application proposes a deep learning-based behavioral code detection device for prisons, including: a head count detection module and a behavioral code detection module; wherein:
  • the head count detection module is used for non-sensing roll call and/or crowd density identification; the head count detection module includes a target detection and segmentation process;
  • the behavior norm detection module is used for real-time calculation and discrimination of personnel behavior; the behavior norm detection module includes a training process of obtaining a classifier by using a training sample set, and a recognition process of using the classifier to identify test samples.
  • an embodiment of the present application also proposes a deep learning-based method for detecting behavioral norms in prisons, characterized in that the method includes the following steps:
  • Head count detection used for non-sensing roll call and/or crowd density identification; the head count detection includes a target detection and segmentation process;
  • Behavioral norm detection is used for real-time calculation and discrimination of personnel behavior; the behavioral norm detection includes a training process of obtaining a classifier by using a training sample set, and a recognition process of using the classifier to identify test samples.
  • the advantages of this application are: global high-level features are obtained with the CNN method; the STN feature enhancement gives good robustness on real-life video; SPPE is then used to obtain human posture information; SDTN maps the pose back to the human body detection frame, optimizing the network; PP-NMS solves the problem of redundant detection; and the corresponding classifier is trained on the pose-estimation results.
  • the features obtained from the global features are more comprehensive, making the behavior description more complete and more applicable.
  • FIG. 1 is a schematic flowchart of the target detection and segmentation process of the head count detection module of the present application;
  • FIG. 2 is a schematic flowchart of the training process of the code of conduct detection module of the present application;
  • FIG. 3 is a schematic flowchart of the discrimination process of the code of conduct detection module of the present application;
  • FIG. 4 is a simplified flowchart of the extraction and modeling of the underlying features;
  • Figure 5 is a process flow diagram of a general CNN.
  • the deep learning-based apparatus for detecting behavioral norms in prisons uses the CNN method to extract global features from the underlying features, instead of the key points obtained by traditional methods; the embodiments of the present application provide a deep learning-based monitoring device for behavioral norms in prisons and similar institutions that uses the STN method to enhance the obtained global features rather than modeling them directly;
  • the deep learning-based prison behavior norm detection device uses the SDTN method to remap the obtained pose features, further improving the accuracy of the detection frame.
  • a deconvolution layer is used to perform the key point regression operation, which can effectively improve the accuracy of multi-person key point detection.
  • the deep learning-based prison code of conduct detection device provided by the embodiment of the present application also takes into account the connectivity of multiple key points, and establishes a directed field connecting the key points.
  • the connected keypoint pairs are explicitly matched according to the connectivity of the human keypoints and the human body structure.
  • An embodiment of the present application provides a deep learning-based behavioral code detection device for prisons, including: a head count detection module and a behavioral code detection module; wherein:
  • the head count detection module is used for non-sensing roll call and/or crowd density identification; the head count detection module includes a target detection and segmentation process;
  • the behavior norm detection module is used for real-time calculation and discrimination of personnel behavior; the behavior norm detection module includes a training process of obtaining a classifier by using a training sample set, and a recognition process of using the classifier to identify test samples.
  • the target detection and segmentation process of the head count detection module includes the following steps:
  • step S1) use a labeling tool to label the heads in each image, generate a JSON file for each picture, and extract the feature information of the image labels through a convolutional neural network;
  • step S2) use the feature information obtained in step S1) to extract ROIs, i.e. regions of interest, with the region proposal network, then use region-of-interest pooling to resize these ROIs to a fixed size;
  • step S3) perform bounding-box regression and classification prediction on the ROIs obtained in step S2) through the fully connected layer, sampling at different points of the feature map with bilinear interpolation;
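The fixed-size ROI pooling and bilinear sampling of steps S2) and S3) can be sketched as follows. This is a minimal NumPy illustration of RoI pooling with bilinear interpolation; the toy feature map, the 2×2 output grid, and the bin-center sampling scheme are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feature map `feat` (H x W) at fractional coordinates (y, x)."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) +
            feat[y1, x0] * dy * (1 - dx) +
            feat[y0, x1] * (1 - dy) * dx +
            feat[y1, x1] * dy * dx)

def roi_align(feat, roi, out_size=2):
    """Pool an ROI (y1, x1, y2, x2) into a fixed out_size x out_size grid."""
    y1, x1, y2, x2 = roi
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # sample at the centre of each output bin
            y = y1 + (i + 0.5) * (y2 - y1) / out_size
            x = x1 + (j + 0.5) * (x2 - x1) / out_size
            out[i, j] = bilinear_sample(feat, y, x)
    return out

feat = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 feature map
pooled = roi_align(feat, (1.0, 1.0, 5.0, 5.0))
print(pooled.shape)  # every ROI comes out at the same fixed size
```

Whatever the size of the input ROI, the output grid has the same fixed shape, which is what lets ROIs of different scales feed a fully connected layer.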
  • the head count detection module specifically includes:
  • the target detection unit is used for non-sensing real-time detection and statistics of detainees;
  • the density analysis unit is used for real-time, accurate density detection and abnormality alarms in dormitories and exercise yards;
  • the target detection unit operates as follows: first, collect videos of five groups of people, with heads visible, in different environments according to the specification requirements, of which four groups of videos are used as training data sets and one group as the validation data set; the video frame images of the four training groups are processed according to steps S1) to S5) to obtain a human head detection model; finally, this human head detection model is applied to the video frame images of the remaining group for the final real-time personnel detection and statistics;
  • the training process of the behavior specification detection module includes the following steps:
  • step S6) input the target detection frame obtained in step S5) into the STN, i.e. the spatial transformer network, and carry out a reinforcement operation to extract a high-quality single-person area from the inaccurate candidate frame;
  • step S7) apply SPPE, i.e. the single-person pose estimator, to the single-person area frame strengthened in step S6) to estimate that person's posture skeleton;
  • step S8) remap the single-person posture obtained in step S7) to the image coordinate system through SDTN, i.e. the spatial de-transformer network, so as to obtain a more accurate human body detection frame, and perform the human posture estimation operation again; then PP-NMS, i.e. parametric pose non-maximum suppression, solves the problem of redundant detection and yields the human skeleton information under this behavior;
  • step S9) for the multi-scale key points obtained in step S8), perform the key point regression operation through the deconvolution layer, which is equivalent to an up-sampling process and can improve the accuracy of the target key points; taking the connectivity of multiple key points into account, establish a directed field connecting the key points, match the connected key point pairs according to the connectivity and structure of the human body parts to reduce misconnections, and obtain the final human skeleton information;
  • step S10) perform feature extraction on the final human skeleton information obtained in step S9), and input it into the classifier for training as a training sample of this type of behavior;
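Step S10) turns the final skeleton into a feature vector and trains a classifier on it. The patent does not specify the feature normalization or the classifier type, so the following sketch uses an assumed translation- and scale-invariant normalization and a toy nearest-centroid classifier as stand-ins:

```python
import numpy as np

def skeleton_features(keypoints):
    """Normalize an (N, 2) list of skeleton keypoints into a
    translation- and scale-invariant feature vector (assumed scheme)."""
    pts = np.asarray(keypoints, dtype=float)
    pts -= pts.mean(axis=0)                 # remove translation
    scale = np.linalg.norm(pts) or 1.0
    return (pts / scale).ravel()            # remove scale, flatten

class NearestCentroidClassifier:
    """Toy stand-in for the behavior classifier trained in step S10)."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.labels_}
        return self

    def predict(self, x):
        return min(self.labels_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# hypothetical 3-point "skeletons" for two behavior classes
standing = [skeleton_features([(0, 0), (0, 1), (0, 2)]),
            skeleton_features([(1, 0), (1, 1), (1, 2)])]
lying    = [skeleton_features([(0, 0), (1, 0), (2, 0)]),
            skeleton_features([(0, 1), (1, 1), (2, 1)])]
clf = NearestCentroidClassifier().fit(standing + lying,
                                      ["standing"] * 2 + ["lying"] * 2)
print(clf.predict(skeleton_features([(5, 0), (5, 1), (5, 2)])))
```

Because the features are normalized, the same posture is classified identically wherever it appears in the frame, which is the property the skeleton-based training relies on.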
  • the identification process of the behavior specification detection module includes the following steps:
  • step S14) input the human skeleton feature information obtained in step S13) into the classifier for identification to obtain the video behavior category.
  • the identification process includes setting the detection trigger time period and detection area of each standard behavior detection and using the classifier for identification; the detection time and detection area are set manually, strictly following the code of conduct for the detainees in the detention center.
  • within the detection trigger time period, the corresponding behavior recognition operation is carried out in the set detection area, and an alarm message is issued when a violation is identified; outside the detection trigger time period, the corresponding behavior recognition operation is not performed;
  • the detection time period and detection area are completely user-defined and set in accordance with the standard code of conduct, which can well meet the needs of code of conduct detection.
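The trigger logic above — run a given recognizer only inside its user-configured time window and detection area — can be sketched as follows. The rule names, times, and the rectangle representation of detection areas are illustrative assumptions:

```python
from datetime import time

# hypothetical user-defined rules: behavior -> (start, end, detection area)
RULES = {
    "washing_order": (time(6, 30), time(7, 0), (0, 0, 100, 100)),
    "sleeping_order": (time(21, 0), time(23, 59), (100, 0, 300, 200)),
}

def in_area(point, area):
    """Is (x, y) inside the axis-aligned rectangle (x1, y1, x2, y2)?"""
    x, y = point
    x1, y1, x2, y2 = area
    return x1 <= x <= x2 and y1 <= y <= y2

def should_run(behavior, now, detection_point):
    """Trigger a behavior recognizer only inside its configured
    time period and detection area; otherwise skip it entirely."""
    start, end, area = RULES[behavior]
    return start <= now <= end and in_area(detection_point, area)

print(should_run("washing_order", time(6, 45), (50, 50)))  # inside window and area
print(should_run("washing_order", time(8, 0), (50, 50)))   # outside the time window
```

Skipping recognizers outside their configured windows is what reduces the system's execution complexity, since only the relevant algorithms run at any moment.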
  • the PP-NMS operation specifically includes: selecting the pose with the maximum confidence as a reference and eliminating the area frames close to the reference according to the elimination standard, repeating this process until all redundant recognition frames are eliminated and each recognition frame appears uniquely;
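The elimination loop just described can be sketched as a greedy NMS over poses. The pose-distance function below (mean keypoint distance) and the threshold are simple stand-ins for the parametric similarity used by the actual PP-NMS:

```python
import numpy as np

def pose_distance(p, q):
    """Mean Euclidean distance between matching keypoints of two poses."""
    return float(np.mean(np.linalg.norm(np.asarray(p) - np.asarray(q), axis=1)))

def pp_nms(poses, scores, threshold=1.0):
    """Greedy pose NMS: repeatedly take the highest-confidence pose as the
    reference and eliminate poses too close to it, until each remaining
    recognition frame appears uniquely. Returns kept indices."""
    order = sorted(range(len(poses)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(pose_distance(poses[i], poses[j]) > threshold for j in kept):
            kept.append(i)
    return kept

poses = [[(0, 0), (0, 1)],        # person A
         [(0.1, 0), (0.1, 1)],    # near-duplicate detection of A
         [(5, 0), (5, 1)]]        # person B
scores = [0.9, 0.8, 0.7]
print(pp_nms(poses, scores))  # the near-duplicate of A is suppressed
```

After suppression each person contributes exactly one pose, which is the precondition for the per-person skeleton features used downstream.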
  • obtaining the human skeleton information in step S8) further includes: using an enhanced data set, learning the description information of different postures in the output results to imitate the formation process of the human body area frame, thereby generating a larger training set.
  • Yet another embodiment of the present application provides a deep learning-based method for detecting behavioral norms in prisons, the method comprising the following steps:
  • Head count detection used for non-sensing roll call and/or crowd density identification; the head count detection includes a target detection and segmentation process;
  • Behavior norm detection is used for real-time calculation and discrimination of personnel behavior; the behavior norm detection includes a training process of obtaining a classifier by using a training sample set, and a recognition process of using the classifier to identify test samples.
  • target detection and segmentation process specifically includes the following steps:
  • step S1) use a labeling tool to label the heads in each image, generate a JSON file for each picture, and extract the feature information of the image labels through a convolutional neural network;
  • step S2) use the feature information obtained in step S1) to extract ROIs, i.e. regions of interest, with the region proposal network, then use region-of-interest pooling to resize these ROIs to a fixed size;
  • step S3) perform bounding-box regression and classification prediction on the ROIs obtained in step S2) through the fully connected layer, sampling at different points of the feature map with bilinear interpolation;
  • the head count detection specifically includes the following steps:
  • target detection, which is used for non-sensing real-time detection and statistics of detainees;
  • density analysis, which is used for real-time, accurate density detection and abnormality alarms in dormitories and exercise yards;
  • the target detection operates as follows: first, collect videos of five groups of people, with heads visible, in different environments according to the specification requirements, of which four groups of videos are used as training data sets and one group as the validation data set; the video frame images of the four training groups are processed according to steps S1) to S5) to obtain a human head detection model; finally, this human head detection model is applied to the video frame images of the remaining group for the final real-time personnel detection and statistics.
  • the deep learning-based method for detecting behavioral norms in prisons of the present application includes a head count detection module and a behavioral norms detection module.
  • the head count detection module is used for the non-sensing roll call of detainees and the identification of crowd density in the prison; the behavioral code detection module performs real-time calculation and judgment on the washing order, housekeeping norms, dining and sleeping order, wake-up order, television education order, safety rotation norms, conduct assessment norms, three-positioning supervision norms, and the norm of holding the head when leaving the prison.
  • the head count detection module specifically includes: a target detection unit, which is used for non-sensing real-time detection and statistics of detainees;
  • a density analysis unit, which is used for real-time, accurate density detection and abnormality alarms in prisons and exercise yards.
  • the behavior specification detection module specifically includes:
  • the washing order comparison unit is used to set the toilet and the waiting area in the prison, and calculates in real time whether there are only two people at the toilet and whether the other people are waiting in the specified area.
  • the housekeeping standard unit is used to set the bed area and the waiting area against the wall in the dormitory, and calculates in real time whether only the designated four people are tidying the beds and cleaning the house, and whether the other personnel are waiting in the area against the wall.
  • the meal order comparison unit is used during meal time in the dormitory, calculating in real time whether there are abnormal persons who are not seated and eating.
  • the sleeping order comparison unit is used during rest time in the dormitory, calculating in real time whether anyone is sleeping with the head covered or getting up in violation of the rules.
  • the wake-up order specification unit is used at the deadline for getting up in the dormitory, calculating in real time whether anyone is still in bed.
  • the TV education order comparison unit is used during TV education time in the prison, calculating in real time whether there are abnormal persons not sitting and watching the TV education; if too many people are walking around, an alarm is issued.
  • the safety rotation specification unit is used to set the safety rotation area in the prison, calculating in real time whether two people are present in the safety rotation area; staying in the same position for a long time is judged a violation.
  • the conduct norm assessment unit is used during the drill time of the prison house, and the uniformity of the queue is calculated and scored in real time.
  • the three-positioning supervision unit is used to calculate and judge in real time whether the personnel perform the "three-positioning" operation in accordance with the regulations when a fight occurs in the prison.
  • the unit for the norm of holding the head when leaving the prison is used to set the cordon area in the prison, calculating in real time whether a person leaving the prison holds their head with both hands in the cordon area as required.
  • the head count detection module includes a target detection and segmentation process;
  • the behavior specification detection module includes a training process of obtaining a classifier by using a training sample set and a recognition process of using the classifier to identify test samples.
  • the corresponding behavior detection algorithm is designed in full accordance with the code of conduct for detainees in the detention center.
  • the identification process includes setting the detection trigger time period and detection area of each standard behavior detection and using the classifier for identification; the detection time and detection area are set manually, strictly in accordance with the code of conduct for the detainees in the detention center.
  • the corresponding behavior identification operation is performed in the set detection area, and an alarm message is issued when a violation is identified.
  • the detection time period and detection area are completely user-defined and set in accordance with the standard code of conduct, which can well meet the needs of code of conduct detection.
  • behavior detection is triggered only within the set time period and the set detection area, and the corresponding recognition algorithm is not run in other time periods or other areas, which reduces the execution complexity of the system and improves its stability.
  • the target detection and segmentation process of the head count detection module is shown in FIG. 1 and includes the following steps:
  • RPN: Region Proposal Network
  • step S3) perform bounding-box regression and classification prediction on the ROIs obtained in step S2) through a fully connected layer, sampling at different points of the feature map with bilinear interpolation.
  • the data set covers four different environments; 10 people are divided into five groups, and each group repeats the specified actions three times. Four of these groups are used as the training data set, and the remaining group as the test data set.
  • to complete target detection, first collect five groups of videos in which heads are visible in different environments according to the specification requirements; four groups of videos are used as training data sets, and one group as the validation data set. The four groups of video frame images are processed according to the above-mentioned steps S1) to S5) to obtain the human head detection model; this model is then applied to the remaining group of video frame images for the final real-time personnel detection and statistics. To complete the density detection, a final density calculation step is required.
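The five-group split used above — four groups for training, the held-out group for validation — amounts to a leave-one-group-out split, sketched here with hypothetical group and frame labels:

```python
def leave_one_group_out(groups, held_out):
    """Split a dict of group -> video frames into a training set
    (four groups) and a validation set (the held-out group)."""
    train = [f for g, frames in groups.items() if g != held_out for f in frames]
    val = list(groups[held_out])
    return train, val

# five hypothetical groups of video frames, as in the detention-center dataset
groups = {f"group{i}": [f"g{i}_frame{j}" for j in range(3)] for i in range(1, 6)}
train, val = leave_one_group_out(groups, "group5")
print(len(train), len(val))  # 12 training frames, 3 validation frames
```

Holding out an entire group (rather than random frames) ensures the validation people never appear in training, giving a fairer estimate of how the head detector generalizes to unseen detainees.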
  • the training process of the behavior specification detection module is shown in Figure 2, including the following steps:
  • step S6) input the target detection frame obtained in step S5) into STN (Spatial Transformer Networks) for the reinforcement operation, and extract high-quality single-person regions from inaccurate candidate frames.
  • STN: Spatial Transformer Networks
  • step S8) remap the single-person pose obtained in step S7) to the image coordinate system through SDTN (Spatial De-Transformer Network), so as to obtain a more accurate human target detection frame, and perform the human pose estimation operation again. Then the redundant detection problem is solved by PP-NMS (Parametric Pose Non-Maximum Suppression), and the human skeleton information under this behavior is obtained.
  • SDTN: Spatial De-Transformer Network
  • step S9) for the multi-scale key points obtained in step S8), the key point regression operation is performed through the deconvolution layer, which is equivalent to an up-sampling process and can improve the accuracy of the target key points.
  • a directed field connecting the key points is established, and the connected key point pairs are clearly matched according to the connectivity and structure of human body parts to reduce misconnections and obtain the final human skeleton information.
  • step S10) perform feature extraction on the final human skeleton information obtained in step S9), and input it into the classifier as a training sample of this type of behavior for training;
  • the identification process of the behavior specification detection module is shown in FIG. 3 and includes the following steps:
  • step S14) input the human skeleton feature information obtained in step S13) into the classifier for identification to obtain the video behavior category.
  • in step S5), preferably, two convolution layers are used to extract detection results from the different feature maps.
  • in step S8), PP-NMS operates as follows:
  • the pose with the highest confidence is selected as the reference, and the area frames close to the reference are eliminated according to the elimination standard. This process is repeated until all redundant identification frames are eliminated and each identification frame is unique.
  • the human skeleton information obtained in step S8) also includes the following operations:
  • This application preferably adopts the detention center data set.
  • the data set covers four different environments; 10 people are divided into five groups, and each group repeats the specified actions three times. Four of these groups are used as the training data set, and the remaining group as the test data set.
  • Fig. 4 shows the simplified flowchart of the extraction and modeling of the underlying features.
  • the posture estimation framework adopted is RMPE (Regional Multi-Person Pose Estimation).
  • the outputs of each specific convolutional layer are convolved with two 3×3 convolution kernels respectively, and all the generated bounding boxes are collected together to obtain the filtered target detection frames through NMS; the detection frames are then input to STN and SPPE, where the human body posture is detected automatically, after which regression is performed through SDTN and PP-NMS and a directed field connecting the key points is established, reducing misconnections to obtain the final human posture skeleton features.
  • the technical solution of the present application adopts a two-layer convolution operation to extract the underlying features, and then uses a non-maximum suppression method to eliminate redundancy in the detection results.
  • the detection frame after redundancy elimination is input into the STN layer to enhance the features.
  • the function of the STN network is to make the obtained features robust to translation, rotation and scale changes.
  • the feature image output by STN is used for SPPE single-person pose estimation, and then the pose estimation result is returned to the image coordinate system through SDTN, which can extract high-quality human regions in the inaccurate region frame.
  • the problem of redundant detection is solved by PP-NMS.
  • the key point regression is carried out through the deconvolution layer, the accuracy of the key points is improved, the directed field connecting the key points is established, and the misconnection is reduced, so as to obtain the final human skeleton information.
  • CNN is an efficient recognition method developed in recent years that has attracted wide attention.
  • while studying the neurons responsible for local sensitivity and direction selection in the cat cerebral cortex, Hubel and Wiesel discovered that a unique network structure can effectively reduce the complexity of a feedback neural network, which subsequently inspired the CNN.
  • CNN has become a research hotspot in many scientific fields, especially pattern classification; because the network avoids complex image pre-processing and can take the original image directly as input, it has found wide application.
  • the basic structure of a CNN includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer, and local features are extracted; once a local feature is extracted, its positional relationship with other features is also determined. The second is the feature mapping layer: each computing layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in the plane share equal weights.
  • the feature mapping layer is used to extract the global underlying features in the video frame images, and then perform deeper processing on the underlying features.
  • the layer to be used in the technical solution of this application is the Feature Map obtained after convolution.
  • the detection result is obtained by convolving the feature map; the detection values include the class confidence and the position of the bounding box, each produced with a 3×3 convolution.
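The detection head just described — one 3×3 convolution over the feature map for class confidence and one for bounding-box position — can be sketched with a plain NumPy convolution. The kernel weights below are random placeholders; in the real network they are learned:

```python
import numpy as np

def conv3x3(feat, kernel):
    """'Same'-padded 3x3 convolution over a single-channel feature map."""
    h, w = feat.shape
    padded = np.pad(feat, 1)  # zero-pad so the output keeps the input size
    out = np.empty_like(feat)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8))          # one feature map from the backbone
conf_kernel = rng.standard_normal((3, 3))   # placeholder learned weights
loc_kernel = rng.standard_normal((3, 3))

confidence = conv3x3(feat, conf_kernel)  # per-location class confidence
location = conv3x3(feat, loc_kernel)     # per-location box position values
print(confidence.shape, location.shape)  # both keep the feature map's size
```

Each spatial location of the feature map thus yields one confidence value and one set of box values, which is why detections can be read off densely across the whole frame.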

Abstract

Disclosed are a deep learning-based apparatus and method for monitoring behavioral norms in a prison. The deep learning-based apparatus for monitoring behavioral norms in a prison comprises a head count detection module and a behavioral norm monitoring module. The head count detection module comprises a target detection and segmentation process and is used for non-sensing roll call of persons and crowd density recognition; the behavioral norm monitoring module comprises a training process of obtaining a classifier using a training sample set and a recognition process of recognizing a test sample using the classifier, and is used for real-time calculation and discrimination of personnel behavior. In this way, according to the present invention, behavioral norm recognition can be performed effectively on detainees with respect to the requirements of a prison, and abnormal behaviors are detected and corresponding alarms issued, thereby strengthening the security protection of the prison and improving the work efficiency of prison officers.
PCT/CN2021/107746 2020-07-28 2021-07-22 Deep learning-based apparatus and method for monitoring behavioral norms in a prison WO2022022368A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010736024.9 2020-07-28
CN202010736024.9A CN114092846A (zh) 2020-07-08 2020-07-28 基于深度学习的监所行为规范检测装置及方法

Publications (1)

Publication Number Publication Date
WO2022022368A1 true WO2022022368A1 (fr) 2022-02-03

Family

ID=80037108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107746 WO2022022368A1 (fr) 2020-07-28 2021-07-22 Deep learning-based apparatus and method for monitoring behavioral norms in a prison

Country Status (1)

Country Link
WO (1) WO2022022368A1 (fr)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740774A (zh) * 2022-04-07 2022-07-12 青岛沃柏斯智能实验科技有限公司 一种通风柜安全操作的行为分析控制系统
CN115205929A (zh) * 2022-06-23 2022-10-18 池州市安安新材科技有限公司 避免电火花切割机床工作台误控制的认证方法及系统
CN115273154A (zh) * 2022-09-26 2022-11-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 基于边缘重构的热红外行人检测方法、系统及存储介质
CN115294661A (zh) * 2022-10-10 2022-11-04 青岛浩海网络科技股份有限公司 一种基于深度学习的行人危险行为识别方法
CN115482491A (zh) * 2022-09-23 2022-12-16 湖南大学 一种基于transformer的桥梁缺陷识别方法与系统
CN115841651A (zh) * 2022-12-13 2023-03-24 广东筠诚建筑科技有限公司 基于计算机视觉与深度学习的施工人员智能监测系统
CN115953741A (zh) * 2023-03-14 2023-04-11 江苏实点实分网络科技有限公司 一种基于嵌入式算法的边缘计算系统及方法
CN115988181A (zh) * 2023-03-08 2023-04-18 四川三思德科技有限公司 一种基于红外图像算法的人员监控系统及方法
CN115995119A (zh) * 2023-03-23 2023-04-21 山东特联信息科技有限公司 基于物联网的气瓶充装环节违规行为识别方法及系统
CN116206265A (zh) * 2023-05-05 2023-06-02 昆明轨道交通四号线土建项目建设管理有限公司 用于轨道交通运营维护的防护报警装置及方法
CN116260990A (zh) * 2023-05-16 2023-06-13 合肥高斯智能科技有限公司 一种多路视频流的ai异步检测并实时渲染方法及系统
CN116343343A (zh) * 2023-05-31 2023-06-27 杭州电子科技大学 一种基于云边端架构的起重机吊运指挥动作智能测评方法
CN116665419A (zh) * 2023-05-09 2023-08-29 三峡高科信息技术有限责任公司 电力生产作业中基于ai分析的故障智能预警系统及方法
CN116665309A (zh) * 2023-07-26 2023-08-29 山东睿芯半导体科技有限公司 一种步姿特征识别方法、装置、芯片及终端
CN117115926A (zh) * 2023-10-25 2023-11-24 天津大树智能科技有限公司 一种基于实时图像处理的人体动作标准判定方法及装置
CN117253176A (zh) * 2023-11-15 2023-12-19 江苏海内软件科技有限公司 基于视频分析与计算机视觉的安全生产Al智能检测方法
CN117275069A (zh) * 2023-09-26 2023-12-22 华中科技大学 基于可学习向量与注意力机制的端到端头部姿态估计方法
CN117351434A (zh) * 2023-12-06 2024-01-05 山东恒迈信息科技有限公司 一种基于动作识别的工作区域人员行为规范监控分析系统
CN116631050B (zh) * 2023-04-20 2024-02-13 北京电信易通信息技术股份有限公司 一种面向智能视频会议的用户行为识别方法及系统
CN117893953A (zh) * 2024-03-15 2024-04-16 四川深蓝鸟科技有限公司 一种软式消化道内镜操作规范动作评估方法及系统

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108416254A (zh) * 2018-01-17 2018-08-17 上海鹰觉科技有限公司 Statistical system and method for pedestrian-flow behavior recognition and people counting
CN109800665A (zh) * 2018-12-28 2019-05-24 广州粤建三和软件股份有限公司 Human behavior recognition method, system and storage medium
CN109886085A (zh) * 2019-01-03 2019-06-14 四川弘和通讯有限公司 Crowd counting method based on deep-learning object detection


Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740774A (zh) * 2022-04-07 2022-07-12 青岛沃柏斯智能实验科技有限公司 Behavior analysis and control system for safe fume-hood operation
CN115205929B (zh) * 2022-06-23 2023-07-28 池州市安安新材科技有限公司 Authentication method and system for preventing erroneous control of an EDM cutting machine worktable
CN115205929A (zh) * 2022-06-23 2022-10-18 池州市安安新材科技有限公司 Authentication method and system for preventing erroneous control of an EDM cutting machine worktable
CN115482491A (zh) * 2022-09-23 2022-12-16 湖南大学 Transformer-based bridge defect identification method and system
CN115273154A (zh) * 2022-09-26 2022-11-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Thermal-infrared pedestrian detection method, system and storage medium based on edge reconstruction
CN115273154B (zh) * 2022-09-26 2023-01-17 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Thermal-infrared pedestrian detection method, system and storage medium based on edge reconstruction
CN115294661A (zh) * 2022-10-10 2022-11-04 青岛浩海网络科技股份有限公司 Deep learning-based method for recognizing dangerous pedestrian behavior
CN115841651A (zh) * 2022-12-13 2023-03-24 广东筠诚建筑科技有限公司 Intelligent construction-worker monitoring system based on computer vision and deep learning
CN115841651B (zh) * 2022-12-13 2023-08-22 广东筠诚建筑科技有限公司 Intelligent construction-worker monitoring system based on computer vision and deep learning
CN115988181A (zh) * 2023-03-08 2023-04-18 四川三思德科技有限公司 Personnel monitoring system and method based on infrared image algorithms
CN115953741A (zh) * 2023-03-14 2023-04-11 江苏实点实分网络科技有限公司 Edge computing system and method based on embedded algorithms
CN115995119B (zh) * 2023-03-23 2023-07-28 山东特联信息科技有限公司 IoT-based method and system for identifying violations during gas-cylinder filling
CN115995119A (zh) * 2023-03-23 2023-04-21 山东特联信息科技有限公司 IoT-based method and system for identifying violations during gas-cylinder filling
CN116631050B (zh) * 2023-04-20 2024-02-13 北京电信易通信息技术股份有限公司 User behavior recognition method and system for intelligent video conferencing
CN116206265A (zh) * 2023-05-05 2023-06-02 昆明轨道交通四号线土建项目建设管理有限公司 Protective alarm device and method for rail-transit operation and maintenance
CN116665419B (zh) * 2023-05-09 2024-01-16 三峡高科信息技术有限责任公司 AI-analysis-based intelligent fault early-warning system and method for electric-power production operations
CN116665419A (zh) * 2023-05-09 2023-08-29 三峡高科信息技术有限责任公司 AI-analysis-based intelligent fault early-warning system and method for electric-power production operations
CN116260990A (zh) * 2023-05-16 2023-06-13 合肥高斯智能科技有限公司 Method and system for asynchronous AI detection and real-time rendering of multi-channel video streams
CN116343343A (zh) * 2023-05-31 2023-06-27 杭州电子科技大学 Intelligent evaluation method for crane-hoisting command actions based on a cloud-edge-terminal architecture
CN116343343B (zh) * 2023-05-31 2023-07-25 杭州电子科技大学 Intelligent evaluation method for crane-hoisting command actions based on a cloud-edge-terminal architecture
CN116665309B (zh) * 2023-07-26 2023-11-14 山东睿芯半导体科技有限公司 Gait feature recognition method, apparatus, chip and terminal
CN116665309A (zh) * 2023-07-26 2023-08-29 山东睿芯半导体科技有限公司 Gait feature recognition method, apparatus, chip and terminal
CN117275069A (zh) * 2023-09-26 2023-12-22 华中科技大学 End-to-end head pose estimation method based on learnable vectors and an attention mechanism
CN117115926A (zh) * 2023-10-25 2023-11-24 天津大树智能科技有限公司 Method and device for judging human-motion conformity based on real-time image processing
CN117115926B (zh) * 2023-10-25 2024-02-06 天津大树智能科技有限公司 Method and device for judging human-motion conformity based on real-time image processing
CN117253176A (zh) * 2023-11-15 2023-12-19 江苏海内软件科技有限公司 AI-based intelligent detection method for production safety using video analysis and computer vision
CN117253176B (zh) * 2023-11-15 2024-01-26 江苏海内软件科技有限公司 AI-based intelligent detection method for production safety using video analysis and computer vision
CN117351434A (zh) * 2023-12-06 2024-01-05 山东恒迈信息科技有限公司 Action-recognition-based system for monitoring and analyzing personnel behavioral norms in work areas
CN117351434B (zh) * 2023-12-06 2024-04-26 山东恒迈信息科技有限公司 Action-recognition-based system for monitoring and analyzing personnel behavioral norms in work areas
CN117893953A (zh) * 2024-03-15 2024-04-16 四川深蓝鸟科技有限公司 Method and system for evaluating standard operating actions in flexible gastrointestinal endoscopy

Similar Documents

Publication Publication Date Title
WO2022022368A1 (fr) Deep learning-based apparatus and method for monitoring behavioral norms in a prison
Gong et al. A real-time fire detection method from video with multifeature fusion
CN109819208A (zh) Dense-crowd security monitoring and management method based on artificial-intelligence dynamic monitoring
US9001199B2 (en) System and method for human detection and counting using background modeling, HOG and Haar features
Sun et al. Articulated part-based model for joint object detection and pose estimation
CN109190479A (zh) Video-sequence facial expression recognition method based on hybrid deep learning
CN110717389B (zh) Driver fatigue detection method based on generative adversarial and long short-term memory networks
CN107330371A (zh) Method, apparatus and storage device for acquiring facial expressions of a 3D face model
CN108345894B (zh) Traffic incident detection method based on deep learning and an entropy model
CN110427834A (zh) Skeleton-data-based behavior recognition system and method
CN104504395A (zh) Method and system for person/vehicle classification based on neural networks
CN112183472A (zh) Improved-RetinaNet-based method for detecting whether test-site personnel are wearing work clothes
CN111860297A (zh) SLAM loop-closure detection method for fixed indoor spaces
Elbasi Reliable abnormal event detection from IoT surveillance systems
Zambanini et al. Detecting falls at homes using a network of low-resolution cameras
Wu et al. An eye localization, tracking and blink pattern recognition system: Algorithm and evaluation
CN114782979A (zh) 一种行人重识别模型的训练方法、装置、存储介质及终端
Hung et al. Fall detection with two cameras based on occupied area
CN112766145B (zh) Method and device for dynamic facial-expression recognition with artificial neural networks
Alsaedi et al. Design and Simulation of Smart Parking System Using Image Segmentation and CNN
Choi et al. A View-based Multiple Objects Tracking and Human Action Recognition for Interactive Virtual Environments.
CN114373205A (zh) Face detection and recognition method based on a convolutional broad network
CN114038011A (zh) Method for detecting abnormal human behavior in indoor scenes
Hao et al. Evaluation System of Foreign Language Teaching Quality Based on Spatiotemporal Feature Fusion
CN114092846A (zh) Deep learning-based apparatus and method for detecting behavioral norms in prisons

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 21848547
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 EP: PCT application non-entry in the European phase
Ref document number: 21848547
Country of ref document: EP
Kind code of ref document: A1