WO2024124970A1 - Monitoring apparatus and method for performing behavior recognition in a complex environment

Monitoring apparatus and method for performing behavior recognition in a complex environment

Info

Publication number
WO2024124970A1
WO2024124970A1 (PCT/CN2023/115727)
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
module
face recognition
information
camera
Prior art date
Application number
PCT/CN2023/115727
Other languages
English (en)
Chinese (zh)
Inventor
王冬梅
薛志峰
吴韩
梁闯
王小惠
刘崇勋
刘坚
董帅
Original Assignee
上海船舶工艺研究所(中国船舶集团有限公司第十一研究所)
Priority date
Filing date
Publication date
Application filed by 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所)
Publication of WO2024124970A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training

Definitions

  • the present invention relates to the field of monitoring and identification technology, and in particular to a monitoring device and a monitoring method for complex environment behavior identification.
  • the common monitoring device is a computer connected to a surveillance camera to identify behavior information.
  • the computer simulates the motion contour of the object and compares it with the database to achieve behavior recognition.
  • when the recognition target is clear and the recognition environment is relatively simple, such systems work, but in a complex environment the recognition rate is low and the false alarm rate is high, which cannot meet management requirements.
  • in addition, the early warning information and its display are insufficient, so a real-time warning effect cannot be achieved.
  • the present invention provides a monitoring device and a monitoring method for behavior recognition in complex environments, addressing the current lack of automatic behavior recognition in complex environments, of automatic on-site analysis and judgment of unsafe behaviors, of active voice alarms, and of intelligent patrol and early warning.
  • the present invention provides the following technical solutions:
  • a monitoring device for complex environment behavior recognition comprising a supercomputer, a server, a monitoring camera, a face recognition camera, a PC computer, a large display screen, a switch, a behavior recognition module, a face recognition module and an early warning module;
  • the monitoring camera is connected to the behavior recognition module
  • the face recognition camera is connected to the face recognition module
  • the supercomputer is connected to the behavior recognition module, the switch and the early warning module;
  • the face recognition module, the server and the PC computer are connected to the switch, and the large display screen is connected to the PC computer;
  • the behavior recognition module has built-in behavior recognition algorithms and convolutional neural networks, which are used to make behavior judgments on the video stream transmitted by the surveillance camera, analyze the behaviors in combination with the deep learning model library, and calculate the results;
  • the face recognition module is used to perform face recognition comparison based on the face photo data of the face recognition camera, verify the identity information of the person entering the workplace, and link the warning module to identify and warn strangers;
  • the early warning module is used to broadcast abnormal behaviors and send display information according to the early warning information of the behavior recognition module and the face recognition module;
  • the PC computer is used to obtain the display information of the early warning module through the switch, and to control the large display screen to present the abnormal behavior contained in the display information.
  • the behavior recognition module includes an abnormal behavior judgment module and an abnormal behavior verification module;
  • the abnormal behavior judgment module is used to judge whether there is abnormal behavior in the workplace through a smoking algorithm, a mobile phone use algorithm, a no-helmet algorithm and a no-seat-belt algorithm;
  • the abnormal behavior verification module is used to screen false alarms out of the abnormal behavior judgments output by the abnormal behavior judgment module.
  • the monitoring device further includes a storage module and a parameter configuration module
  • the storage module is used to receive information transmitted by the switch and store output results of the behavior recognition module and the face recognition module;
  • the parameter configuration module is used to adjust the parameters of the face recognition camera and the surveillance camera, the parameters including angle, focal length, confidence and sensitivity (a configuration sketch follows this section).
  • the face recognition camera and the surveillance camera both include infrared sensors.
  • the warning module includes a plurality of smart speakers and a display information generation module
  • the display information generation module is used to edit text content according to the warning information of the behavior recognition module and the face recognition module to generate broadcast information or display information for warning of abnormal behavior;
  • smart speakers are installed in multiple operation areas and are used to broadcast, via text-to-speech, the alert corresponding to the abnormal behavior detected in each operation area (a broadcast sketch follows this section).
  • the comparison result of the face recognition comparison performed by the face recognition module is displayed on the large display screen, and the display information on the large display screen includes name, gender, snapshot photo and work group; when the face recognition module cannot recognize the identity information of the workplace personnel and determines that the person is a stranger, the face recognition module links with the early warning module to identify and warn the stranger, and sends a warning sound to the stranger through the large display screen.
  • the convolutional neural network in the behavior recognition module includes an input layer, a convolution layer, a pooling layer, a fully connected layer and an output layer;
  • the training steps of the convolutional neural network include: image samples of each type of target violation behavior and of normal behavior in the complex environment are collected from video images and manually labeled; the labeled samples are input into the deep learning algorithm model for classification training on unsafe behaviors; and training is iterated until convergence, yielding an algorithm model that can identify the various types of unsafe behaviors as the convolutional neural network (a network sketch follows this section).
  • the present application also provides a monitoring method for complex environment behavior recognition, comprising the following steps:
  • the steps of recording surveillance video and simultaneously identifying unsafe behaviors include:
  • the monitoring video recording step wherein the monitoring camera transmits the monitoring video of each working area in the working field to the supercomputer for storage;
  • the face recognition database is established by storing the identity information and photos of the personnel in the workplace in the face recognition database, wherein the identity information of the personnel includes name, gender and team to which they belong.
  • the face recognition module performs face recognition comparison based on the face photo data of the face recognition camera to verify the identity information of the personnel entering the workplace until all target personnel in the monitored area have been monitored, and displays the comparison results in conjunction with the large display screen;
  • in the motion trajectory recording step, the supercomputer converts the monitoring video stream into individual frames through editing software, outlines the motion trajectory of the monitored object in the monitoring video, and establishes a behavior model that records the trajectory (a frame-and-trajectory sketch follows this section);
  • Behavior recognition step wherein the behavior recognition module recognizes unsafe behaviors of operators based on the video stream generated in real time by the monitoring camera, and when unsafe behaviors are detected, the behavior recognition module is linked with the early warning module to issue early warnings by category and region;
  • the supercomputer saves the surveillance video, audio and captured photos recorded by the surveillance camera in the form of a compressed package.
  • the step of continuously performing behavior judgment and outputting judgment results includes:
  • the face recognition camera compares the acquired facial features with the data in the face recognition database one by one to confirm the personnel information, and outputs the face image with the highest degree of conformity in the face recognition database as the comparison result for that person (a matching sketch follows this section);
  • the face recognition module compares the resulting personnel information with the registered workplace personnel, stores the captured image in the database, and determines whether to allow entry based on the compared information; if the face recognition module cannot recognize the identity information of the person, the person is determined to be a stranger and the module links with the warning module to issue a voice warning.
  • the supercomputer monitors in real time, through the video stream of the surveillance camera, whether any operator exhibits unsafe behavior;
  • when unsafe behavior is detected, the early warning module is linked to give real-time voice warnings by area and category, the picture of the unsafe behavior is displayed on the large display screen in real time, and the supercomputer switches to the surveillance camera that detected the unsafe behavior so that its picture is displayed on the large display screen.
  • the present invention is based on behavior recognition technology for the complex environment of scaffolding.
  • the behavior recognition algorithm built into the supercomputer is a composite algorithm, i.e. the algorithms for smoking, mobile phone use, not wearing a helmet and not wearing a seat belt are combined into one algorithm model, which effectively identifies abnormal behaviors.
  • the behavior recognition module is embedded with a behavior recognition algorithm and a convolutional neural network, which is used to make behavioral judgments on the video stream transmitted by the surveillance camera, and analyzes the behavior in combination with a deep learning model library, which can greatly increase the accuracy of behavior recognition.
  • the site can provide real-time voice warnings classified by region and category, and display abnormal behaviors such as pictures of unsafe behaviors on the large screen in real time, thereby improving the inspection efficiency of patrol personnel, truly achieving the first-time discovery, disposal and resolution of unsafe behaviors at the construction site, and realizing intelligent recognition of multiple unsafe behaviors in complex environments.
  • FIG. 2 is a flow chart of the identification method for deep learning algorithm model training in a complex environment according to an embodiment of the present invention;
  • FIG. 3 is a structural block diagram of a monitoring device for complex environment behavior recognition according to an embodiment of the present invention;
  • FIG. 4 is a flow chart of a monitoring method for complex environment behavior recognition according to an embodiment of the present invention;
  • FIG. 5 is a flow chart of the steps of recording surveillance video and simultaneously identifying unsafe behaviors according to an embodiment of the present invention.
  • In Example 1 of the present invention, a monitoring device and method based on deep-learning behavior recognition technology are provided, namely a monitoring device and a monitoring method for complex environment behavior recognition.
  • a deep learning algorithm model is constructed, and the algorithm and computing resources are reasonably scheduled to realize real-time collection and intelligent analysis of unstructured data such as videos and pictures, with real-time warnings linked to voice broadcast and the large display screen.
  • the video monitoring of the operation site in the complex scaffolding operation environment is traceable, and real-time monitoring and warning of violations such as not wearing a safety helmet, not tying a safety rope, illegal use, smoking, and using mobile phones are realized. Face recognition is performed on people entering the area, and voice warnings are implemented for strangers.
  • FIG. 1 is a principle block diagram of a monitoring device for complex environment behavior recognition of the present invention.
  • the monitoring device for complex environment behavior recognition is based on scaffold complex environment behavior recognition technology, which includes a server, a PC computer, a supercomputer, a face recognition camera, a surveillance camera, a behavior recognition module, a face recognition module, an early warning module, and a database.
  • the supercomputer receives information obtained by other modules, the behavior recognition module identifies unsafe behaviors, the face recognition module identifies facial features, and the database stores the results of behavior and face recognition.
  • Both the face recognition camera and the surveillance camera can record surveillance videos, and the convolutional neural network can quickly process information.
  • the supercomputer is connected to a server, a behavior recognition module, a face recognition module and an early warning module, and has an embedded behavior recognition algorithm and a convolutional neural network.
  • the behavior recognition module is connected to an intelligent speaker, a surveillance camera, a PC computer, a large display screen and a supercomputer.
  • the face recognition module is likewise embedded with a convolutional neural network and is connected to the PC computer.
  • the face recognition module includes an infrared lens and a fill light.
  • the early warning module is connected to an intelligent speaker and a surveillance camera, and supports the recognition of the camera IP address to link the early warning signal.
  • the face recognition camera and surveillance camera include infrared sensors, which automatically sense and switch to day mode or night mode.
  • the face recognition camera and surveillance cameras include infrared sensors, automatically sense day and night, automatically filter light spots, and use deep learning models trained on the postures of people in the environment to reduce false alarms.
  • the early warning modules are linked with the corresponding surveillance cameras respectively, and can broadcast different voice information for different abnormal behaviors in different areas.
  • the voice information can be broadcast as an MP3 file or via text-to-speech, and the text content can be edited by the user.
  • the monitoring method for complex environment behavior recognition is used to identify unsafe behaviors of operators, including not wearing a helmet, smoking, playing with mobile phones and not wearing seat belts. Once an operator exhibits unsafe behavior, the warning module is immediately activated and the intelligent speaker in the corresponding area broadcasts the corresponding unsafe behavior voice, which can be played either as a recorded audio file or via the text-to-speech function.
  • the unsafe behavior algorithm is built into the supercomputer, which analyzes the video stream of the monitoring camera in real time; once the confidence of an operator's unsafe behavior reaches the set threshold, a warning event is triggered (an analysis loop sketch follows this section).
  • the supercomputer implementation process includes the following specific steps:
  • Step 2 Training of deep learning algorithm model in complex environment, as shown in Figure 2, the flow chart of the identification method.
  • the deep learning algorithm model marks various unsafe behaviors based on the on-site pictures of the video stream, builds a behavior algorithm model, and embeds the algorithm model into the supercomputer; by adjusting the camera angle, focal length, recognition sensitivity, and accuracy, the recognition effect is optimized.
  • the model can be updated and iterated by increasing the data set, becoming more accurate, and will be continuously optimized through learning in the scaffolding scenario, making the recognition model more suitable for the current scenario.
  • Step 3 Record surveillance video and identify unsafe behaviors.
  • the face recognition module uses a face matching algorithm built into the face recognition camera to add a person's photo to the model database, and add the person's information including name, gender, department, etc. to the model library.
  • the face matching step can be repeated until all targets in the monitoring area are fully monitored, and the face matching step is linked to the large display screen.
  • the supercomputer converts the surveillance video stream into frames of pictures through editing software, traces the motion trajectory according to the monitored object in the surveillance video, depicts the motion trajectory, and establishes a behavior model;
  • Behavior recognition the behavior recognition module recognizes unsafe behaviors of operators based on the video stream generated in real time by the monitoring camera. Once unsafe behaviors are detected, the module will work together with the early warning module to issue early warnings by intelligent speakers in different categories and regions.
  • Backup file creation: the supercomputer stores the surveillance video, audio and captured photos recorded by the surveillance camera on a storage hard disk in the form of a compressed package (a backup sketch follows this section);
  • Step 4 Behavioral judgment and output.
  • Unsafe behavior analysis: the supercomputer is embedded with a behavioral algorithm model that establishes an unsafe behavior data model. It monitors in real time, through the video stream of the surveillance camera, whether operators exhibit any unsafe behavior. Once unsafe behavior is detected, the early warning module is immediately linked to provide real-time voice warnings by area and category, the picture of the unsafe behavior is displayed on the large screen in real time, and the system immediately switches to the surveillance camera that detected the unsafe behavior so that its picture is displayed on the large screen.
  • Figure 2 is a flow chart of the identification method for deep learning algorithm model training in a complex environment.
  • the deep learning algorithm model annotates unsafe behaviors based on the on-site pictures of the video stream, builds a behavioral algorithm model, and embeds the algorithm model into the supercomputer; the model can be updated and iterated by increasing the data set, becoming more accurate, and will be more suitable for the current scenario through continuous optimization.
  • Convolutional neural networks are constructed based on unsafe behaviors, and the basic parameters of the model are designed and set.
  • the generated data sets are then input into the model for training and evaluation.
  • image samples of each type of target violation behavior and of normal behavior in the complex scaffolding environment are collected from video images and manually labeled; the labeled unsafe behaviors are then classified and trained using the algorithm model.
  • An algorithm model that can identify various types of unsafe behaviors is output. Based on the complex environment of the scaffolding, the algorithm is continuously iterated to adapt to the complex environment of the scaffolding.
  • the behavior recognition module is embedded in the supercomputer and performs behavior recognition based on the video stream according to the deep learning algorithm model.
  • a continuously occurring unsafe behavior can be identified in as little as 1 second at the fastest setting, and the required persistence can be set between 1 s and 10 s according to the scenario.
  • the present invention is based on the behavior recognition technology in the complex environment of the scaffold.
  • the built-in behavior recognition algorithm on the supercomputer is a synthetic algorithm, that is, smoking, playing with mobile phones, not wearing a helmet and not wearing a seat belt are combined into one algorithm model, saving the GPU resources of the supercomputer.
  • Deep learning model training based on the on-site environment of the scaffold can greatly increase the accuracy of behavior recognition.
  • the site can issue real-time voice warnings by region and category, and display pictures of the unsafe behavior on the large screen in real time, and immediately switch to the surveillance camera that detected the unsafe behavior.
  • the surveillance camera picture is displayed on the large screen, and there is no need to manually switch the surveillance camera picture.
  • Example 2 includes all the technical features of Example 1.
  • a monitoring device 100 for behavior recognition in complex environments includes a supercomputer 1, a server 2, a monitoring camera 3, a face recognition camera 4, a PC computer 5, a large display screen 6, a switch 7, a behavior recognition module 8, a face recognition module 9 and an early warning module 10;
  • the monitoring camera 3 is connected to the behavior recognition module 8
  • the face recognition camera 4 is connected to the face recognition module 9
  • the supercomputer 1 is connected to the behavior recognition module 8, the switch 7 and the early warning module 10
  • the face recognition module 9, the server 2 and the PC computer 5 are connected to the switch 7, and the large display screen 6 is connected to the PC computer 5.
  • the behavior recognition module 8 has built-in behavior recognition algorithms and convolutional neural networks, which are used to make behavior judgments on the video stream transmitted by the surveillance camera 3, analyze the behaviors in combination with a deep learning model library, and calculate the results.
  • the face recognition module 9 includes a face recognition comparison module 91, which is used to perform face recognition comparison based on the face photo data of the face recognition camera 4, verify the identity information of the person entering the workplace, and link the warning module 10 to identify and warn strangers.
  • the warning module 10 is used to broadcast abnormal behaviors and send display information according to the warning information of the behavior recognition module 8 and the face recognition module 9.
  • the PC computer 5 is used to obtain the display information of the early warning module 10 through the switch 7, and to control the large display screen 6 to present the abnormal behavior contained in the display information.
  • the behavior recognition module 8 includes an abnormal behavior judgment module 81 and an abnormal behavior verification module 82 .
  • the abnormal behavior judgment module 81 is used to judge whether there is abnormal behavior in the workplace through a smoking algorithm, a mobile phone playing algorithm, a helmet-not-wearing algorithm, and a seatbelt-not-fastening algorithm.
  • the abnormal behavior verification module 82 is used to verify the false alarm information of the abnormal behavior judgment information output by the abnormal behavior judgment module 81 .
  • the monitoring device 100 further includes a storage module 11 and a parameter configuration module 12;
  • the storage module 11 is preferably a database set in the supercomputer 1, or a database set in the server 2.
  • the parameter configuration module 12 can be set in the supercomputer 1 or in the server 2, or can be set separately.
  • the storage module 11 is used to receive information transmitted by the switch 7 and store output results of the behavior recognition module 8 and the face recognition module 9 .
  • the parameter configuration module 12 is used to adjust the parameters of the face recognition camera 4 and the monitoring camera 3, and the parameters include angle, focal length, confidence and sensitivity.
  • the face recognition camera 4 and the monitoring camera 3 both include infrared sensors.
  • the face recognition camera 4 includes an infrared lens and a fill light.
  • the early warning module 10 includes a plurality of smart speakers 101 and a display information generation module 102 .
  • the display information generation module 102 is used to edit text content according to the warning information of the behavior recognition module 8 and the face recognition module 9 to generate broadcast information or display information for warning of abnormal behavior.
  • the smart speaker 101 is set in multiple operation areas, and the smart speaker 101 is used to broadcast the broadcast information corresponding to the abnormal behavior in each operation area in the form of text-to-speech.
  • the comparison result of the face recognition comparison performed by the face recognition module 9 is displayed on the large display screen 6, and the display information on the large display screen 6 includes name, gender, snapshot photo and team; when the face recognition module 9 cannot identify the identity information of the workplace personnel and determines that the person is a stranger, the face recognition module 9 links with the warning module 10 to identify and warn the stranger, and a warning is issued to the stranger through the large display screen 6.
  • the convolutional neural network in the behavior recognition module 8 includes an input layer, a convolution layer, a pooling layer, a fully connected layer and an output layer;
  • the training steps of the convolutional neural network include: image samples of each type of target violation behavior and of normal behavior in the complex environment are collected from video images and manually labeled; the labeled samples are input into the deep learning algorithm model for classification training on unsafe behaviors; and training is iterated until convergence, yielding an algorithm model that can identify the various types of unsafe behaviors as the convolutional neural network.
  • the present application also provides a monitoring method for complex environment behavior recognition, comprising the following steps:
  • the steps of recording surveillance video and simultaneously identifying unsafe behaviors include:
  • the monitoring camera 3 transmits the monitoring video of each working area in the working field to the supercomputer 1 and stores it;
  • S32 a step of establishing a face recognition database, storing the identity information of the personnel in the workplace and the photos of the personnel in the face recognition database, wherein the identity information of the personnel includes name, gender and team, and the face recognition module 9 performs face recognition comparison based on the face photo data of the face recognition camera 4 to verify the identity information of the personnel entering the workplace until all target personnel in the monitored area are monitored, and displays the comparison results in conjunction with the display screen 6;
  • S33 a step of recording a motion trajectory
  • the supercomputer 1 converts the surveillance video stream into frames of pictures through editing software, performs motion trajectory outlining according to the monitored object in the surveillance video, depicts the motion trajectory, and establishes a behavior model to record the motion trajectory;
  • the behavior recognition module 8 recognizes the unsafe behavior of operators according to the video stream generated by the monitoring camera in real time, and when unsafe behavior is detected, it links with the early warning module 10 to issue warnings by category and area;
  • the supercomputer 1 saves the surveillance video, audio and captured photos recorded by the surveillance camera 3 in the form of a compressed package.
  • the step of continuously performing behavior judgment and outputting judgment results includes:
  • the face recognition camera 4 compares the acquired facial features with the data in the face recognition database one by one to confirm the personnel information, outputs the face image with the highest degree of conformity in the face recognition database as the personnel information result of the personnel comparison, and stores the captured personnel image in the database, and determines whether to allow entry based on the compared personnel information result. If the face recognition module 9 cannot recognize the identity information of the workplace personnel, it is determined to be a stranger, and it is linked with the warning module 10 to issue a voice warning;
  • unsafe behavior analysis step the supercomputer 1 monitors in real time whether the operator has any unsafe behavior through the video stream of the monitoring camera.
  • when unsafe behavior is detected, the early warning module is linked to give real-time voice warnings by area and category, the picture of the unsafe behavior is displayed on the large display screen 6 in real time, and the supercomputer 1 switches to the monitoring camera 3 that detected the unsafe behavior so that its picture is displayed on the large display screen 6.
  • the present invention is based on behavior recognition technology for the complex environment of scaffolding.
  • the behavior recognition algorithm built into the supercomputer is a composite algorithm, i.e. the algorithms for smoking, mobile phone use, not wearing a helmet and not wearing a seat belt are combined into one algorithm model, which effectively identifies abnormal behaviors.
  • the behavior recognition module is embedded with a behavior recognition algorithm and a convolutional neural network, which is used to make behavioral judgments on the video stream transmitted by the surveillance camera, and analyzes the behavior in combination with a deep learning model library, which can greatly increase the accuracy of behavior recognition.
  • the site can provide real-time voice warnings classified by region and category, and display abnormal behaviors such as pictures of unsafe behaviors on the large screen in real time, thereby improving the inspection efficiency of patrol personnel, truly achieving the first-time discovery, disposal and resolution of unsafe behaviors at the construction site, and realizing intelligent recognition of multiple unsafe behaviors in complex environments.
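
The sketches below illustrate how several of the steps described above could be realized in code; they are minimal sketches under stated assumptions, not the implementation disclosed in this application. First, a minimal sketch of the convolutional neural network (input, convolution, pooling, fully connected and output layers) trained on manually labeled violation and normal images, assuming PyTorch, five behavior classes and 224x224 inputs:

```python
# Minimal sketch of a behavior-classification CNN; class list and sizes are assumptions.
import torch
import torch.nn as nn

CLASSES = ["normal", "smoking", "phone_use", "no_helmet", "no_seat_belt"]  # assumed labels

class BehaviorCNN(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(                 # convolution + pooling layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(               # fully connected + output layers
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                              # x: (N, 3, 224, 224) input layer
        return self.classifier(self.features(x))

def train(model, loader, epochs=10, lr=1e-3):
    """Iterate classification training on labeled unsafe/normal images toward convergence."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:                  # loader yields manually labeled samples
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
```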
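
A frame-and-trajectory sketch for the step in which the supercomputer converts the video stream into individual frames and outlines the motion trajectory of the monitored object; background subtraction with contour centroids is one conventional approach and is assumed here, since the description does not name a specific method:

```python
# Sketch: split a video into frames and record a rough motion trajectory (centroid per frame).
import cv2

def record_trajectory(video_path):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    trajectory = []                                       # list of (frame_index, x, y)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                    # foreground mask of moving objects
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)  # assume the largest blob is the target
            m = cv2.moments(largest)
            if m["m00"] > 0:
                trajectory.append((index, int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
        index += 1
    cap.release()
    return trajectory
```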
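
A matching sketch for the face recognition comparison step: the captured face is compared one by one with the registered personnel in the face recognition database and the best match is returned, while a low score marks the person as a stranger. The embedding representation and the similarity threshold are assumptions, as the disclosure does not specify the matching algorithm:

```python
# Sketch: match a captured face embedding against the registered-personnel database.
import numpy as np

MATCH_THRESHOLD = 0.6   # assumed minimum cosine similarity for a valid identification

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(captured_embedding, database):
    """database: {person_id: {"name": ..., "gender": ..., "group": ..., "embedding": np.ndarray}}"""
    best_id, best_score = None, -1.0
    for person_id, record in database.items():
        score = cosine_similarity(captured_embedding, record["embedding"])
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score < MATCH_THRESHOLD:
        return None, best_score            # unidentified: treat as a stranger and warn
    return database[best_id], best_score   # best match: name, gender, group shown on screen
```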
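
An analysis loop sketch for the real-time unsafe behavior monitoring: a warning event is triggered once the confidence of a detected behavior reaches the set threshold and the behavior persists for the configurable 1 s to 10 s window. The stream URL, the `detect_unsafe_behavior` function and the `trigger_warning` hook are placeholders, not part of the disclosure:

```python
# Sketch of the per-camera analysis loop; threshold and persistence values are assumptions.
import time
import cv2  # OpenCV, used here to read the camera video stream

CONFIDENCE_THRESHOLD = 0.8   # assumed detection threshold
PERSISTENCE_SECONDS = 3.0    # configurable 1-10 s persistence before a warning event

def analyze_stream(rtsp_url, detect_unsafe_behavior, trigger_warning):
    cap = cv2.VideoCapture(rtsp_url)
    first_seen = {}                                    # behavior label -> time first detected
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        detections = detect_unsafe_behavior(frame)     # placeholder: [(label, confidence), ...]
        active = {label for label, conf in detections if conf >= CONFIDENCE_THRESHOLD}
        for label in active:
            first_seen.setdefault(label, now)
            if now - first_seen[label] >= PERSISTENCE_SECONDS:
                trigger_warning(label, frame)          # link the early warning module
                first_seen[label] = now                # avoid re-triggering on every frame
        for label in list(first_seen):
            if label not in active:
                del first_seen[label]                  # behavior stopped; reset its timer
    cap.release()
```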
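
A broadcast sketch for the early warning module: warning text is generated for the detected behavior and operation area and played back via text-to-speech on the speaker of that area. The pyttsx3 library is used only as one possible offline text-to-speech backend, and the message templates are assumptions; the same text string could be forwarded to the PC computer for display on the large screen:

```python
# Sketch: generate warning text for an area and speak it via text-to-speech.
import pyttsx3

WARNING_TEXT = {   # assumed per-behavior message templates
    "smoking": "Smoking detected in {area}. Please extinguish immediately.",
    "phone_use": "Mobile phone use detected in {area}. Please stop and focus on the task.",
    "no_helmet": "Worker without a safety helmet detected in {area}.",
    "no_seat_belt": "Worker without a safety belt detected in {area}.",
    "stranger": "Unregistered person detected in {area}. Please report to the gate.",
}

def broadcast_warning(behavior, area):
    text = WARNING_TEXT.get(behavior, "Abnormal behavior detected in {area}.").format(area=area)
    engine = pyttsx3.init()      # offline text-to-speech engine
    engine.say(text)
    engine.runAndWait()
    return text                  # also usable as display information for the large screen
```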
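
A configuration sketch for the parameter configuration module, which adjusts the angle, focal length, confidence and sensitivity of the face recognition camera and the surveillance camera; the field names and the JSON persistence are assumptions:

```python
# Sketch: per-camera parameter record handled by the parameter configuration module.
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraConfig:
    camera_ip: str          # cameras are linked to warnings by IP address
    pan_angle_deg: float    # viewing angle
    focal_length_mm: float
    confidence: float       # detection confidence threshold
    sensitivity: int        # recognition sensitivity level

def save_config(configs, path="camera_config.json"):
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(c) for c in configs], f, indent=2)
```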
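
Finally, a backup sketch for the file creation step, in which the recorded surveillance video, audio and captured photos are stored on a storage hard disk in the form of a compressed package; the directory layout is an assumption:

```python
# Sketch: archive recordings (video, audio, snapshots) into one compressed package.
import zipfile
from pathlib import Path

def create_backup(source_dir, archive_path):
    source = Path(source_dir)
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for file in source.rglob("*"):
            if file.is_file():
                zf.write(file, file.relative_to(source))  # keep the folder structure
```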

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a monitoring apparatus and method for performing behavior recognition in a complex environment. The apparatus comprises a supercomputer, a server, a surveillance camera, a face recognition camera, a PC, a large display screen, a switch, a behavior recognition module, a face recognition module and an early warning module. The face recognition module is used to perform face recognition and comparison according to face photo data from the face recognition camera and to verify the identity information of a person entering the workplace, and is linked to the early warning module to recognize a stranger and issue an early warning; the early warning module is used to broadcast abnormal behavior according to early warning information from the behavior recognition module and the face recognition module, and to send display information; and the PC is used to acquire the display information from the early warning module by means of the switch, and to control the large display screen to present the abnormal behavior contained in the display information. When unsafe behavior occurs, a real-time voice warning is issued by region and category, thereby improving inspection efficiency.
PCT/CN2023/115727 2022-12-13 2023-08-30 Monitoring apparatus and method for performing behavior recognition in a complex environment WO2024124970A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211596339.3 2022-12-13
CN202211596339.3A CN116129490A (zh) 2022-12-13 2022-12-13 Monitoring device and monitoring method for complex environment behavior recognition

Publications (1)

Publication Number Publication Date
WO2024124970A1 (fr)

Family

ID=86299956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115727 WO2024124970A1 (fr) 2022-12-13 2023-08-30 Monitoring apparatus and method for performing behavior recognition in a complex environment

Country Status (2)

Country Link
CN (1) CN116129490A (fr)
WO (1) WO2024124970A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129490A (zh) * 2022-12-13 2023-05-16 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所) Monitoring device and monitoring method for complex environment behavior recognition
CN118072375B (zh) * 2024-04-17 2024-06-21 云燕科技(山东)股份有限公司 Face image acquisition system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176117A (zh) * 2019-06-17 2019-08-27 广东翔翼科技信息有限公司 Monitoring device and monitoring method based on behavior recognition technology
CN114071070A (zh) * 2020-08-08 2022-02-18 徐州市五岳通信科技有限公司 Dynamic monitoring and early warning system based on smart face recognition and augmented reality
CN114926778A (zh) * 2022-04-18 2022-08-19 中航西安飞机工业集团股份有限公司 Safety helmet and personnel identity recognition system for a production environment
CN116129490A (zh) * 2022-12-13 2023-05-16 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所) Monitoring device and monitoring method for complex environment behavior recognition

Also Published As

Publication number Publication date
CN116129490A (zh) 2023-05-16
