CN112580419A - Human skeletonization frame-making identification method based on video stream data
- Publication number: CN112580419A
- Application number: CN202010787531.5A
- Authority: CN (China)
- Prior art keywords: module, audio, video, wireless, human
- Prior art date
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Alarm Systems (AREA)
Abstract
The invention discloses a human skeletonization frame-making identification method based on video stream data, which specifically comprises the following steps: S1, acquiring video image and audio data; S2, identifying and judging fighting behavior; S3, monitoring fighting behavior and carrying out subsequent processing. The invention relates to the technical field of video detection. The human skeletonization frame-making identification method based on video stream data can monitor the arm-lifting behavior, emotional changes and continuous actions of people in the video picture through the intelligent monitoring module, process the audio data in the area, and, through subsequent algorithm detection and analysis processing, effectively and automatically analyze whether the people in the video picture exhibit abnormal behavior.
Description
Technical Field
The invention relates to the technical field of video monitoring, in particular to a human skeletonization frame-making identification method based on video stream data.
Background
In the existing management mode of a monitoring center, abnormal behavior of personnel is discovered manually. Manual discovery and processing are inefficient, abnormal behavior is not caught every time, considerable manpower and material resources are consumed, and actual management needs are poorly served.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a human skeletonization frame-making identification method based on video stream data, which solves the problems of low efficiency and wasted manpower and material resources when abnormal behaviors are managed manually.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a human skeletonization frame-making identification method based on video stream data specifically comprises the following steps:
S1, acquiring video image and audio data: firstly, an intelligent camera is arranged in the area to be identified and monitored, and the whole area is monitored through the intelligent monitoring unit in the data acquisition system; the arm-lifting behavior monitoring module in the device monitors whether an arm-lifting behavior occurs in the video picture, the facial expression obtaining module obtains the changes in facial expression appearing in the video picture, and the continuous action monitoring module monitors the continuous action pictures of the people in the picture; the intelligent monitoring unit sends the collected picture information to the RGB conversion module, where the video picture information undergoes RGB conversion into a corresponding color image and is sent to the cutting module; the audio acquisition module acquires the audio information of the current place, which is preprocessed by the audio preprocessing module and then sent to the cutting module;
S2, identification and judgment of fighting behavior: the video picture information and the audio information collected in S1 are respectively cut in the cutting module to obtain behavior video and audio containing complete human body behaviors, which are then sent through the central processing system to the storage module for storage; a similarity threshold is set in the algorithm detection module, the acquired audio information and the fighting characteristic audio in the audio database are sent to the analysis and comparison module for similarity comparison, and whether the similarity reaches the threshold is judged; if the audio is judged to be fighting audio, an alarm is issued through the alarm module;
S3, monitoring of fighting behavior and subsequent processing: the video pictures and the confirmed fighting audio information stored in the storage module are retrieved through the data retrieval and extraction module and sent to the monitoring center through the wireless transmission module for managers to check; the confirmed fighting audio information is sent through the automatic updating module to the audio database for storage, enriching the reference standards of the fighting data; and the monitoring personnel learn the position of the place where the fighting behavior occurred at the first time through the GPS positioning system, facilitating subsequent management work.
Preferably, the data acquisition system in step 1 comprises an intelligent monitoring unit and an audio acquisition module, the output end of the audio acquisition module is electrically connected with the input end of the audio preprocessing module through a wire, the output end of the intelligent monitoring unit is electrically connected with the input end of the RGB conversion module through a wire, and the output ends of the RGB conversion module and the audio preprocessing module are respectively electrically connected with the input end of the cutting module through wires.
Preferably, the intelligent monitoring unit in step 1 includes an arm-lifting behavior monitoring module, a facial expression obtaining module and a continuous action monitoring module.
Preferably, the input end of the central processing system in step 2 is electrically connected with the output end of the cutting module through a wire, and the output end of the central processing system is electrically connected with the input end of the analysis and comparison module through a wire.
Preferably, the analysis and comparison module in step 2 is in bidirectional connection with the algorithm detection module through a wireless connection, and the algorithm detection module is in bidirectional connection with the central processing system through a wireless connection.
Preferably, the alarm module in step 2 is in bidirectional connection with the central processing system through a wireless connection, and the central processing system is in bidirectional connection with the storage module through a wireless connection.
Preferably, the data retrieval and extraction module in step 3 is in bidirectional connection with the central processing system through a wireless connection, the central processing system is in bidirectional connection with the wireless transmission module through a wireless connection, and the wireless transmission module is in bidirectional connection with the monitoring center through a wireless connection.
Preferably, the audio database in step 3 is in bidirectional connection with the central processing system through a wireless connection, the audio database is in bidirectional connection with the automatic updating module through a wireless connection, and the GPS positioning system is in bidirectional connection with the central processing system through a wireless connection.
(III) advantageous effects
The invention provides a human skeletonization frame-making identification method based on video stream data. The method has the following beneficial effects: through steps S1 (acquisition of video image and audio data), S2 (identification and judgment of fighting behavior) and S3 (monitoring of fighting behavior and subsequent processing) as set forth above, the intelligent monitoring module monitors the arm-lifting behavior, emotional changes and continuous actions of the people appearing in the video picture and processes the audio data appearing in the area, and the subsequent algorithm detection and analysis processing effectively and automatically determines whether the people in the video picture exhibit abnormal behavior. This intelligent monitoring mode greatly improves monitoring efficiency without consuming a large amount of manpower and material resources, greatly reduces certain potential safety hazards while meeting the basic conditions of practical application, and therefore has wide utilization value.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic block diagram of the architecture of the system of the present invention;
FIG. 3 is a schematic block diagram of the structure of the intelligent monitoring unit of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, an embodiment of the present invention provides a technical solution: a human skeletonization frame-making identification method based on video stream data specifically comprises the following steps:
S1, acquiring video image and audio data: firstly, an intelligent camera is arranged in the area to be identified and monitored, and the whole area is monitored through the intelligent monitoring unit in the data acquisition system; the arm-lifting behavior monitoring module in the device monitors whether an arm-lifting behavior occurs in the video picture, the facial expression obtaining module obtains the changes in facial expression appearing in the video picture, and the continuous action monitoring module monitors the continuous action pictures of the people in the picture; the intelligent monitoring unit sends the collected picture information to the RGB conversion module, where the video picture information undergoes RGB conversion into a corresponding color image and is sent to the cutting module; the audio acquisition module acquires the audio information of the current place, which is preprocessed by the audio preprocessing module and then sent to the cutting module;
S2, identification and judgment of fighting behavior: the video picture information and the audio information collected in S1 are respectively cut in the cutting module to obtain behavior video and audio containing complete human body behaviors, which are then sent through the central processing system to the storage module for storage; a similarity threshold is set in the algorithm detection module, the acquired audio information and the fighting characteristic audio in the audio database are sent to the analysis and comparison module for similarity comparison, and whether the similarity reaches the threshold is judged; if the audio is judged to be fighting audio, an alarm is issued through the alarm module;
S3, monitoring of fighting behavior and subsequent processing: the video pictures and the confirmed fighting audio information stored in the storage module are retrieved through the data retrieval and extraction module and sent to the monitoring center through the wireless transmission module for managers to check; the confirmed fighting audio information is sent through the automatic updating module to the audio database for storage, enriching the reference standards of the fighting data; and the monitoring personnel learn the position of the place where the fighting behavior occurred at the first time through the GPS positioning system, facilitating subsequent management work.
In the invention, the data acquisition system in step 1 comprises an intelligent monitoring unit and an audio acquisition module, wherein the output end of the audio acquisition module is electrically connected with the input end of the audio preprocessing module through a wire, the output end of the intelligent monitoring unit is electrically connected with the input end of the RGB conversion module through a wire, and the output ends of the RGB conversion module and the audio preprocessing module are respectively electrically connected with the input end of the cutting module through wires.
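As an illustration of the step S1 acquisition and RGB-conversion path, the following is a minimal sketch assuming an OpenCV-readable camera stream; the library choice, the fixed clip length used for "cutting" and the function names are assumptions of this sketch and are not specified by the patent.

```python
# Minimal sketch of the S1 acquisition stage, assuming an OpenCV-readable
# camera stream; the fixed clip length is an assumption, since the patent
# does not specify how the cutting module delimits segments.

import cv2  # pip install opencv-python

CLIP_LEN = 150  # frames per cut segment (~5 s at 30 fps), an illustrative value

def acquire_clips(source=0):
    """Yield lists of RGB frames, one list per cut video segment."""
    cap = cv2.VideoCapture(source)
    clip = []
    try:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            # RGB conversion module: OpenCV delivers BGR, convert to RGB.
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            clip.append(frame_rgb)
            if len(clip) == CLIP_LEN:
                yield clip  # hand a complete segment on to the analysis stage
                clip = []
    finally:
        cap.release()
```

Each yielded clip would then be passed, together with the matching preprocessed audio segment, to the subsequent behavior analysis.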
In the invention, the intelligent monitoring unit in step 1 comprises an arm-lifting behavior monitoring module, a facial expression obtaining module and a continuous action monitoring module.
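The patent does not detail how the arm-lifting behavior monitoring module operates internally; the sketch below is one possible reading under the assumption that a separate pose estimator has already produced 2D skeleton keypoints for each person, with illustrative keypoint names and margin value.

```python
# Minimal sketch of an arm-lifting check on 2D pose keypoints.
# Assumes keypoints come from an external pose estimator as (x, y) pixel
# coordinates with the origin at the top-left, so a smaller y means "higher".

from typing import Dict, Tuple

Point = Tuple[float, float]

def is_arm_lifted(keypoints: Dict[str, Point], margin: float = 10.0) -> bool:
    """Return True if either wrist is raised clearly above its shoulder."""
    for side in ("left", "right"):
        shoulder = keypoints.get(f"{side}_shoulder")
        wrist = keypoints.get(f"{side}_wrist")
        if shoulder is None or wrist is None:
            continue  # keypoint missing in this frame
        if wrist[1] < shoulder[1] - margin:  # wrist higher than shoulder
            return True
    return False

# Example: a skeleton with the right wrist raised above the right shoulder.
frame_keypoints = {
    "left_shoulder": (320.0, 240.0), "left_wrist": (300.0, 330.0),
    "right_shoulder": (380.0, 240.0), "right_wrist": (395.0, 180.0),
}
print(is_arm_lifted(frame_keypoints))  # True
```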
In the invention, the input end of the central processing system in step 2 is electrically connected with the output end of the cutting module through a wire, and the output end of the central processing system is electrically connected with the input end of the analysis and comparison module through a wire; the central processing system is an ultra-large-scale integrated circuit that serves as the operation core and control core of a computer, and its main functions are to interpret computer instructions and to process the data in computer software.
In the invention, the analysis and comparison module in step 2 is in bidirectional connection with the algorithm detection module through a wireless connection, and the algorithm detection module is in bidirectional connection with the central processing system through a wireless connection.
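The patent leaves the audio features and the similarity measure of step S2 unspecified; the sketch below assumes each audio segment has already been reduced to a fixed-length feature vector (for example, averaged MFCCs) and uses cosine similarity against stored fighting characteristic audio vectors, with an illustrative threshold of 0.85.

```python
# Minimal sketch of the step S2 audio comparison, under the assumption that
# audio segments and stored fighting characteristic audio are represented as
# fixed-length feature vectors; the cosine-similarity measure and the 0.85
# threshold are illustrative choices, not values given in the patent.

from typing import List

import numpy as np

SIMILARITY_THRESHOLD = 0.85  # threshold "set in the algorithm detection module"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_fighting_audio(segment_vec: np.ndarray, fighting_templates: List[np.ndarray]) -> bool:
    """Compare one audio segment against the stored fighting characteristic audio."""
    best = max(cosine_similarity(segment_vec, t) for t in fighting_templates)
    return best >= SIMILARITY_THRESHOLD

# Example with placeholder vectors standing in for real audio features.
rng = np.random.default_rng(0)
templates = [rng.normal(size=13) for _ in range(3)]
segment = templates[0] + rng.normal(scale=0.05, size=13)  # close to one template
if is_fighting_audio(segment, templates):
    print("alarm: fighting audio detected")  # stand-in for the alarm module
```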
In the invention, the alarm module in step 2 is in bidirectional connection with the central processing system through a wireless connection, the central processing system is in bidirectional connection with the storage module through a wireless connection, and an alarm is arranged in the alarm module.
In the invention, the data retrieval and extraction module in step 3 is in bidirectional connection with the central processing system through a wireless connection, the central processing system is in bidirectional connection with the wireless transmission module through a wireless connection, and the wireless transmission module is in bidirectional connection with the monitoring center through a wireless connection; the wireless transmission module performs wireless transmission by means of wireless technology, is widely applied in computer wireless networks, wireless communication, wireless control and other fields, and mainly consists of a transmitter, a receiver and a controller.
In the invention, the audio database in step 3 is in bidirectional connection with the central processing system through a wireless connection, the audio database is in bidirectional connection with the automatic updating module through a wireless connection, and the GPS positioning system is in bidirectional connection with the central processing system through a wireless connection; a system that uses GPS positioning satellites for real-time positioning and navigation on a global scale is called a global positioning system, GPS for short.
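As an illustration of the step S3 follow-up, the sketch below uses an SQLite file as a stand-in for the audio database and packages the alert, the GPS position and a clip reference into a JSON report for the monitoring center; the transport, file paths, table and field names, and the example coordinates are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of the S3 bookkeeping: store confirmed fighting audio for
# future comparisons and build a report for the monitoring center. The SQLite
# schema, paths and coordinates are illustrative; the actual transmission over
# the wireless transmission module is left abstract.

import json
import sqlite3
import time

def update_audio_database(db_path: str, audio_feature_blob: bytes) -> None:
    """Automatic updating module: add confirmed fighting audio features to the database."""
    con = sqlite3.connect(db_path)
    with con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS fighting_audio (added_at REAL, features BLOB)"
        )
        con.execute(
            "INSERT INTO fighting_audio (added_at, features) VALUES (?, ?)",
            (time.time(), audio_feature_blob),
        )
    con.close()

def build_monitoring_report(camera_id: str, lat: float, lon: float, clip_path: str) -> str:
    """Package the alert, GPS position and stored clip reference for the monitoring center."""
    return json.dumps({
        "event": "fighting_behavior",
        "camera_id": camera_id,
        "position": {"lat": lat, "lon": lon},  # from the GPS positioning system
        "clip": clip_path,
        "timestamp": time.time(),
    })

# Example report; it would be handed to the wireless transmission module.
print(build_monitoring_report("cam-01", 31.2304, 121.4737, "/storage/clips/0001.mp4"))
```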
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (8)
1. A human skeletonization frame-making identification method based on video stream data is characterized in that: the method specifically comprises the following steps:
S1, acquiring video image and audio data: firstly, an intelligent camera is arranged in the area to be identified and monitored, and the whole area is monitored through the intelligent monitoring unit in the data acquisition system; the arm-lifting behavior monitoring module in the device monitors whether an arm-lifting behavior occurs in the video picture, the facial expression obtaining module obtains the changes in facial expression appearing in the video picture, and the continuous action monitoring module monitors the continuous action pictures of the people in the picture; the intelligent monitoring unit sends the collected picture information to the RGB conversion module, where the video picture information undergoes RGB conversion into a corresponding color image and is sent to the cutting module; the audio acquisition module acquires the audio information of the current place, which is preprocessed by the audio preprocessing module and then sent to the cutting module;
S2, identification and judgment of fighting behavior: the video picture information and the audio information collected in S1 are respectively cut in the cutting module to obtain behavior video and audio containing complete human body behaviors, which are then sent through the central processing system to the storage module for storage; a similarity threshold is set in the algorithm detection module, the acquired audio information and the fighting characteristic audio in the audio database are sent to the analysis and comparison module for similarity comparison, and whether the similarity reaches the threshold is judged; if the audio is judged to be fighting audio, an alarm is issued through the alarm module;
S3, monitoring of fighting behavior and subsequent processing: the video pictures and the confirmed fighting audio information stored in the storage module are retrieved through the data retrieval and extraction module and sent to the monitoring center through the wireless transmission module for managers to check; the confirmed fighting audio information is sent through the automatic updating module to the audio database for storage, enriching the reference standards of the fighting data; and the monitoring personnel learn the position of the place where the fighting behavior occurred at the first time through the GPS positioning system, facilitating subsequent management work.
2. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the data acquisition system in step 1 comprises an intelligent monitoring unit and an audio acquisition module, wherein the output end of the audio acquisition module is electrically connected with the input end of the audio preprocessing module through a wire, the output end of the intelligent monitoring unit is electrically connected with the input end of the RGB conversion module through a wire, and the output ends of the RGB conversion module and the audio preprocessing module are respectively electrically connected with the input end of the cutting module through wires.
3. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the intelligent monitoring unit in step 1 comprises an arm-lifting behavior monitoring module, a facial expression obtaining module and a continuous action monitoring module.
4. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the input end of the central processing system in step 2 is electrically connected with the output end of the cutting module through a wire, and the output end of the central processing system is electrically connected with the input end of the analysis and comparison module through a wire.
5. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the analysis and comparison module in step 2 is in bidirectional connection with the algorithm detection module through a wireless connection, and the algorithm detection module is in bidirectional connection with the central processing system through a wireless connection.
6. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the alarm module in step 2 is in bidirectional connection with the central processing system through a wireless connection, and the central processing system is in bidirectional connection with the storage module through a wireless connection.
7. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the data retrieval and extraction module in step 3 is in bidirectional connection with the central processing system through a wireless connection, the central processing system is in bidirectional connection with the wireless transmission module through a wireless connection, and the wireless transmission module is in bidirectional connection with the monitoring center through a wireless connection.
8. The human skeletonization frame-making identification method based on video stream data as claimed in claim 1, characterized in that: the audio database in step 3 is in bidirectional connection with the central processing system through a wireless connection, the audio database is in bidirectional connection with the automatic updating module through a wireless connection, and the GPS positioning system is in bidirectional connection with the central processing system through a wireless connection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010787531.5A (CN112580419A, en) | 2020-08-06 | 2020-08-06 | Human skeletonization frame-making identification method based on video stream data
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010787531.5A (CN112580419A, en) | 2020-08-06 | 2020-08-06 | Human skeletonization frame-making identification method based on video stream data
Publications (1)
Publication Number | Publication Date
---|---
CN112580419A (en) | 2021-03-30
Family
- Family ID: 75120270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010787531.5A (CN112580419A, en, withdrawn) | Human skeletonization frame-making identification method based on video stream data | 2020-08-06 | 2020-08-06
Country Status (1)
Country | Link
---|---
CN (1) | CN112580419A (en)
Events
- 2020-08-06: Application CN202010787531.5A filed in CN, published as CN112580419A (en), status not active (withdrawn)
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20210330