CN111259839A - Target object behavior monitoring method, device, equipment, system and storage medium - Google Patents


Info

Publication number
CN111259839A
CN111259839A (application CN202010067607.7A)
Authority
CN
China
Prior art keywords
target object
behavior
image
key points
characteristic parameters
Prior art date
Legal status
Pending
Application number
CN202010067607.7A
Other languages
Chinese (zh)
Inventor
邓立保
Current Assignee
Core Elevator Zhonghe Technology Service Co Ltd
Original Assignee
Core Elevator Zhonghe Technology Service Co Ltd
Priority date
Filing date
Publication date
Application filed by Core Elevator Zhonghe Technology Service Co Ltd filed Critical Core Elevator Zhonghe Technology Service Co Ltd
Priority to CN202010067607.7A
Publication of CN111259839A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks


Abstract

The invention discloses a target object behavior monitoring method, device, equipment, system and storage medium. The target object behavior monitoring method comprises the following steps: acquiring an image containing a target object and extracting feature parameters from the image; determining, according to the feature parameters, the positions of the key points in the image and the connection relations between them, and connecting the key points according to those relations to form a posture skeleton of the target object; and recognizing the posture skeleton of the target object to determine its behavior. Because the behavior of the target object is obtained automatically by identifying the posture skeleton, the method solves the prior-art problem that monitoring abnormal behavior requires manual analysis and interpretation.

Description

Target object behavior monitoring method, device, equipment, system and storage medium
Technical Field
The invention relates to the technical field of intelligent monitoring, in particular to a target object behavior monitoring method, device, equipment, system and storage medium.
Background
Current video monitoring systems have made great progress in functionality and performance, but they also show some shortcomings in use. Existing video monitoring systems are mostly used for purposes such as intrusion alarms and motion detection; when abnormal behavior such as fighting appears in the video, it is usually analyzed and judged manually, which requires security personnel to watch the monitoring video feeds at all times.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, device, equipment, system and storage medium for monitoring the behavior of a target object, so as to solve the prior-art problem that monitoring abnormal behavior requires manual analysis and interpretation.
According to a first aspect, an embodiment of the present invention provides a target object behavior monitoring method, including the following steps:
acquiring an image containing a target object, and extracting feature parameters from the image;
determining the positions of the key points in the image and the connection relations between them according to the feature parameters, and connecting the key points together according to those relations to form a posture skeleton of the target object; wherein a key point is a joint with a degree of freedom on the target object;
and recognizing the posture skeleton of the target object to determine the behavior of the target object.
The target object behavior monitoring method provided by the embodiment of the invention extracts feature parameters from an image containing a target object, obtains the positions of the key points in the image and the connection relations between them according to the feature parameters, forms the posture skeleton of the target object from these, and obtains the behavior of the target object by identifying the posture skeleton, thereby solving the prior-art problem that monitoring abnormal behavior requires manual analysis and interpretation.
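The three steps of the method can be wired together as in the following minimal Python sketch; all function names and the placeholder bodies are illustrative, not taken from the patent, and each stub stands in for the neural network described later:

```python
# Minimal sketch of the three-step monitoring pipeline.
# Function names and placeholder bodies are illustrative only.

def extract_features(image):
    """Step 1: extract feature parameters (e.g. with a CNN backbone)."""
    return {"features": image}  # placeholder

def build_pose_skeleton(features):
    """Step 2: locate key points and connect them into a posture skeleton."""
    keypoints = {"neck": (0, 0), "shoulder": (1, 0)}   # placeholder positions
    connections = [("neck", "shoulder")]               # placeholder relations
    return {"keypoints": keypoints, "edges": connections}

def recognize_behavior(skeleton):
    """Step 3: classify the skeleton (e.g. with a spatio-temporal GCN)."""
    return "normal" if skeleton["edges"] else "unknown"

def monitor(image):
    return recognize_behavior(build_pose_skeleton(extract_features(image)))

print(monitor(None))  # → normal
```

In a real system each stub would be replaced by the trained networks described in the embodiments below.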
With reference to the first aspect, in a first implementation manner of the first aspect, the extracting feature parameters in the image includes:
inputting the image into a preset first convolutional neural network, and extracting the feature parameters in the image by using the first convolutional neural network.
With reference to the first aspect, in a second implementation manner of the first aspect, the determining the positions of the key points in the image and the connection relationship between the key points according to the feature parameters includes:
inputting the feature parameters into a preset second convolutional neural network to obtain the positions of the key points in the image and the connection relations between them; wherein the second convolutional neural network comprises a plurality of stages in series, and each stage comprises a plurality of branches.
With reference to the first aspect, in a third implementation manner of the first aspect, the recognizing the posture skeleton of the target object and determining the behavior of the target object includes:
inputting the posture skeleton of the target object into a preset space-time graph convolutional network model for recognition to obtain the behavior of the target object; the space-time graph convolutional network model comprises at least two layers of space-time graph convolutional networks, and adjacent layers are connected by residual links.
With reference to the first aspect and the first to third embodiments of the first aspect, in a fourth embodiment of the first aspect, after determining the behavior of the target object, the method further includes:
and judging whether the behavior of the target object is abnormal or not, and sending an alarm signal when the behavior of the target object is abnormal.
With reference to the first aspect and the first to third embodiments of the first aspect, in a fifth embodiment of the first aspect, before extracting the feature parameters in the image, the method further includes:
preprocessing the image, wherein the preprocessing comprises one or more of the following: data labeling, bilinear interpolation, image flipping, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment and image standard normalization.
With reference to the first to third embodiments of the first aspect, in a sixth embodiment of the first aspect, before extracting the feature parameters in the image, the method further includes:
and training the first convolutional neural network, the second convolutional neural network and the space-time graph convolutional network model by using a target object behavior database.
According to a second aspect, an embodiment of the present invention provides a target object behavior monitoring apparatus, including:
the extraction module is used for acquiring an image containing a target object and extracting feature parameters from the image;
the posture skeleton forming module is used for determining the positions of the key points in the image and the connection relations between them according to the feature parameters, and connecting the key points together according to those relations to form a posture skeleton of the target object; wherein a key point is a joint with a degree of freedom on the target object;
and the behavior recognition module is used for recognizing the posture skeleton of the target object and determining the behavior of the target object.
With reference to the second aspect, in a first implementation manner of the second aspect, the target object behavior monitoring apparatus further includes an alarm module, configured to determine whether a behavior of the target object is abnormal, and send an alarm signal when the behavior of the target object is abnormal.
According to a third aspect, an embodiment of the present invention provides a target object behavior monitoring server, including an image collector, a memory and a processor that are communicatively connected to each other, wherein the memory stores computer instructions and the processor executes the computer instructions to perform the target object behavior monitoring method described in the first aspect or any one of its implementation manners.
According to a fourth aspect, an embodiment of the present invention provides a target object behavior monitoring system, including a camera, the target object behavior monitoring server of the third aspect and a security terminal, wherein the camera and the security terminal are both connected to the target object behavior monitoring server;
the camera is used for capturing an image containing a target object and sending the image to the target object behavior monitoring server;
and the security terminal is used for acquiring the behavior/alarm signal of the target object from the target object behavior monitoring server.
With reference to the fourth aspect, in a first implementation manner of the fourth aspect, the system further includes a load balancing device;
the load balancing device is used for receiving the image information sent by the camera, performing load balancing processing on it, and then forwarding it to the target object behavior monitoring server.
With reference to the first implementation manner of the fourth aspect, in a second implementation manner of the fourth aspect, the load balancing device includes a main nginx RTSP load balancer and a secondary nginx RTSP load balancer.
With reference to the fourth aspect, in a third implementation manner of the fourth aspect, the system further includes a storage device, and the storage device is configured to receive and store the image information sent by the camera.
With reference to the third embodiment of the fourth aspect, in the fourth embodiment of the fourth aspect, the storage device employs a distributed file system.
According to a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the target object behavior monitoring method described in the first aspect or any one of the implementation manners of the first aspect.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
fig. 1 is a schematic flow chart of a target object behavior monitoring method in embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a second convolutional neural network in embodiment 1 of the present invention;
FIG. 3 is a two-dimensional representation of each node generated by two layers of GCNs in embodiment 1 of the present invention;
fig. 4 is a schematic flowchart of a target object behavior monitoring method in embodiment 2 of the present invention;
fig. 5 is a schematic structural diagram of a target object behavior monitoring apparatus in embodiment 3 of the present invention;
fig. 6 is a schematic structural diagram of a target object behavior monitoring server in embodiment 4 of the present invention;
fig. 7 is a schematic structural diagram of a target object behavior monitoring system according to embodiment 5 of the present invention;
fig. 8 is a schematic structural diagram of a behavior detection alarm system based on a deep neural network according to embodiment 6 of the present invention;
FIG. 9 is a diagram of an HDFS architecture in accordance with embodiment 6 of the present invention;
FIG. 10 is a schematic diagram showing the structure of an HBase table in embodiment 6 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Embodiment 1 of the present invention provides a target object behavior monitoring method. Fig. 1 is a schematic flow chart of the target object behavior monitoring method in embodiment 1 of the present invention; as shown in fig. 1, the method includes the following steps:
S101: Acquiring an image containing a target object, and extracting the feature parameters in the image.
As a specific implementation, the following technical solution may be adopted to extract the feature parameters in the image: input the image into a preset first convolutional neural network, and extract the feature parameters in the image with the first convolutional neural network. Further, before extracting the feature parameters in the image, the method further includes: training the first convolutional neural network with a target object behavior database. Preferably, the image can be processed with the first 10 layers of a 19-layer convolutional neural network to obtain the feature parameters.
S102: determining the positions of all key points in the image and the connection relation among the key points according to the characteristic parameters, and connecting all the key points together according to the connection relation among the key points to form a posture skeleton of the target object; the key points are joints with freedom degrees on the target object, such as neck, shoulder, elbow, wrist, waist, knee, ankle and the like, and the current posture of the human body is estimated by calculating the relative positions of the key points of the human body in three-dimensional space.
As a specific implementation, the following technical solution may be adopted to determine the positions of the key points in the image and the connection relations between them according to the feature parameters: input the feature parameters into a preset second convolutional neural network to obtain the positions of the key points in the image and the connection relations between them, wherein the second convolutional neural network comprises a plurality of stages in series and each stage comprises a plurality of branches. Fig. 2 is a schematic structural diagram of the second convolutional neural network in embodiment 1 of the present invention; as shown in fig. 2, the second convolutional neural network includes 2 stages in series, and each stage includes 2 branches.
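The serial-stage, two-branch structure can be illustrated numerically as follows. Linear maps stand in for the convolutional branches, and all dimensions, weights and map counts (18 key-point maps, 38 connection-field maps) are assumptions for the sketch, not values stated in the patent:

```python
import numpy as np

# Structural sketch of the second network: a series of stages, each with two
# branches (one for key-point confidence maps, one for connection fields),
# where each later stage refines the previous stage's predictions by taking
# the image features concatenated with both earlier outputs as its input.

rng = np.random.default_rng(0)
F, K, C = 32, 18, 38          # feature channels, key-point maps, connection maps

def make_stage(in_dim):
    """Two branches per stage, modelled here as random linear maps."""
    return (rng.standard_normal((in_dim, K)) * 0.1,   # branch 1: key points
            rng.standard_normal((in_dim, C)) * 0.1)   # branch 2: connections

# Stage 1 sees only features; stage 2 sees features + stage-1 outputs.
stages = [make_stage(F), make_stage(F + K + C)]

features = rng.standard_normal((100, F))              # 100 image locations
S = L = None
for W_s, W_l in stages:
    x = features if S is None else np.hstack([features, S, L])
    S, L = x @ W_s, x @ W_l                           # refine both branches

print(S.shape, L.shape)  # (100, 18) (100, 38)
```

The key structural point is the feedback of both branch outputs into the next stage's input, which is what lets later stages resolve ambiguous detections.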
Further, before extracting the feature parameters in the image, the method further includes: and training the second convolutional neural network by using a target object behavior database.
S103: and recognizing the gesture skeleton of the target object, and determining the behavior of the target object.
As a specific implementation, the following technical solution may be adopted to recognize the posture skeleton of the target object and determine its behavior: input the posture skeleton of the target object into a preset space-time graph convolutional network model for recognition to obtain the behavior of the target object; the space-time graph convolutional network model comprises at least two layers of space-time graph convolutional networks, and adjacent layers are connected by residual links.
The space-time graph convolutional network model GCN is a very powerful neural network architecture for graph data; even a randomly initialized two-layer GCN can generate useful feature representations of the nodes in a graph network. FIG. 3 shows the two-dimensional representation of each node generated by such a two-layer GCN; these representations preserve the relative proximity of the nodes in the graph even without any training.
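This property of untrained two-layer GCNs can be reproduced with a small numpy sketch of the standard GCN propagation rule H' = D^(-1/2) (A + I) D^(-1/2) H W; the graph, dimensions and random weights below are illustrative:

```python
import numpy as np

# Two-layer GCN propagation on a tiny 5-joint skeleton graph with randomly
# initialised weights: each node ends up with a 2-D embedding, illustrating
# that even an untrained GCN produces structured node features.

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]      # illustrative skeleton edges
n = 5
A = np.eye(n)                                  # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))            # symmetric normalisation

H = np.eye(n)                                  # one-hot input features
W1 = rng.standard_normal((n, 8))
W2 = rng.standard_normal((8, 2))
H1 = np.maximum(A_hat @ H @ W1, 0)             # layer 1 + ReLU
H2 = A_hat @ H1 @ W2                           # layer 2: 2-D embedding per node

print(H2.shape)  # (5, 2)
```

Because each layer averages over neighbours, adjacent joints receive similar embeddings, which is the proximity-preserving effect the text describes.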
Preferably, before extracting the feature parameters in the image, the method further includes: training the space-time graph convolutional network model GCN with a target object behavior database.
The target object behavior monitoring method provided in embodiment 1 of the present invention extracts feature parameters from an image containing a target object, obtains the positions of the key points in the image and the connection relations between them according to the feature parameters, forms the posture skeleton of the target object, and obtains the behavior of the target object by identifying the posture skeleton, thereby solving the prior-art problem that monitoring abnormal behavior requires manual analysis and interpretation.
Example 2
Embodiment 2 of the present invention provides a target object behavior monitoring method, fig. 4 is a schematic flow chart of the target object behavior monitoring method in embodiment 2 of the present invention, and as shown in fig. 4, the target object behavior monitoring method in embodiment 2 of the present invention includes the following steps:
S401: Acquiring an image containing a target object, and preprocessing the image.
In embodiment 2 of the present invention, the preprocessing includes one or more of the following: data labeling, bilinear interpolation, image flipping, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment and image standard normalization.
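Several of the listed preprocessing operations can be sketched with numpy as follows; the adjustment constants are illustrative, not values from the patent:

```python
import numpy as np

# Sketch of a few of the preprocessing operations listed above, applied to a
# small synthetic image with pixel values in [0, 255].

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (4, 4, 3))

flipped = img[:, ::-1, :]                                 # image flipping
brighter = np.clip(img + 30, 0, 255)                      # brightness adjustment
contrast = np.clip((img - 127.5) * 1.2 + 127.5, 0, 255)   # contrast adjustment
normalized = (img / 255.0 - 0.5) / 0.5                    # standard normalisation to [-1, 1]

print(normalized.min() >= -1 and normalized.max() <= 1)   # True
```

Hue and saturation adjustment would additionally require a colour-space conversion (e.g. RGB to HSV), which is omitted here for brevity.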
S402: and extracting the characteristic parameters in the image.
S403: determining the positions of all key points in the image and the connection relation among the key points according to the characteristic parameters, and connecting all the key points together according to the connection relation among the key points to form a posture skeleton of the target object; wherein the key point is a joint having a degree of freedom on the target object.
S404: and recognizing the gesture skeleton of the target object, and determining the behavior of the target object.
S405: and judging whether the behavior of the target object is abnormal or not, and sending an alarm signal when the behavior of the target object is abnormal.
In the target object behavior monitoring method provided in embodiment 2 of the present invention, after the behavior of the target object is obtained, whether the behavior of the target object is abnormal is further determined, and when the behavior of the target object is abnormal, an alarm signal is sent out, so that automatic monitoring and automatic alarm are implemented.
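Step S405 can be sketched in Python as follows; the set of abnormal behavior labels and the shape of the alarm signal are illustrative assumptions:

```python
# Sketch of step S405: check the recognised behaviour and emit an alarm
# signal when it is abnormal. Labels and the signal format are illustrative.

ABNORMAL_BEHAVIORS = {"fighting", "falling", "intrusion"}

def check_behavior(behavior):
    """Return an alarm-signal dict when the behaviour is abnormal, else None."""
    if behavior in ABNORMAL_BEHAVIORS:
        return {"alarm": True, "behavior": behavior}
    return None

print(check_behavior("walking"))   # None
print(check_behavior("fighting"))  # {'alarm': True, 'behavior': 'fighting'}
```

In the full system the returned signal would be forwarded to the security terminal together with the corresponding video frames.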
Example 3
Fig. 5 is a schematic structural diagram of the target object behavior monitoring device in embodiment 3 of the present invention, and as shown in fig. 5, the target object behavior monitoring device in embodiment 3 of the present invention includes an extraction module 50, a posture skeleton forming module 52, and a behavior recognition module 54.
The extraction module 50 is configured to acquire an image including a target object, and extract a feature parameter in the image.
A pose skeleton forming module 52, configured to determine positions of the key points in the image and connection relationships between the key points according to the feature parameters, and connect the key points together according to the connection relationships between the key points to form a pose skeleton of the target object; wherein the key point is a joint with a degree of freedom on the target object;
and a behavior recognition module 54, configured to recognize the gesture skeleton of the target object, and determine a behavior of the target object.
Furthermore, the target object behavior monitoring device also comprises an alarm module, wherein the alarm module is used for judging whether the behavior of the target object is abnormal or not, and sending an alarm signal when the behavior of the target object is abnormal.
Example 4
Fig. 6 is a schematic structural diagram of the target object behavior monitoring server in embodiment 4 of the present invention. As shown in fig. 6, the server may include an image collector 60, a processor 61 and a memory 62, where the processor 61 and the memory 62 may be connected by a bus or in another manner, with the bus connection taken as the example in fig. 6.
The processor 61 may be a Central Processing Unit (CPU). The Processor 61 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 62, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the target object behavior monitoring method in the embodiments of the present invention (e.g., the extraction module 50, the posture skeleton forming module 52 and the behavior recognition module 54 shown in fig. 5). The processor 61 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 62, so as to implement the target object behavior monitoring method in the foregoing method embodiments.
The memory 62 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 61, and the like. Further, the memory 62 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, and these remote memories may be connected to the processor 61 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 62 and, when executed by the processor 61, perform a target object behavior monitoring method as in the embodiment shown in fig. 1-4.
The specific details of the target object behavior monitoring server may be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to fig. 4, and are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
Example 5
An embodiment 5 of the present invention provides a target object behavior monitoring system, and fig. 7 is a schematic structural diagram of the target object behavior monitoring system according to the embodiment 5 of the present invention, as shown in fig. 7, the target object behavior monitoring system according to the embodiment 5 of the present invention includes a camera 70, a target object behavior monitoring server 71, and a security terminal 72, where the camera 70 and the security terminal 72 are both connected to the target object behavior monitoring server 71; the camera 70 is configured to capture an image including a target object, and send the image to the target object behavior monitoring server 71; the security terminal 72 is configured to obtain a behavior/alarm signal of the target object from the target object behavior monitoring server 71.
Further, the target object behavior monitoring system further comprises a load balancing device; the load balancing equipment is used for receiving the image information sent by the camera, carrying out load balancing processing on the image information and then sending the image information to the target object behavior monitoring device.
Preferably, the load balancing device comprises a main nginx RTSP load balancer and a secondary nginx RTSP load balancer.
Further, the target object behavior monitoring system further comprises a storage device, and the storage device is used for receiving and storing the image information sent by the camera. Preferably, the storage device adopts a distributed file system.
Example 6
To illustrate the target object behavior monitoring system of the present invention in more detail, a specific example is given. Fig. 8 is a schematic structural diagram of a behavior detection alarm system based on a deep neural network according to embodiment 6 of the present invention, and as shown in fig. 8, the behavior detection alarm system includes a network camera, a load balancing streaming media server, an artificial intelligence server cluster, a security terminal, and a distributed database.
Specifically, the captured image is encoded and compressed by the network camera in conformance with the ONVIF (Open Network Video Interface Forum) standard, and the video stream is pushed to the artificial intelligence server cluster over the RTSP (Real Time Streaming Protocol)/RTMP (Real Time Messaging Protocol) protocols. The artificial intelligence server cluster performs image preprocessing and deep neural network training and learning on the received image data, so that the behavior of the target object can be detected from the video images and sent to the security terminal. When the security terminal judges that the target behavior is abnormal, it triggers an alarm and distributes the alarm information and the corresponding video images to management personnel to assist their decisions; a distributed database and a distributed file system are used to store evidence, providing massive video file storage and query.
RTSP (Real Time Streaming Protocol, RFC 2326) is an application-layer protocol in the TCP/IP protocol suite that defines how one-to-many applications can efficiently transmit multimedia data over an IP network. Architecturally, RTSP sits above RTP and RTCP, which use TCP or UDP to carry out the actual data transfer.
Specifically, the artificial intelligence server cluster uses nginx to achieve load balancing with a hot standby, and provides functions such as RTSP/RTMP video stream distribution and on-demand playback. The steps for achieving highly available load balancing with nginx RTSP and keepalived are as follows:
1. IP configuration: main nginx RTSP load balancer IP1, secondary nginx RTSP load balancer IP2, each configured via keepalived with the virtual IP (VIP) IP0 for external use;
2. install nginx RTSP and keepalived on both the main and standby servers;
3. configure the main and standby nginx RTSP services;
4. configure the keepalived operating parameters on the main and standby servers;
5. verify the high availability and load balancing of keepalived and nginx RTSP.
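As a hedged illustration of steps 1 and 4 above, a keepalived configuration fragment for the main balancer might look as follows; the interface name, router id and priorities are placeholders, and IP0 stands for the shared VIP exposed to the cameras:

```
# keepalived.conf fragment on the main nginx RTSP load balancer (sketch only;
# all values are illustrative placeholders, not from the patent)
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the secondary balancer
    interface eth0          # network interface carrying the VIP
    virtual_router_id 51
    priority 100            # lower value (e.g. 90) on the secondary
    virtual_ipaddress {
        IP0                 # the shared virtual IP exposed externally
    }
}
```

When the main balancer fails, keepalived moves the VIP to the secondary, so clients pushing to IP0 fail over without reconfiguration.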
The distributed database receives and stores, via the load-balancing streaming media server, the image information sent by the cameras. Specifically, the following technical scheme may be adopted for storing the information: large files are transcoded into slices; the slice files are stored in HDFS (Hadoop Distributed File System) while HBase stores their paths, with each slice's MD5 value used as the key and the HBase value being the slice's HDFS path, so that the files hash well across the cluster. The received video stream files are uploaded from local storage to HDFS through the API provided by Hadoop: incoming video files are continuously saved to a designated local folder, this dynamically changing folder serves as a buffer, and the files in the buffer are connected to HDFS in stream mode and uploaded by calling the write method. After a file is uploaded successfully, the delete method is called to remove the uploaded files from the local buffer in batch. This process loops continuously until all files in the buffer have been uploaded to HDFS and the buffer is empty.
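The slicing-and-indexing scheme can be sketched in Python, with plain dicts standing in for HBase and HDFS; the slice size and path layout are illustrative assumptions:

```python
import hashlib

# Sketch of the storage scheme: slice a large file, use each slice's MD5
# digest as the HBase row key, and store the slice's HDFS path as the value.
# A dict stands in for HBase; paths and slice size are illustrative.

def store_slices(data: bytes, slice_size: int, hbase: dict) -> dict:
    for i in range(0, len(data), slice_size):
        chunk = data[i:i + slice_size]
        key = hashlib.md5(chunk).hexdigest()                  # row key: slice MD5
        hbase[key] = f"/hdfs/video/slice_{i // slice_size}"   # value: HDFS path
    return hbase

index = store_slices(b"example video bytes" * 100, 256, {})
print(len(index))  # one entry per slice
```

Because MD5 digests are effectively uniformly distributed, using them as row keys spreads the slices evenly across the HBase cluster, which is the hashing benefit the text mentions.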
Fig. 9 is an architecture diagram of the HDFS in embodiment 6 of the present invention. As shown in fig. 9, the HDFS stores data using a Master/Slave architecture, which mainly comprises four parts: HDFS Client, NameNode, DataNode, and Secondary NameNode. The Client performs file segmentation and interacts with the HDFS; the NameNode is mainly responsible for management and for handling client requests; the DataNode is a Slave that executes the actual operations commanded by the NameNode; the Secondary NameNode assists the NameNode by sharing its workload and can help recover the NameNode in an emergency.
Fig. 10 is a schematic diagram of the HBase table structure in embodiment 6 of the present invention. As in other NoSQL databases, the row key is the primary key for retrieving records, and there are only three ways to access a row in an HBase table: access by a single row key, a range scan over row keys, and a full-table scan. Each column in an HBase table belongs to a column family, column names are prefixed with the column family, and access control as well as disk and memory usage statistics are performed at the column-family level. In HBase, a storage unit, called a cell, is determined by a row and a column; data in a cell is untyped, and all data is stored as byte arrays. Each cell holds multiple versions of the same data, and the versions are indexed by timestamp.
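The cell model just described — untyped byte values addressed by row key and column-family-prefixed column, with timestamp-indexed versions — can be sketched as a toy in-memory table. `MiniHBaseTable` and its version limit are illustrative assumptions for this sketch, not HBase's actual implementation:

```python
import time
from collections import defaultdict


class MiniHBaseTable:
    """Toy model of the HBase data layout described above: a cell is
    addressed by (row key, "family:qualifier") and keeps several
    timestamped versions; values are untyped byte strings."""

    def __init__(self, max_versions: int = 3):
        self.max_versions = max_versions
        # row key -> column -> list of (timestamp, bytes), newest first
        self._cells = defaultdict(lambda: defaultdict(list))

    def put(self, row: str, column: str, value: bytes, ts: int = None) -> None:
        """Write one version of a cell; older versions are retained up to
        max_versions, indexed by timestamp."""
        versions = self._cells[row][column]
        versions.append((ts if ts is not None else time.time_ns(), value))
        versions.sort(key=lambda v: v[0], reverse=True)  # newest first
        del versions[self.max_versions:]                 # drop excess versions

    def get(self, row: str, column: str) -> bytes:
        """Single-row-key access: return the newest version of one cell."""
        return self._cells[row][column][0][1]
```

In the slice-storage scheme of this embodiment, the row key would be a slice's MD5 digest and the cell value its HDFS path stored as bytes.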
In the deep-neural-network-based behavior detection and alarm system of embodiment 6 of the present invention, captured images are encoded and compressed by the front-end network camera and pushed to the artificial intelligence server cluster, so that load balancing and the artificial intelligence server cluster guarantee high concurrency and high availability. Image preprocessing and deep neural network training and learning are then performed on the received image data, thereby detecting the behavior characteristics of the target object from the video images.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (16)

1. A target object behavior monitoring method is characterized by comprising the following steps:
acquiring an image containing a target object, and extracting characteristic parameters in the image;
determining the positions of all key points in the image and the connection relation among the key points according to the characteristic parameters, and connecting all the key points together according to the connection relation among the key points to form a posture skeleton of the target object; wherein the key point is a joint with a degree of freedom on the target object;
and recognizing the posture skeleton of the target object, and determining the behavior of the target object.
2. The method according to claim 1, wherein the extracting the characteristic parameters in the image comprises:
inputting the image into a preset first convolution neural network, and extracting characteristic parameters in the image by using the first convolution neural network.
3. The method for monitoring the behavior of the target object according to claim 1, wherein the determining the positions of the key points in the image and the connection relationship between the key points according to the characteristic parameters comprises:
inputting the characteristic parameters into a preset second convolutional neural network to obtain the positions of all key points in the image and the connection relation among the key points; wherein the second convolutional neural network comprises a plurality of stages in series, and each stage comprises a plurality of branches.
4. The method according to claim 1, wherein the recognizing the posture skeleton of the target object and the determining the behavior of the target object comprise:
inputting the posture skeleton of the target object into a preset space-time graph convolutional network model for recognition to obtain the behavior of the target object; wherein the space-time graph convolutional network model comprises at least two layers of space-time graph convolutional networks, and residual connections are used between two adjacent layers of space-time graph convolutional networks.
5. The method for monitoring the behavior of the target object according to any one of claims 1 to 4, further comprising, after determining the behavior of the target object:
and judging whether the behavior of the target object is abnormal or not, and sending an alarm signal when the behavior of the target object is abnormal.
6. The method for monitoring the behavior of the target object according to any one of claims 1 to 4, further comprising, before extracting the characteristic parameters in the image:
preprocessing the image, wherein the preprocessing comprises one or more of the following: data annotation, bilinear interpolation, image flipping, brightness adjustment, contrast adjustment, hue adjustment, saturation adjustment and image standardization.
7. The method for monitoring the behavior of the target object according to claim 2, 3 or 4, further comprising, before extracting the characteristic parameters in the image:
and training the first convolutional neural network, the second convolutional neural network and the space-time graph convolutional network model by using a target object behavior database.
8. A target object behavior monitoring device, comprising:
the extraction module is used for acquiring an image containing a target object and extracting characteristic parameters in the image;
the posture skeleton forming module is used for determining the positions of all key points in the image and the connection relation among the key points according to the characteristic parameters, and connecting all the key points together according to the connection relation among the key points to form a posture skeleton of the target object; wherein the key point is a joint with a degree of freedom on the target object;
and the behavior recognition module is used for recognizing the posture skeleton of the target object and determining the behavior of the target object.
9. The target object behavior monitoring device of claim 8, further comprising:
and the alarm module is used for judging whether the behavior of the target object is abnormal or not and sending an alarm signal when the behavior of the target object is abnormal.
10. A target object behavior monitoring server, comprising:
the system comprises an image collector, a memory and a processor, wherein the image collector, the memory and the processor are connected in communication with each other, the memory stores computer instructions, and the processor executes the computer instructions so as to execute the target object behavior monitoring method according to any one of claims 1 to 8.
11. A target object behavior monitoring system, comprising: a camera, a security terminal and the target object behavior monitoring server according to claim 10, wherein the camera and the security terminal are connected to the target object behavior monitoring server;
the camera is used for capturing an image containing a target object and sending the image to the target object behavior monitoring server;
and the security terminal is used for acquiring the behavior of the target object and/or the alarm signal from the target object behavior monitoring server.
12. The target object behavior monitoring system of claim 11, further comprising a load balancing device;
the load balancing equipment is used for receiving the image information sent by the camera, carrying out load balancing processing on the image information and then sending the image information to the target object behavior monitoring server.
13. The target object behavior monitoring system of claim 12, wherein the load balancing device comprises a primary nginx RTSP load balancer and a secondary nginx RTSP load balancer.
14. The target object behavior monitoring system of claim 11, further comprising a storage device;
and the storage equipment is used for receiving and storing the image information sent by the camera.
15. The system for monitoring behavior of a target object of claim 14, wherein the storage device employs a distributed file system.
16. A computer-readable storage medium storing computer instructions for causing a computer to perform the target object behavior monitoring method of any one of claims 1-8.
CN202010067607.7A 2020-01-20 2020-01-20 Target object behavior monitoring method, device, equipment, system and storage medium Pending CN111259839A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010067607.7A CN111259839A (en) 2020-01-20 2020-01-20 Target object behavior monitoring method, device, equipment, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010067607.7A CN111259839A (en) 2020-01-20 2020-01-20 Target object behavior monitoring method, device, equipment, system and storage medium

Publications (1)

Publication Number Publication Date
CN111259839A true CN111259839A (en) 2020-06-09

Family

ID=70949144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010067607.7A Pending CN111259839A (en) 2020-01-20 2020-01-20 Target object behavior monitoring method, device, equipment, system and storage medium

Country Status (1)

Country Link
CN (1) CN111259839A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183317A (en) * 2020-09-27 2021-01-05 武汉大学 Live working field violation behavior detection method based on space-time diagram convolutional neural network
CN114202804A (en) * 2022-02-15 2022-03-18 深圳艾灵网络有限公司 Behavior action recognition method and device, processing equipment and storage medium
WO2023273056A1 (en) * 2021-06-30 2023-01-05 深圳市优必选科技股份有限公司 Robot navigation method, robot and computer-readable storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108809985A (*) 2018-06-13 2018-11-13 东营汉威石油技术开发有限公司 A mobile platform system
CN110035295A (*) 2019-03-06 2019-07-19 深圳市麦谷科技有限公司 Distributed video live streaming system
CN110348335A (*) 2019-06-25 2019-10-18 平安科技(深圳)有限公司 Behavior recognition method, apparatus, terminal device and storage medium


Non-Patent Citations (2)

Title
YIN ZHENG et al.: "Fall detection and recognition based on GCN and 2D Pose" *
我是婉君的: "Interpretation: ST-GCN, an action recognition method based on dynamic skeletons (spatial-temporal graph convolutional network model)" *


Similar Documents

Publication Publication Date Title
CN111259839A (en) Target object behavior monitoring method, device, equipment, system and storage medium
CN109241897B (en) Monitoring image processing method and device, gateway equipment and storage medium
US20140192192A1 (en) Systems and methods for managing video data
WO2022228204A1 (en) Federated learning method and apparatus
CN110572617B (en) Environment monitoring processing method and device and storage medium
US11983186B2 (en) Predicting potential incident event data structures based on multi-modal analysis
CN104079885A (en) Nobody-monitored and linkage-tracked network camera shooting method and device
KR101832680B1 (en) Searching for events by attendants
CN107992937B (en) Unstructured data judgment method and device based on deep learning
CN105407316A (en) Implementation method for intelligent camera system, intelligent camera system, and network camera
CN104202387A (en) Metadata recovery method and related device
CN113992893A (en) Park inspection method and device, storage medium and electronic device
CN115103157A (en) Video analysis method and device based on edge cloud cooperation, electronic equipment and medium
CN105553700A (en) Intelligent equipment fault recognition detection and solution providing system
US11979660B2 (en) Camera analyzing images on basis of artificial intelligence, and operating method therefor
CN103986882A (en) Method for image classification, transmission and processing in real-time monitoring system
CN112039936B (en) Data transmission method, first data processing equipment and monitoring system
CN116580362B (en) Transmission operation cross-system fusion data acquisition method and digital asset processing system
CN106412492B (en) Video data processing method and device
JP5072880B2 (en) Metadata extraction server, metadata extraction method and program
TWI503759B (en) Cloud-based smart monitoring system
CN113783862B (en) Method and device for checking data in edge cloud cooperation process
CN111541864B (en) Digital retina software defined camera method and system
CN110166561B (en) Data processing method, device, system, equipment and medium for wearable equipment
CN110727532B (en) Data restoration method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200609