CN110555975A - Drowning prevention monitoring method and system - Google Patents

Drowning prevention monitoring method and system Download PDF

Info

Publication number
CN110555975A
Authority
CN
China
Prior art keywords
monitoring
monitoring target
target
image frame
frame set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910802791.2A
Other languages
Chinese (zh)
Inventor
周玲
李湘文
张冬
崔崴
张乐
张辉雨
杨皓麟
刘香伶
王陈熠
杜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Engineering and Technical College of Chengdu University of Technology
Original Assignee
Engineering and Technical College of Chengdu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Engineering and Technical College of Chengdu University of Technology filed Critical Engineering and Technical College of Chengdu University of Technology
Priority to CN201910802791.2A priority Critical patent/CN110555975A/en
Publication of CN110555975A publication Critical patent/CN110555975A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/08 Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Emergency Management (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a drowning prevention monitoring method, which comprises the following steps: acquiring a video frame; performing gesture recognition on at least one monitoring target in the video frame to obtain detection data of the gesture of each monitoring target; and comparing the detection data with pre-stored data, and sending alarm information if the comparison result does not match. The embodiment of the invention also provides a drowning prevention monitoring system, which comprises: an acquisition module; a first recognition module; and a comparison module. In this way, a real-time video of the swimmers is obtained by filming them, gesture recognition is performed on the swimmers in the captured video to obtain their posture data while swimming, and the obtained posture data are compared with pre-stored correct posture data to judge whether a swimmer is drowning, so that staff no longer need to watch the swimmers continuously.

Description

Drowning prevention monitoring method and system
Technical Field
The invention relates to a monitoring method and a monitoring system, in particular to a drowning prevention monitoring method and a drowning prevention monitoring system.
Background
Swimming is a very popular sport: it builds the body, helps with weight loss, and strengthens resistance and myocardial function. But the problem of drowning is not negligible. At present, in order to rescue a swimmer in time when drowning occurs, a camera is generally installed to monitor whether a swimmer is drowning; however, monitoring with a camera requires the monitoring personnel to observe the swimmers with high concentration over long periods, and also requires that the monitoring personnel can accurately judge whether a swimmer is drowning.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention provide a drowning prevention monitoring method and system, which can determine whether a swimmer is drowning by recognizing the swimmer's gesture captured by a camera and send an alarm, thereby reducing the workload of monitoring personnel.
In order to achieve this purpose, the technical solution of the embodiment of the invention is realized as follows:
The embodiment of the invention provides a drowning prevention monitoring method, which comprises the following steps:
Acquiring a video frame;
Performing gesture recognition on at least one monitoring target in the video frame to obtain detection data of the gesture of each monitoring target;
And comparing the detection data with prestored data, and if the comparison result is not matched, sending alarm information.
In an embodiment of the present invention, the performing gesture recognition on at least one monitoring target in the video frame includes:
Extracting a monitoring image frame set in the video frames;
Determining a monitoring target in the set of monitoring image frames;
Performing line processing on at least one monitoring target in the monitoring image frame set;
And carrying out gesture recognition on the processed monitoring target.
In an embodiment of the present invention, the monitoring image frame set is composed of a plurality of image frames, wherein the determining of the monitoring target in the monitoring image frame set is determining the monitoring target in each image frame in the monitoring image frame set.
In an embodiment of the present invention, the line processing of at least one monitoring target in the monitoring image frame set includes:
Determining joints, heads and limb parts of monitoring targets in the monitoring image frame set;
And processing the joints and the head of the monitoring target into nodes, processing the limbs of the monitoring target into lines, and connecting the nodes and the lines according to the posture of the monitoring target.
In the embodiment of the present invention, the detection data include the swing amplitude of the limbs of the detection target, the angle changes between limbs, the positions of the limbs, the positions of the joints, and the motion trajectories of the joints within a preset time period.
The pre-stored data comprise the swing amplitude of the limbs, the angle changes between the limbs, the positions of the joints, and the movement tracks of the joints of a swimmer in a non-drowning state within a preset time period.
The embodiment of the invention provides a drowning prevention monitoring system, which comprises:
The acquisition module is used for acquiring video frames;
The first recognition module is used for recognizing the gesture of at least one monitoring target in the video frame and acquiring the detection data of the gesture of each monitoring target;
And the comparison module is used for comparing the detection data with prestored data, and if the comparison result is not matched, sending alarm information.
In an embodiment of the present invention, the first recognition module includes:
The extraction module is used for extracting a monitoring image frame set in the video frames;
A determination module for determining a monitoring target in the monitoring image frame set;
The processing module is used for carrying out line processing on at least one monitoring target in the monitoring image frame set;
And the second recognition module is used for carrying out gesture recognition on the processed monitoring target.
The embodiment of the invention provides a drowning prevention monitoring method, which comprises the following steps: acquiring a video frame; performing gesture recognition on at least one monitoring target in the video frame to obtain detection data of the gesture of each monitoring target; and comparing the detection data with pre-stored data, and sending alarm information if the comparison result does not match. The embodiment of the invention also provides a drowning prevention monitoring system, which comprises: an acquisition module for acquiring video frames; a first recognition module for performing gesture recognition on at least one monitoring target in the video frame and acquiring detection data of the gesture of each monitoring target; and a comparison module for comparing the detection data with pre-stored data and sending alarm information if the comparison result does not match. In this way, a real-time video of the swimmers is obtained by filming them, gesture recognition is performed on the swimmers in the captured video to obtain their posture data while swimming, and the obtained posture data are compared with pre-stored correct posture data to judge whether a swimmer is drowning, so that staff no longer need to watch the swimmers continuously.
Drawings
Fig. 1 is a schematic block diagram of a monitoring method for preventing drowning according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of a method for performing gesture recognition on at least one monitored target in the video frame according to a second embodiment of the present invention;
Fig. 3 is a schematic diagram of a monitoring target in a picture frame after being processed into a line according to a second embodiment of the present invention;
Fig. 4 is a schematic diagram of a second embodiment of the present invention;
Fig. 5 is a schematic diagram of a second embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a drowning prevention monitoring system according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings.
Example one
The embodiment of the invention provides a drowning prevention monitoring method, which comprises the following steps as shown in figure 1:
Step S101: acquiring a video frame;
Here, the video frame is a video segment captured by a shooting device while the swimmer is swimming. In actual use, when swimmers swim in a swimming pool or another venue, the shooting device is a camera installed above the pool; the camera may capture a single swimmer in the pool or several swimmers at once.
When obtaining the captured video, the video sent by the shooting device may be acquired in real time; alternatively, video may be received per fixed time period, for example with 5 seconds as the time node: every 5 seconds, one video segment with a duration of 5 seconds is acquired as a video frame.
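As a concrete illustration of this segment-based acquisition, the following is a minimal Python sketch using OpenCV; the function name, the 25 fps fallback, and reading from a local camera are assumptions made for illustration, not part of the patent.

```python
# Minimal sketch of segment-based video acquisition (assumed tooling: OpenCV).
import cv2

SEGMENT_SECONDS = 5  # the 5-second time node described above

def acquire_video_segments(source=0):
    """Yield one list of frames per 5-second video segment."""
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the stream reports 0
    frames_per_segment = int(fps * SEGMENT_SECONDS)
    segment = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        segment.append(frame)
        if len(segment) >= frames_per_segment:
            yield segment  # one "video frame" in the sense used above
            segment = []
    cap.release()
```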
Step S201: performing gesture recognition on at least one monitoring target in the video frame to obtain detection data of the gesture of each monitoring target;
Here, the monitoring targets are swimmers. Performing gesture recognition on at least one monitoring target in the video frame includes performing gesture recognition on one monitoring target in the video frame, on part of the monitoring targets, or on all of the monitoring targets.
The gesture recognition analyzes the swimming action of the swimmer and judges whether it is normal; here, 2D gesture recognition is performed on the monitoring target. The posture of the monitoring target refers to the angles between the swimmer's limbs while swimming. The detection data refer to the variation between the swimmer's limbs, namely the swing amplitude of the target's limbs, the angle changes between limbs, the positions of the limbs, the positions of the joints, and the motion trajectories of the joints.
Specifically, during recognition, the video frame may be divided into a number of image frame pictures in time order; the position and posture of the monitoring target in each image frame picture are then determined, and the detection data describing the amount of posture change of the monitoring target are calculated from the postures of the monitoring target in the image frame pictures corresponding to different times.
Step S301: comparing the detection data with pre-stored data, and if the comparison result does not match, sending alarm information.
After the detection data of the monitoring target are acquired, they are compared with pre-stored data. The pre-stored data refer to the swing amplitude range of the limbs, the angle changes between limbs, the positions of the joints, and the movement tracks of the joints of a swimmer in a non-drowning state within a preset time period; the preset time period can be set as needed.
When the detected data do not match the pre-stored data, i.e., the detected data fall outside the range of the pre-stored data, the swimmer is in a drowning state at that moment, and alarm information is sent.
When the detected data match the pre-stored data, i.e., the detected data fall within the range of the pre-stored data, the swimmer is in a normal swimming state at that moment, and no alarm information needs to be sent.
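The comparison step can be sketched as a range check against the pre-stored data; the dictionary keys and numeric ranges below are illustrative assumptions only, and send_alarm is a hypothetical hook, not part of the patent.

```python
# Hedged sketch of comparing detection data with pre-stored ranges.
PRESTORED = {
    "swing_amplitude": (0.2, 1.2),     # assumed normal range for limb swing
    "limb_angle_change": (5.0, 90.0),  # assumed range, degrees per period
}

def matches_prestored(detection: dict) -> bool:
    """True when every detected value lies inside its pre-stored range."""
    for key, (low, high) in PRESTORED.items():
        value = detection.get(key)
        if value is None or not (low <= value <= high):
            return False
    return True

def send_alarm(detection: dict) -> None:
    # hypothetical alarm hook; a real system would notify lifeguards
    print("ALARM: possible drowning detected:", detection)

def monitor(detection: dict) -> None:
    if not matches_prestored(detection):  # comparison result does not match
        send_alarm(detection)
```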
Example two
Further, in the embodiment of the present invention, as shown in figure 2, performing gesture recognition on at least one monitoring target in the video frame includes:
Step S211: extracting a monitoring image frame set in the video frames;
Here, each image extracted from the video frame is parsed and then combined into a monitoring image frame set. Further, in order to save work, after each image in the video frame is parsed, the images may be screened according to a preset rule. For example, determine the starting frame X of the video segment, the ending frame Y, and the sampling interval Z, where Y > X ≥ 1 and X, Y, Z are natural numbers; one image is then extracted every interval Z from the Xth frame to the Yth frame, and the set of extracted images is determined as the monitoring image frame set.
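A minimal sketch of this screening rule follows, assuming 1-indexed frame numbers as in the text; the function name is illustrative.

```python
# Sample one image every Z frames from the Xth to the Yth frame (inclusive).
def extract_monitoring_frame_set(frames, x, y, z):
    assert y > x >= 1 and z >= 1, "expects natural numbers with Y > X >= 1"
    # frames are 1-indexed in the description, Python lists are 0-indexed
    return frames[x - 1:y:z]

# Usage example: frames 2, 7, 12, ... up to frame 30 of a segment
# subset = extract_monitoring_frame_set(segment, x=2, y=30, z=5)
```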
Step S221: determining a monitoring target in the set of monitoring image frames;
Here, when determining the monitoring target in the monitoring image frame set, image recognition must be performed on the monitoring targets in the set. Generally, for recognizing image objects, bounding boxes may be used to detect all people in a given image. A convolutional neural network can be applied to many different objects in an image using a sliding-window technique that classifies and locates image regions. Because the convolutional neural network classifies each region of the image as an object or background, it must be run at a large number of positions and scales, which requires a large amount of computation; to mitigate this, "blob"-like image regions that are likely to contain objects can first be found in the image, which increases the operation speed. One usable model is the region-based convolutional neural network (R-CNN), whose algorithm works as follows: in R-CNN, the input image is first scanned with a selective search algorithm for possible objects, generating approximately 2000 region proposals; a convolutional neural network is then run on each region proposal; finally, the output of each convolutional neural network is passed to a support vector machine (SVM) for classification, and a linear regression tightens the bounding box of the object.
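The patent names R-CNN; as a runnable stand-in, the sketch below uses torchvision's pre-trained Faster R-CNN, a later region-based successor, to find people in one monitoring image frame. The choice of library and the score threshold are assumptions for illustration, not the patent's own implementation.

```python
# Person detection with a region-based CNN (assumed stand-in: Faster R-CNN).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_people(image, score_threshold=0.8):
    """Return bounding boxes [x1, y1, x2, y2] of detected persons."""
    with torch.no_grad():
        output = model([to_tensor(image)])[0]
    boxes = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label.item() == 1 and score.item() >= score_threshold:  # COCO: 1 = person
            boxes.append([round(v, 1) for v in box.tolist()])
    return boxes
```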
In addition, when the target suffers from motion blur, low resolution, occlusion, and similar problems on some video frames, one possible solution is to use the temporal and context information in the video, for example motion-guided propagation (MGP) and multi-context suppression (MCS) in T-CNN. Even if the detection result of a single frame misses many targets, the detection results of adjacent frames may contain those missed targets; therefore, the detection result of the current frame can be propagated forward and backward using optical-flow information, and the target recall rate can be improved through MGP processing. Using an image detection algorithm that treats video frames as independent images does not take full advantage of the context information of the whole video: targets of any category may appear in a video, but only a few categories appear in a single video clip. After MCS processing, correct categories rank earlier and wrong categories rank later in the detection result, thereby improving the accuracy of target detection.
In order to further fill in the targets missed on video frames, tracking information can be used to correct them. MGP can fill in targets missed on a few video frames, but it is ineffective for targets missed over many consecutive frames; target tracking solves this problem well. Accordingly: an image target detection algorithm is used to obtain a good detection result; the target with the highest detection score is selected as the initial anchor for tracking; the whole video clip is tracked forward and backward from the selected anchor to generate a tracking trajectory; the highest-scoring remaining target is then selected for tracking, where a window that already appears in a previous tracking trajectory is skipped directly and the next target is selected instead; the algorithm is executed iteratively, with a score threshold as the termination condition, as sketched below. The obtained tracking trajectories can be used to improve the target recall rate and can also serve as long-sequence context information to correct the result.
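The selection and termination logic just described might be sketched as below; track_bidirectional (forward/backward tracking from an anchor) is a hypothetical hook returning a mapping from frame index to box, and the IoU threshold is an assumed parameter.

```python
# Sketch of iterative anchor selection for tracking (assumptions noted above).
def build_tracks(detections, track_bidirectional, score_threshold=0.5, iou_max=0.5):
    """detections: iterable of (frame_idx, box, score) tuples."""
    tracks = []
    for frame_idx, box, score in sorted(detections, key=lambda d: d[2], reverse=True):
        if score < score_threshold:
            break  # the score threshold is the termination condition
        # skip windows that already appear in a previous tracking trajectory
        if any(frame_idx in t and iou(box, t[frame_idx]) > iou_max for t in tracks):
            continue
        tracks.append(track_bidirectional(frame_idx, box))  # forward + backward
    return tracks

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```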
In view of its advantages in image classification and object detection, the CNN has become the dominant deep model for computer vision and visual tracking. In general, a large-scale convolutional neural network can be trained both as a classifier and as a tracker. Representative convolutional-neural-network-based tracking algorithms include the fully convolutional network tracker (FCNT) and the multi-domain convolutional neural network (MDNet).
In this way, the monitoring target on each image frame in the monitoring image frame set can be identified, i.e. the detection target on each image frame can be found.
Step S231: performing line processing on at least one monitoring target in the monitoring image frame set;
Here, the line processing simplifies the monitoring target into limb lines. As shown in figure 3, it specifically includes: determining the joints, head, and limb parts of the monitoring targets in the monitoring image frame set; processing the joints and head of the monitoring target into nodes and the limbs into lines; and connecting the nodes and lines according to the posture of the monitoring target.
Specifically, as shown in the drawing, each detection target includes 15 joints and 15 line components.
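An illustrative sketch of this node-and-line rendering follows; the 15-keypoint indexing and the edge list are assumptions consistent with common 15-joint pose formats (e.g., MPI), not taken from the patent's figure.

```python
# Draw a stick figure: joints/head become nodes, limbs become lines.
import cv2

# Assumed 15-joint layout (MPI-style): 0 head, 1 neck, 2-4 right arm,
# 5-7 left arm, 8 chest/hip, 9-11 right leg, 12-14 left leg.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
         (1, 8), (8, 9), (9, 10), (10, 11), (8, 12), (12, 13), (13, 14)]

def draw_stick_figure(image, keypoints):
    """keypoints: list of 15 (x, y) integer tuples, or None when missing."""
    for a, b in EDGES:                       # limbs -> lines
        if keypoints[a] is not None and keypoints[b] is not None:
            cv2.line(image, keypoints[a], keypoints[b], (0, 255, 0), 2)
    for point in keypoints:                  # joints and head -> nodes
        if point is not None:
            cv2.circle(image, point, 4, (0, 0, 255), -1)
    return image
```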
Step S241: performing gesture recognition on the processed monitoring target.
Here, the gesture recognition of the monitoring target obtains the angles between the limbs and their positions for the monitoring target on each frame of image.
Specifically, the OpenPose framework may be employed, a framework for estimating the body, face, and hand poses of multiple persons in real time. It is an open-source library written in C++ on top of OpenCV and Caffe, and performs multi-threaded, real-time multi-person keypoint detection.
OpenPose provides both 2D and 3D multi-person keypoint detection, along with a calibration toolkit for estimating specific area parameters. OpenPose can track not only human facial expressions, torso, and limbs, but also individual fingers. One concrete way the underlying neural network model was built is as follows: on a two-layered dome structure, 500 cameras were deployed, capturing body poses from various angles, and the image data were used to reconstruct a data set tracing the 3D motion trajectories of specific points. The images captured by the dome cameras are 2D; after acquisition, the system passes them through keypoint detectors to identify and mark specific body parts, helping the body-tracking algorithm understand how each pose appears from different angles, so that the output finally appears in 3D.
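A minimal sketch of driving an OpenPose-style Caffe model through OpenCV's DNN module is shown below; the model file paths are placeholders, the 368x368 input size follows common OpenPose usage, and treating the first num_joints output channels as joint confidence maps is an assumption that holds for the MPI 15-joint model.

```python
# Keypoint estimation with an OpenPose-style model via OpenCV DNN (sketch).
import cv2

net = cv2.dnn.readNetFromCaffe("pose_deploy.prototxt",   # placeholder path
                               "pose_iter.caffemodel")   # placeholder path

def estimate_keypoints(image, num_joints=15, threshold=0.1):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1.0 / 255, (368, 368),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    out = net.forward()  # confidence maps S (plus PAF channels, model-dependent)
    points = []
    for j in range(num_joints):
        heatmap = out[0, j]
        _, conf, _, (px, py) = cv2.minMaxLoc(heatmap)
        # map heatmap coordinates back to the original image size
        x, y = int(w * px / heatmap.shape[1]), int(h * py / heatmap.shape[0])
        points.append((x, y) if conf > threshold else None)
    return points
```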
More specifically, the gesture-recognition principle of the OpenPose framework is as follows. First, a color image of size $w \times h$ is taken as input. A feature map $F$ is obtained through the first 10 layers of VGG. The network then splits into two iterated branches: one branch predicts the confidence maps $S$ of the keypoints (human joints), and the other branch predicts $L$, the part affinity fields describing the orientation of the pixels within the skeleton (limbs). The first stage takes the feature map $F$ as input and outputs the set $S^1, L^1$; each subsequent stage $t$ takes the outputs $S^{t-1}, L^{t-1}$ of the previous stage together with the feature map $F$ as input. The network finally outputs $S$ and $L$. The loss function computes the $L_2$ norm between the predictions $(S, L)$ and the ground truth $(S^*, L^*)$ at each stage:

$$f_S^t = \sum_{j} \sum_{p} W(p)\, \big\| S_j^t(p) - S_j^*(p) \big\|_2^2, \qquad f_L^t = \sum_{c} \sum_{p} W(p)\, \big\| L_c^t(p) - L_c^*(p) \big\|_2^2,$$

where the binary mask $W(p) = 0$ when the annotation is missing at pixel $p$, so that a keypoint whose annotation is missing is not counted. The ground truths of $S$ and $L$ are computed from the annotated 2D points.
Here, the problem of supervised machine learning (i.e., the neural network model above) is to minimize the error while regularizing the parameters. Too many parameters increase model complexity, so the model overfits easily and the training error becomes small; but to make the test error small as well, i.e., to predict new samples accurately, the model must also be kept simple. Supervised learning can therefore be viewed as minimizing the following objective function:

$$w^* = \arg\min_{w} \sum_{i} L\big(y_i, f(x_i; w)\big) + \lambda\, \Omega(w).$$
The first term $L(y_i, f(x_i; w))$ measures the error between the model's prediction $f(x_i; w)$ (classification or regression) on the $i$-th sample and the true label $y_i$. This term should be minimal, i.e., the model should fit the training data as closely as possible. To keep the training error small while avoiding over-complexity, the second term is added: the regularization function $\Omega(w)$ constrains the parameters $w$ so that the model stays as simple as possible. For the loss function $L$: squared loss gives least squares; hinge loss gives the SVM; log loss gives logistic regression. Different loss functions have different fitting characteristics, to be analyzed per problem. There are many choices for the regularization function $\Omega(w)$; it is generally a monotonically increasing function of model complexity, so the more complex the model, the larger the regularization value. For example, the regularization term may be a norm of the model parameter vector. Different choices place different constraints on $w$ and yield different effects: the zero norm, one norm, two norm, trace norm, Frobenius norm, nuclear norm, and so on. Here, the $L_0$ norm counts the elements of the vector that are not 0; if the $L_0$ norm is used to regularize a parameter matrix $w$, most elements of $w$ become 0. Sparsity is usually achieved through the $L_1$ norm, the sum of the absolute values of the elements; both the $L_1$ and $L_0$ norms can make parameters sparse. The $L_2$ norm is the square root of the sum of squares of the elements. Minimizing the $L_2$ regularization term $\|w\|_2$ makes each element of $w$ small and close to 0; unlike the $L_1$ norm, it does not drive elements exactly to 0, only close to it. Smaller parameters indicate a simpler model, and a simpler model is less prone to overfitting. Thus, through the $L_2$ norm, the model space can be limited, avoiding overfitting to a certain extent.
The loss function is the sum of the per-stage loss functions of the iterated network:

$$f = \sum_{t=1}^{T} \big( f_S^t + f_L^t \big).$$
Further, the detection method for the joints (nodes) is as follows: the ground truth of $S$ ($S^*$) is computed from the 2D points $x_{j,k}$ annotated in the image, where $x_{j,k}$ denotes the $j$-th joint of the $k$-th person in the picture. The calculation is

$$S_{j,k}^*(p) = \exp\left( - \frac{\| p - x_{j,k} \|_2^2}{\sigma^2} \right),$$

which follows a normal distribution: when the pixel $p$ approaches the annotated point $x_{j,k}$, the peak of the normal curve is reached. The confidence map of the $j$-th joint in each image is the peak of the normal distributions over the $k$ persons in the image:

$$S_j^*(p) = \max_k S_{j,k}^*(p).$$
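The Gaussian peak construction above can be sketched directly in NumPy; sigma is a free parameter controlling the spread of the peak.

```python
# Ground-truth confidence map S*_j: max over per-person Gaussian peaks.
import numpy as np

def confidence_map(shape, annotated_points, sigma=7.0):
    """shape: (H, W); annotated_points: one (x, y) per person k for joint j."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    s_star = np.zeros((h, w), dtype=np.float32)
    for (x_jk, y_jk) in annotated_points:
        g = np.exp(-((xs - x_jk) ** 2 + (ys - y_jk) ** 2) / sigma ** 2)
        s_star = np.maximum(s_star, g)  # peak (max) over the k persons
    return s_star
```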
Further, the method for connecting the limbs (lines) between the joints is as follows: the ground truth of $L$ is computed from the unit vector at any pixel $p$ between the two keypoints $x_{j_1,k}$ and $x_{j_2,k}$ of the $k$-th person, where $k$ denotes the $k$-th person, $j_1$ and $j_2$ denote two joints that can be connected (e.g., the elbow and wrist are connected straight by the arm), and $c$ denotes the $c$-th limb. The calculation: compute the unit vector pointing from keypoint $x_{j_1,k}$ of the $k$-th person toward $x_{j_2,k}$ (fixed in magnitude and direction),

$$v = \frac{x_{j_2,k} - x_{j_1,k}}{\| x_{j_2,k} - x_{j_1,k} \|_2},$$

and set

$$L_{c,k}^*(p) = \begin{cases} v & \text{if } p \text{ lies on limb } c \text{ of person } k, \\ 0 & \text{otherwise.} \end{cases}$$
Whether or not the pixel $p$ falls on the limb requires two conditions to be satisfied:

$$0 \le v \cdot (p - x_{j_1,k}) \le l_{c,k} \qquad \text{and} \qquad \big| v_{\perp} \cdot (p - x_{j_1,k}) \big| \le \sigma_l,$$

where $l_{c,k}$ is the length of the limb, $\sigma_l$ is the limb width, and $v_{\perp}$ is the vector perpendicular to $v$.
The ground-truth PAF of the $c$-th limb in each image is the vector average over the $k$ individuals at location $p$:

$$L_c^*(p) = \frac{1}{n_c(p)} \sum_k L_{c,k}^*(p),$$

where $n_c(p)$ is the number of non-zero vectors at $p$.
At the same time, the correlation between two keypoints is evaluated: with the keypoints $d_{j_1}$, $d_{j_2}$ and the PAF (part affinity fields) known, the integral along the line connecting the two keypoints of the dot product between the connection's unit vector and the PAF vector at each pixel is computed as the correlation between the two keypoints:

$$E = \int_{u=0}^{1} L_c\big(p(u)\big) \cdot \frac{d_{j_2} - d_{j_1}}{\| d_{j_2} - d_{j_1} \|_2} \, du.$$
The pixel $p$ is sampled by interpolating between the positions of the two keypoints:

$$p(u) = (1 - u)\, d_{j_1} + u\, d_{j_2}.$$
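The association integral $E$ can be approximated by sampling $p(u)$ at a few points, as in this NumPy sketch; the sample count and the [2, H, W] PAF layout are assumptions.

```python
# Approximate E: dot product of the PAF with the unit connection vector,
# sampled along p(u) = (1 - u) d_j1 + u d_j2.
import numpy as np

def association_score(paf, d_j1, d_j2, num_samples=10):
    """paf: array of shape [2, H, W]; d_j1, d_j2: (x, y) keypoint candidates."""
    d_j1, d_j2 = np.asarray(d_j1, float), np.asarray(d_j2, float)
    direction = d_j2 - d_j1
    norm = np.linalg.norm(direction)
    if norm == 0:
        return 0.0
    unit = direction / norm
    score = 0.0
    for u in np.linspace(0.0, 1.0, num_samples):
        p = (1.0 - u) * d_j1 + u * d_j2
        x = min(max(int(round(float(p[0]))), 0), paf.shape[2] - 1)
        y = min(max(int(round(float(p[1]))), 0), paf.shape[1] - 1)
        score += paf[0, y, x] * unit[0] + paf[1, y, x] * unit[1]
    return score / num_samples
```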
In addition, for multi-person detection, when multiple elbows and wrists exist in the picture while computing the pose skeleton, the wrist and elbow of each person must be determined and connected. That is, with $m$ wrists (nodes) and $n$ elbows in one image, the elbow candidates are $D_{j_1} = \{ d_{j_1}^1, d_{j_1}^2, \dots, d_{j_1}^n \}$, the wrist candidates are $D_{j_2} = \{ d_{j_2}^1, d_{j_2}^2, \dots, d_{j_2}^m \}$, and the set of arms (connected wrist-elbow pairs) is $Z_c$.
If the correlation PAF between the keypoints is known, then by taking the keypoints as the vertices of a graph and the PAF correlation between them as the edge weights of the graph, the multi-person detection problem is converted into a bipartite graph matching problem, and the optimal matching of the connected keypoints is obtained with the Hungarian algorithm.
The algorithm for solving the maximum matching is sketched below:
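As a runnable stand-in for the Hungarian-algorithm step, this sketch uses SciPy's linear_sum_assignment (Kuhn-Munkres); the tooling choice is an assumption, and the scores are the association integrals E computed above.

```python
# Optimal elbow-wrist matching via the Hungarian algorithm (SciPy sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_limbs(score_matrix):
    """score_matrix[i][j]: association score E between elbow i and wrist j.
    Returns (elbow_index, wrist_index) pairs maximizing the total score."""
    scores = np.asarray(score_matrix, dtype=float)
    rows, cols = linear_sum_assignment(-scores)  # maximize score = minimize cost
    # keep only pairs with positive association (plausible connections)
    return [(i, j) for i, j in zip(rows, cols) if scores[i, j] > 0]
```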
Example three
Further, an embodiment of the present invention provides a drowning prevention monitoring system; as shown in figure 6, the system includes:
the acquisition module 1 is used for acquiring video frames;
The first recognition module 2 is used for performing gesture recognition on at least one monitoring target in the video frame and acquiring detection data of the gesture of each monitoring target;
And the comparison module 3 is used for comparing the detection data with prestored data, and if the comparison result is not matched, sending alarm information.
Still further, the first recognition module includes:
The extraction module is used for extracting a monitoring image frame set in the video frames;
A determination module for determining a monitoring target in the monitoring image frame set;
The processing module is used for carrying out line processing on at least one monitoring target in the monitoring image frame set;
And the second recognition module is used for carrying out gesture recognition on the processed monitoring target.
The above is only a preferred embodiment of the present invention. It should be noted that the preferred embodiment should not be considered as limiting the present invention, whose protection scope is subject to the scope defined by the claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and these modifications and adaptations should also be considered within the protection scope of the invention.

Claims (8)

1. A method of monitoring for drowning prevention, the method comprising:
Acquiring a video frame;
Performing gesture recognition on at least one monitoring target in the video frame to obtain detection data of the gesture of each monitoring target;
And comparing the detection data with prestored data, and if the comparison result is not matched, sending alarm information.
2. The drowning prevention monitoring method according to claim 1, wherein the gesture recognition of at least one monitoring target in the video frame comprises:
Extracting a monitoring image frame set in the video frames;
Determining a monitoring target in the set of monitoring image frames;
Performing line processing on at least one monitoring target in the monitoring image frame set;
And carrying out gesture recognition on the processed monitoring target.
3. The drowning prevention monitoring method according to claim 2, wherein the monitoring image frame set is composed of a plurality of image frames, and the determining of the monitoring target in the monitoring image frame set is determining the monitoring target in each image frame in the monitoring image frame set.
4. The drowning prevention monitoring method according to claim 2, wherein the line processing of at least one monitoring target in the monitoring image frame set comprises:
Determining joints, heads and limb parts of monitoring targets in the monitoring image frame set;
And processing the joints and the head of the monitoring target into nodes, processing the limbs of the monitoring target into lines, and connecting the nodes and the lines according to the posture of the monitoring target.
5. The drowning prevention monitoring method according to claim 1, wherein the detection data include a swing amplitude of the detection target's limbs, angle changes between limbs, positions of the limbs, positions of the joints, and movement loci of the joints within a preset time period.
6. The drowning prevention monitoring method according to claim 1, wherein the pre-stored data comprise the swing amplitude of the limbs, the angle changes between the limbs, the positions of the joints and the movement tracks of the joints of a swimmer in a non-drowning state within a preset time period.
7. A drowning prevention monitoring system, the system comprising:
The acquisition module (1) is used for acquiring a video frame;
The first recognition module (2) is used for recognizing the gesture of at least one monitoring target in the video frame and acquiring the detection data of the gesture of each monitoring target;
And the comparison module (3) is used for comparing the detection data with pre-stored data, and if the comparison result is not matched, sending alarm information.
8. The drowning prevention monitoring system of claim 7, wherein the first recognition module comprises:
The extraction module is used for extracting a monitoring image frame set in the video frames;
A determination module for determining a monitoring target in the monitoring image frame set;
The processing module is used for carrying out line processing on at least one monitoring target in the monitoring image frame set;
And the second recognition module is used for carrying out gesture recognition on the processed monitoring target.
CN201910802791.2A 2019-08-28 2019-08-28 Drowning prevention monitoring method and system Pending CN110555975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910802791.2A CN110555975A (en) 2019-08-28 2019-08-28 Drowning prevention monitoring method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910802791.2A CN110555975A (en) 2019-08-28 2019-08-28 Drowning prevention monitoring method and system

Publications (1)

Publication Number Publication Date
CN110555975A true CN110555975A (en) 2019-12-10

Family

ID=68737209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910802791.2A Pending CN110555975A (en) 2019-08-28 2019-08-28 Drowning prevention monitoring method and system

Country Status (1)

Country Link
CN (1) CN110555975A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223273A (en) * 2020-01-21 2020-06-02 张婧怡 Method, device and system for discriminating and positioning person falling into water in swimming pool
CN111353456A (en) * 2020-03-06 2020-06-30 北京工业大学 Infant drowning judgment method based on video monitoring
CN111476277A (en) * 2020-03-20 2020-07-31 广东光速智能设备有限公司 Alarm method and system based on image recognition
CN112165600A (en) * 2020-08-26 2021-01-01 苏宁云计算有限公司 Drowning identification method and device, camera and computer system
CN112297028A (en) * 2020-11-05 2021-02-02 中国人民解放军海军工程大学 Overwater U-shaped intelligent lifesaving robot control system and method
CN114564699A (en) * 2022-04-28 2022-05-31 成都博瑞科传科技有限公司 Continuous online monitoring method and system for total phosphorus and total nitrogen
TWI790715B (en) * 2021-08-18 2023-01-21 國立勤益科技大學 Intelligent image recognition drowning warning system
CN115877899A (en) * 2023-02-08 2023-03-31 北京康桥诚品科技有限公司 Method and device for controlling liquid in floating cabin, floating cabin and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318716A (en) * 2014-10-21 2015-01-28 中国科学院深圳先进技术研究院 Drowning detection pre-warning system using attitude calculation
CN107566797A (en) * 2017-09-07 2018-01-09 青岛博晶微电子科技有限公司 A kind of drowned monitor and detection device of swimming pool
CN108663686A (en) * 2018-04-17 2018-10-16 中国计量大学 A kind of swimming pool drowning monitoring device and method based on laser radar
US20190122040A1 (en) * 2016-04-29 2019-04-25 Marss Ventures S.A. Method of verifying a triggered alert and alert verification processing apparatus
CN109902669A (en) * 2019-04-19 2019-06-18 田鸣鸣 Artificial intelligence based on image recognition anti-drowned early warning system, device and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318716A (en) * 2014-10-21 2015-01-28 中国科学院深圳先进技术研究院 Drowning detection pre-warning system using attitude calculation
US20190122040A1 (en) * 2016-04-29 2019-04-25 Marss Ventures S.A. Method of verifying a triggered alert and alert verification processing apparatus
CN107566797A (en) * 2017-09-07 2018-01-09 青岛博晶微电子科技有限公司 A kind of drowned monitor and detection device of swimming pool
CN108663686A (en) * 2018-04-17 2018-10-16 中国计量大学 A kind of swimming pool drowning monitoring device and method based on laser radar
CN109902669A (en) * 2019-04-19 2019-06-18 田鸣鸣 Artificial intelligence based on image recognition anti-drowned early warning system, device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘桥: "Research and simulation of a posture recognition and correction method for swimmers" (游泳运动员姿势识别校正方法研究与仿真) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223273A (en) * 2020-01-21 2020-06-02 张婧怡 Method, device and system for discriminating and positioning person falling into water in swimming pool
CN111353456A (en) * 2020-03-06 2020-06-30 北京工业大学 Infant drowning judgment method based on video monitoring
CN111476277A (en) * 2020-03-20 2020-07-31 广东光速智能设备有限公司 Alarm method and system based on image recognition
CN112165600A (en) * 2020-08-26 2021-01-01 苏宁云计算有限公司 Drowning identification method and device, camera and computer system
CN112165600B (en) * 2020-08-26 2023-02-03 深圳市云网万店科技有限公司 Drowning identification method and device, camera and computer system
CN112297028A (en) * 2020-11-05 2021-02-02 中国人民解放军海军工程大学 Overwater U-shaped intelligent lifesaving robot control system and method
TWI790715B (en) * 2021-08-18 2023-01-21 國立勤益科技大學 Intelligent image recognition drowning warning system
CN114564699A (en) * 2022-04-28 2022-05-31 成都博瑞科传科技有限公司 Continuous online monitoring method and system for total phosphorus and total nitrogen
CN114564699B (en) * 2022-04-28 2022-08-12 成都博瑞科传科技有限公司 Continuous online monitoring method and system for total phosphorus and total nitrogen
CN115877899A (en) * 2023-02-08 2023-03-31 北京康桥诚品科技有限公司 Method and device for controlling liquid in floating cabin, floating cabin and medium

Similar Documents

Publication Publication Date Title
CN110555975A (en) Drowning prevention monitoring method and system
CN110998594B (en) Method and system for detecting motion
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
JP2018538631A (en) Method and system for detecting an action of an object in a scene
WO2018162929A1 (en) Image analysis using neural networks for pose and action identification
CN109584213B (en) Multi-target number selection tracking method
CN109685037B (en) Real-time action recognition method and device and electronic equipment
Singh et al. Action recognition in cluttered dynamic scenes using pose-specific part models
CN109815816B (en) Deep learning-based examinee examination room abnormal behavior analysis method
Hasan et al. Robust pose-based human fall detection using recurrent neural network
CN114611600A (en) Self-supervision technology-based three-dimensional attitude estimation method for skiers
CN114694075B (en) Dangerous behavior identification method based on deep reinforcement learning
Afsar et al. Automatic human action recognition from video using hidden markov model
CN106778576B (en) Motion recognition method based on SEHM characteristic diagram sequence
CN117218709A (en) Household old man real-time state monitoring method based on time deformable attention mechanism
CN117011946B (en) Unmanned rescue method based on human behavior recognition
CN114140721A (en) Archery posture evaluation method and device, edge calculation server and storage medium
JP7488674B2 (en) OBJECT RECOGNITION DEVICE, OBJECT RECOGNITION METHOD, AND OBJECT RECOGNITION PROGRAM
CN112560618A (en) Behavior classification method based on skeleton and video feature fusion
CN114639168B (en) Method and system for recognizing running gesture
CN113408435B (en) Security monitoring method, device, equipment and storage medium
Rostami et al. Skeleton-based action recognition using spatio-temporal features with convolutional neural networks
CN112102358A (en) Non-invasive animal behavior characteristic observation method
CN112926388A (en) Campus violent behavior video detection method based on action recognition
JP2022019339A (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191210