CN108446669A - motion recognition method, device and storage medium - Google Patents

Motion recognition method, device and storage medium

Info

Publication number
CN108446669A
CN108446669A (application CN201810315741.7A)
Authority
CN
China
Prior art keywords
target
video frame
video
movement
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810315741.7A
Other languages
Chinese (zh)
Other versions
CN108446669B (en)
Inventor
何长伟
汪铖杰
李季檩
甘振业
王亚彪
赵艳丹
葛彦昊
倪辉
熊意超
吴永坚
黄飞跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810315741.7A priority Critical patent/CN108446669B/en
Publication of CN108446669A publication Critical patent/CN108446669A/en
Application granted granted Critical
Publication of CN108446669B publication Critical patent/CN108446669B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Social Psychology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a motion recognition method, a motion recognition device, and a storage medium, belonging to the field of motion recognition. The method includes: obtaining multiple target video frames from a video to be recognized, where each target video frame contains an image of a target person; obtaining a recognition probability for each target video frame, the recognition probability being the probability that the target person is in a preset posture in that target video frame, and the preset posture being the body posture a person assumes when performing a target motion; screening the multiple target video frames according to their recognition probabilities to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold; and performing motion recognition on each candidate video frame sequence and determining, based on the recognition result, whether the target person has performed the target motion. The motion recognition method provided by the embodiments of this application can improve the practical value of motion recognition.

Description

Motion recognition method, device and storage medium
Technical field
This application relates to the field of motion recognition, and in particular to a motion recognition method, device and storage medium.
Background
Motion recognition is a technique for identifying the type of motion a person performs based on the person's image in a video. For example, motion recognition can be used to determine whether a person captured in a video has fallen.
In the related art, motion recognition is performed using a neural network model, where the neural network model is trained on a large number of positive and negative samples. Taking fall recognition as an example, a positive sample may be a person video frame sequence that reflects the process of a person falling, and a negative sample may be a person video frame sequence that does not reflect such a process. When performing motion recognition with the neural network model, in order to prevent the background image in a video frame from interfering with the model's recognition, a technician needs to calibrate a recognition region in the video that contains as little background image as possible; for example, the recognition region may be a mall entrance area or a crossroad area. A terminal can then obtain, from each video frame, the image located within this recognition region and input it into the neural network model, so that motion recognition is performed by the neural network model.
Performing motion recognition with a neural network model is subject to many restrictions. Taking fall recognition as an example again, a large number of person video frame sequences reflecting the falling process must be collected as positive samples when training the neural network model, and the technician must also calibrate a recognition region in the video. As a result, the practical value of performing motion recognition with a neural network model is relatively low.
Summary
The embodiments of the present application provide a motion recognition method, device and storage medium, which can solve the problem that the practical value of motion recognition is relatively low. The technical solutions are as follows:
In one aspect, a motion recognition method is provided. The method includes:
obtaining multiple target video frames from a video to be recognized, where each target video frame includes an image of a target person;
obtaining a recognition probability for each target video frame, where the recognition probability is the probability that the target person is in a preset posture in the target video frame, and the preset posture is the body posture a person assumes when performing a target motion;
screening the multiple target video frames according to their recognition probabilities to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold; and
performing motion recognition on each candidate video frame sequence, and determining, based on the recognition result, whether the target person has performed the target motion.
In another aspect, a motion recognition method is provided. The method includes:
obtaining a surveillance video;
obtaining multiple target video frames from the surveillance video, where each target video frame includes an image of a target person;
obtaining a recognition probability for each target video frame, where the recognition probability is the probability that the target person is in a preset posture in the target video frame, and the preset posture is the body posture a person assumes when performing a target motion;
screening the multiple target video frames according to their recognition probabilities to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold;
performing motion recognition on each candidate video frame sequence, and determining, based on the recognition result, whether the target person has performed the target motion; and
when it is determined that the target person has performed the target motion, sending alarm information to a preset terminal.
In another aspect, a motion recognition device is provided. The device includes:
a frame obtaining module, configured to obtain multiple target video frames from a video to be recognized, where each target video frame includes an image of a target person;
a probability obtaining module, configured to obtain a recognition probability for each target video frame, where the recognition probability is the probability that the target person is in a preset posture in the target video frame, and the preset posture is the body posture a person assumes when performing a target motion;
a screening module, configured to screen the multiple target video frames according to their recognition probabilities to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold; and
a motion determining module, configured to perform motion recognition on each candidate video frame sequence and determine, based on the recognition result, whether the target person has performed the target motion.
In another aspect, a motion recognition device is provided. The device includes a processor and a memory, where at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the motion recognition method provided by the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, where at least one instruction is stored in the computer-readable storage medium, and the instruction is loaded and executed by a processor to implement the motion recognition method provided by the embodiments of the present application.
The technical solutions provided by the embodiments of the present application bring the following beneficial effects:
Multiple target video frames containing the image of a target person are obtained from the video to be recognized, a recognition probability is obtained for each target video frame, candidate video frame sequences are then screened out using these recognition probabilities, and motion recognition is performed on each screened candidate video frame sequence to determine, based on the recognition result, whether the target person has performed the target motion. Because motion recognition does not have to be performed with a neural network model, the many restrictions of neural-network-based motion recognition can be avoided, and the practical value of motion recognition can be improved.
Description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1A is a schematic diagram of an implementation environment according to an embodiment of the present application.
Fig. 1B is a schematic diagram of an implementation environment according to an embodiment of the present application.
Fig. 1C is a schematic diagram of an implementation environment according to an embodiment of the present application.
Fig. 2 is a flowchart of a motion recognition method according to an embodiment of the present application.
Fig. 3A is a flowchart of a motion recognition method according to an embodiment of the present application.
Fig. 3B is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3C is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3D is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3E is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3F is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3G is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3H is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3I is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3J is a schematic diagram of a video frame according to an embodiment of the present application.
Fig. 3K is a flowchart of a motion recognition method according to an embodiment of the present application.
Fig. 4 is a block diagram of a motion recognition device according to an embodiment of the present application.
Fig. 5 is a block diagram of a motion recognition device according to an embodiment of the present application.
Fig. 6 is a block diagram of a server according to an embodiment of the present application.
Fig. 7 is a block diagram of a camera according to an embodiment of the present application.
Detailed description
To make the objectives, technical solutions and advantages of the present application clearer, the implementations of the present application are described in further detail below with reference to the accompanying drawings.
At present, motion recognition technology is increasingly common in daily life. For example, motion recognition technology can be used to determine whether a person captured in a video has fallen; when a fall is recognized, security personnel can be alerted so that first aid can be given in time. As another example, motion recognition technology can be used to determine whether people captured in a video are fleeing; when fleeing is recognized, it can be determined that a public safety incident is currently occurring, and the police can be notified for timely handling. Motion recognition technology is usually applied in the field of security surveillance; that is, it is usually used to recognize the motion type of a person captured in a surveillance video.
The related art performs motion recognition using a neural network model (such as a three-dimensional convolutional neural network model). Because the neural network model is usually trained on a large number of person video frame sequence samples, it can usually only perform motion recognition based on person video frame sequences. Therefore, during motion recognition, the background image in a video frame must be prevented from interfering with the recognition and affecting the recognition accuracy. To this end, a technician needs to calibrate a recognition region in the video that contains as little background image as possible, and motion recognition is performed based on the image in that recognition region. Performing motion recognition with a neural network model is thus subject to many restrictions, which makes its practical value relatively low.
The embodiments of the present application provide a motion recognition method that can improve the practicality of motion recognition. In the motion recognition method provided by the embodiments of the present application, multiple target video frames containing the image of a target person can be obtained from a video to be recognized, and a recognition probability is obtained for each target video frame. Candidate video frame sequences are then screened out using the recognition probabilities of the target video frames, and motion recognition is performed on each screened candidate video frame sequence to determine, based on the recognition result, whether the target person has performed the target motion. Because the motion recognition method provided by the embodiments of the present application does not have to use a neural network model for motion recognition, the many restrictions of neural-network-based motion recognition can be avoided, and the practicality of motion recognition can be improved.
The implementation environments involved in the motion recognition method provided by the embodiments of the present application are described below.
Fig. 1A is a schematic diagram of an implementation environment involved in the motion recognition method provided by the embodiments of the present application. As shown in Fig. 1A, the implementation environment may include a camera 10. The camera 10 may include a camera assembly and a processing component; optionally, the processing component may be a processing chip. The camera assembly can capture a surveillance video and pass the captured surveillance video to the processing component, and the processing component can use the motion recognition method provided by the embodiments of the present application to perform motion recognition on the person captured in the surveillance video.
Fig. 1B is a schematic diagram of another implementation environment involved in the motion recognition method provided by the embodiments of the present application. As shown in Fig. 1B, the implementation environment may include a camera 20 and a server 30, which can communicate with each other over a wired or wireless network. The camera 20 can capture a surveillance video and send it to the server 30, and the server 30 can use the motion recognition method provided by the embodiments of the present application to perform motion recognition on the person captured in the surveillance video.
Fig. 1C is a schematic diagram of another implementation environment involved in the motion recognition method provided by the embodiments of the present application. As shown in Fig. 1C, the implementation environment may include a server 40, which can obtain a video uploaded by a user and use the motion recognition method provided by the embodiments of the present application to perform motion recognition on the person captured in the video.
Referring to Fig. 2, which shows a flowchart of a motion recognition method provided by an embodiment of the present application. The motion recognition method can be applied to the camera 10 in Fig. 1A, to the server 30 in Fig. 1B, or to the server 40 in Fig. 1C. As shown in Fig. 2, the motion recognition method may include the following steps:
Step 201: Obtain multiple target video frames from a video to be recognized.
A target video frame is a video frame that contains the image of a target person, where the target person is the person on whom motion recognition is to be performed.
Step 202: Obtain a recognition probability for each target video frame.
The recognition probability of a target video frame is the probability that the target person is in a preset posture in that target video frame, where the preset posture is the body posture a person assumes when performing a target motion.
Optionally, when recognizing whether the target person has fallen, the target motion may be a falling motion, and the preset posture may be the body posture of a person when falling, that is, a fallen posture.
Step 203: According to the recognition probabilities of the target video frames, screen the multiple target video frames to obtain at least one candidate video frame sequence.
Each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold.
Optionally, when recognizing whether the target person has fallen, each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person has fallen exceeds the preset likelihood threshold.
Step 204: Perform motion recognition on each candidate video frame sequence, and determine, based on the recognition result, whether the target person has performed the target motion.
In summary, in the motion recognition method provided by this embodiment of the present application, multiple target video frames containing the image of a target person are obtained from the video to be recognized, a recognition probability is obtained for each target video frame, candidate video frame sequences are then screened out using these recognition probabilities, and motion recognition is performed on each screened candidate video frame sequence to determine, based on the recognition result, whether the target person has performed the target motion. Because motion recognition does not have to be performed with a neural network model, the many restrictions of neural-network-based motion recognition can be avoided, and the practical value of motion recognition can be improved.
Referring to Fig. 3A, which shows a flowchart of another motion recognition method provided by an embodiment of the present application. The motion recognition method can be applied to the camera 10 in Fig. 1A, to the server 30 in Fig. 1B, or to the server 40 in Fig. 1C. As shown in Fig. 3A, the motion recognition method may include the following steps:
Step 301: Obtain a video to be recognized.
In the implementation environment shown in Fig. 1A, the technical process of motion recognition is performed by the camera, which can use the surveillance video captured by its own camera assembly as the video to be recognized. In the implementation environment shown in Fig. 1B, the technical process of motion recognition is performed by the server, which can use the surveillance video sent by the camera with which it has established a communication connection as the video to be recognized. In the implementation environment shown in Fig. 1C, the technical process of motion recognition is performed by the server, which can use the video uploaded by a user as the video to be recognized.
In the following, the embodiments of the present application are described only by taking the case in which the server performs the technical process of motion recognition as an example. The technical process performed by the camera is similar to that performed by the server and is not described again in the embodiments of the present application.
Step 302: Screen out, from the video to be recognized, multiple candidate video frames that contain a person image, and obtain, for each candidate video frame, the probability that the person in the candidate video frame is in the preset posture.
After obtaining the video to be recognized, the server can perform multi-class detection (English: Multi-class Detection) on each video frame of the video to be recognized, where multi-class detection is a method of detecting whether a video frame contains images of objects of preset classes. Through multi-class detection, the server can determine whether each video frame of the video to be recognized contains a person image, and can obtain the video frames containing a person image as candidate video frames. At the same time, through multi-class detection, the server can also obtain, for each candidate video frame, the probability that the person in the candidate video frame is in the preset posture.
Based on the result of multi-class detection on a video frame, the server can mark the person image in the video frame and, based on the probability that the person is in the preset posture in the video frame, mark the likely body posture of the person in the video frame. Taking recognizing whether a fall has occurred as an example, as shown in Figs. 3B to 3E, the server can mark the person image in video frames P1, P2, P3 and P4 with a rectangular box, and mark the likely body posture of the person around the rectangular box. As shown in Figs. 3B and 3C, the likely body posture of the person in video frames P1 and P2 is a standing posture; as shown in Figs. 3D and 3E, the likely body posture of the person in video frames P3 and P4 is a fallen posture.
In actual implementation, the server can perform multi-class detection on a video frame in multiple ways. Taking recognizing whether a fall has occurred as an example, the embodiments of the present application provide two possible multi-class detection approaches.
In the first approach, the server obtains the image of a moving object in a video frame, where a moving object is an object whose position changes between two adjacent video frames. The server can obtain the image of the moving object as the person image, and obtain the video frames containing the person image as candidate video frames. Meanwhile, the server can determine, based on the length-width ratio of the circumscribed rectangle of the person image, the probability that the person is in a fallen posture in the video frame, where the length of the circumscribed rectangle refers to its length along the column direction of the video frame, and the width refers to its length along the row direction of the video frame.
Fig. 3F is a schematic diagram of a video frame S, and Fig. 3G is a schematic diagram of a video frame Q adjacent to video frame S. As shown in Figs. 3F and 3G, the position of an object A in video frame S differs from its position in video frame Q, so the server can determine object A to be a moving object and obtain the images T1 and T2 of object A as person images. The server can then obtain video frame S, which contains image T1, and video frame Q, which contains image T2, as candidate video frames. Meanwhile, the server can obtain the length-width ratios of the circumscribed rectangles of images T1 and T2. As shown in Figs. 3F and 3G, the length-width ratio of the circumscribed rectangle of image T1 is 3:1, and that of image T2 is also 3:1. The server can determine the probability value corresponding to the ratio 3:1 according to a preset correspondence between length-width ratios and probability values, and obtain that probability value as the probability that object A is in a fallen posture in video frames S and Q.
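A minimal sketch of this first approach is given below, assuming OpenCV 4 is available. The frame-differencing step, the threshold value and the ratio-to-probability table are illustrative assumptions: the patent only states that a moving object is located between adjacent frames and that a preset correspondence between length-width ratios and probability values exists.

```python
import cv2

# Hypothetical correspondence between the length-width ratio (length of the
# circumscribed rectangle along the column direction divided by its width
# along the row direction) and the fallen-posture probability; the patent
# assumes such a preset table exists but does not give its values.
RATIO_TO_PROB = [(0.5, 0.9), (1.0, 0.6), (2.0, 0.3), (3.0, 0.1)]

def fall_probability_from_ratio(ratio):
    """Look up the probability associated with the closest preset ratio."""
    closest = min(RATIO_TO_PROB, key=lambda pair: abs(pair[0] - ratio))
    return closest[1]

def detect_moving_person(prev_frame, frame):
    """Treat the largest region that changed between two adjacent frames as
    the person image, and derive a fallen-posture probability from the
    length-width ratio of its circumscribed rectangle."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no moving object: the frame is not a candidate frame
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    ratio = h / max(w, 1)  # length (columns) over width (rows)
    return (x, y, w, h), fall_probability_from_ratio(ratio)
```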
In the second approach, the server can train a classifier (the classifier may be a neural network model) using, as samples, multiple images of persons in the fallen posture and multiple images of persons in the standing posture. After training the classifier, the server can obtain multiple video frame regions from a video frame according to a sliding-window algorithm, and then use the trained classifier to recognize each video frame region and determine whether it contains a person image. After determining that a certain video frame region contains a person image, the server can obtain that video frame as a candidate video frame and obtain, from the classifier output, the probability that the person is in a fallen posture in that video frame.
As shown in Fig. 3H, the server can obtain multiple video frame regions from video frame Q according to the sliding-window algorithm, where video frame region n is one of the regions obtained from video frame Q. The server can use the trained classifier to recognize video frame region n and determine that it contains a person image; at this point, the server can obtain video frame Q as a candidate video frame and also obtain, from the classifier output, the probability that the person is in a fallen posture in video frame region n.
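A sketch of the second approach, under the assumption that the trained posture classifier is available as a callable that returns whether a region contains a person image and the fallen-posture probability; the window size and stride are illustrative values, not values prescribed by the patent.

```python
def detect_with_classifier(frame, classifier, win_w=64, win_h=128, stride=32):
    """Slide a fixed-size window over the frame (the sliding-window
    algorithm) and keep the best-scoring region that the classifier says
    contains a person image."""
    frame_h, frame_w = frame.shape[:2]
    best_region, best_prob = None, 0.0
    for top in range(0, frame_h - win_h + 1, stride):
        for left in range(0, frame_w - win_w + 1, stride):
            region = frame[top:top + win_h, left:left + win_w]
            contains_person, fall_prob = classifier(region)
            if contains_person and fall_prob >= best_prob:
                best_region, best_prob = (left, top, win_w, win_h), fall_prob
    if best_region is None:
        return None  # the frame is not obtained as a candidate video frame
    return best_region, best_prob
```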
Step 303: Screen the multiple candidate video frames to obtain multiple target video frames.
The server can use a multiple object tracking (English: Multiple Object Tracking) method to screen the multiple candidate video frames and obtain multiple target video frames, where a multiple object tracking method is a method of obtaining the images of the same object from different video frames. For example, Fig. 3I is a schematic diagram of a video frame R, and Fig. 3J is a schematic diagram of a video frame Y; both video frame R and video frame Y contain the images of two objects (persons), and among the object images in video frames R and Y are images of the same object V. Using a multiple object tracking method, the images of object V can be identified from video frames R and Y. In practice, common multiple object tracking methods include detection-based tracking (English: Detection Based Tracking), detection-free tracking (English: Detection Free Tracking) and probability inference (English: Probability Inference).
The multiple target video frames obtained by screening correspond to the trajectory along which the target person moves. For example, the trajectory may be one in which the target person walks along a road for a while and then suddenly falls, and the multiple target video frames correspond to that trajectory; that is, the multiple target video frames are the video frames that captured this trajectory of the target person.
Step 304: According to the recognition probabilities of the target video frames, screen the multiple target video frames to obtain at least one candidate video frame sequence.
As described above, the multiple target video frames obtained by screening correspond to the trajectory along which the target person moves, and this trajectory usually includes both motion segments in which the likelihood that the target person performs the target motion is high and motion segments in which that likelihood is low. For example, the trajectory may be one in which the target person walks along a road for a while, staggers, sways for a few steps and then falls; the likelihood that the target person falls in the motion segment of walking along the road is low, whereas the likelihood that the target person falls in the motion segments of staggering, swaying and falling is high.
The server can screen the multiple target video frames to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold; that is, each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion is high. In subsequent steps, the server performs motion recognition on the candidate video frame sequences and determines, based on the recognition result, whether the target person has performed the target motion. In this way, the server does not need to perform motion recognition on all target video frames, which reduces the computation required for motion recognition.
The technical process of screening candidate video frame sequences from the multiple target video frames is described below.
The server uses a sliding-window algorithm to obtain n-k+1 mutually different video frame sequences from the n target video frames, where each video frame sequence includes k adjacent target video frames, k is a positive integer greater than or equal to 2, n is the number of target video frames, and n is a positive integer greater than or equal to k.
For example, if the server screens 5 target video frames from the candidate video frames, namely target video frames 1, 2, 3, 4 and 5, and each video frame sequence includes 3 target video frames, the server can obtain 5-3+1=3 video frame sequences from the 5 target video frames using the sliding-window algorithm: the 1st video frame sequence includes target video frames 1, 2 and 3, the 2nd includes target video frames 2, 3 and 4, and the 3rd includes target video frames 3, 4 and 5.
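A minimal sketch of this sliding-window sequence generation, assuming the target frames are held in an ordered list (for example, frame indices along the tracked trajectory):

```python
def sliding_sequences(target_frames, k):
    """Return the n-k+1 mutually different sequences of k adjacent target
    video frames obtained by sliding a window of length k over the n frames."""
    n = len(target_frames)
    return [target_frames[i:i + k] for i in range(n - k + 1)]

# With target frames [1, 2, 3, 4, 5] and k = 3 this yields
# [[1, 2, 3], [2, 3, 4], [3, 4, 5]], matching the example above.
```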
After obtaining the n-k+1 mutually different video frame sequences, the server can screen out, as candidate video frame sequences, the video frame sequences that satisfy Formula 1, where Formula 1 is:

$\frac{1}{k}\sum_{i=1}^{k} p_i > Th$

where $p_i$ is the recognition probability of the i-th target video frame in the video frame sequence, 1 ≤ i ≤ k, k is the number of target video frames in the video frame sequence and is a positive integer greater than or equal to 2, and Th is a preset average-value threshold that can be set by a technician and is not specifically limited in the embodiments of the present application.
It can be seen from Formula 1 that the average of the recognition probabilities of the target video frames in each screened candidate video frame sequence is greater than the preset average-value threshold.
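A sketch of the Formula 1 screening step, assuming each target frame's recognition probability is available in a dictionary keyed by a frame identifier:

```python
def screen_candidate_sequences(sequences, recognition_prob, th):
    """Keep the video frame sequences whose mean recognition probability
    exceeds the preset average-value threshold Th (Formula 1)."""
    return [
        seq for seq in sequences
        if sum(recognition_prob[f] for f in seq) / len(seq) > th
    ]
```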
Step 305: Perform motion recognition on each candidate video frame sequence, and determine, based on the recognition result, whether the target person has performed the target motion.
Optionally, for each candidate video frame sequence, the server can determine, according to the recognition probabilities of the target video frames in the candidate video frame sequence, whether the candidate video frame sequence satisfies a decision condition of the target motion; when any candidate video frame sequence satisfies the decision condition of the target motion, the server determines that the target person has performed the target motion.
The decision condition of the target motion may be that the proportion of first target video frames in the video frame sequence is greater than a preset proportion threshold, where a first target video frame is a target video frame whose recognition probability is greater than a preset recognition probability threshold. Both the preset recognition probability threshold and the preset proportion threshold can be set by a technician and are not specifically limited in the embodiments of the present application.
For example, a certain candidate video frame sequence includes 5 target video frames whose recognition probabilities are 20%, 30%, 40%, 50% and 60%, respectively, and the preset recognition probability threshold may be 45%. The candidate video frame sequence then includes 2 first target video frames, whose recognition probabilities are 50% and 60%. The server can obtain the proportion of first target video frames in the candidate video frame sequence, which is 2/5, that is, 40%. If the preset proportion threshold is 30%, then because the proportion of first target video frames in the candidate video frame sequence is greater than the preset proportion threshold, the candidate video frame sequence satisfies the decision condition of the target motion.
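A sketch of this first decision condition, using the thresholds from the example above (45% and 30%) purely as illustrative defaults:

```python
def meets_proportion_condition(seq_probs, prob_threshold=0.45,
                               proportion_threshold=0.30):
    """True if the proportion of first target video frames (frames whose
    recognition probability exceeds prob_threshold) exceeds the preset
    proportion threshold."""
    first_target_count = sum(1 for p in seq_probs if p > prob_threshold)
    return first_target_count / len(seq_probs) > proportion_threshold

# The worked example: [0.20, 0.30, 0.40, 0.50, 0.60] -> 2/5 = 0.40 > 0.30 -> True
```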
Alternatively, the decision condition of the target motion may be that, after the target video frames in the video frame sequence are sorted in descending order of recognition probability, the sum of the recognition probabilities of the first x target video frames (x being a positive integer greater than or equal to 1) is greater than a preset sum threshold, where the preset sum threshold can be set by a technician and is not specifically limited in the embodiments of the present application.
For example, a certain candidate video frame sequence includes 5 target video frames whose recognition probabilities are 20%, 30%, 40%, 50% and 60%, respectively. After the target video frames in the candidate video frame sequence are sorted in descending order of recognition probability, the recognition probabilities of the first 3 target video frames are 60%, 50% and 40%, whose sum is 150%. If the preset sum threshold is 120%, the candidate video frame sequence satisfies the decision condition of the target motion.
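A sketch of the alternative decision condition, again using the example's values (x = 3, sum threshold 120%) as illustrative defaults:

```python
def meets_top_x_sum_condition(seq_probs, x=3, sum_threshold=1.20):
    """True if, after sorting the frames by recognition probability in
    descending order, the probabilities of the first x frames sum to more
    than the preset sum threshold."""
    top_x = sorted(seq_probs, reverse=True)[:x]
    return sum(top_x) > sum_threshold

# The worked example: the top 3 of [0.20, 0.30, 0.40, 0.50, 0.60] sum to 1.50 > 1.20
```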
Step 306: According to a target video frame sequence, determine the start frame and the motion region of the target motion performed by the target person.
Here, a target video frame sequence is a candidate video frame sequence that satisfies the decision condition of the target motion, the start frame is the video frame that reflects the target person beginning to perform the target motion, and the motion region is the region where the target person is located when performing the target motion. Determining the start frame and the motion region of the target motion helps relevant personnel to check in time the situation in which the target person performs the target motion, and to handle it in time according to the motion region in which the target person performs the target motion.
In one possible case, there is only one target video frame sequence among the at least one screened candidate video frame sequence. In this case, the server can determine the first target video frame of that target video frame sequence as the start frame, and determine the motion region according to the positions of the image of the target person in the target video frames of that target video frame sequence.
In another possible case, among the at least one screened candidate video frame sequence there are at least two target video frame sequences whose temporal overlap degree is greater than a preset overlap threshold, where the temporal overlap degree is the proportion of identical target video frames in the two video frame sequences. Because the temporal overlap degree of the at least two target video frame sequences is greater than the preset overlap threshold, the at least two target video frame sequences are likely to reflect the same process of the target person performing the target motion. Therefore, in this case, the server can select a main target video frame sequence from the at least two target video frame sequences, and determine the start frame and the motion region of the target motion from the main target video frame sequence. It should be noted that the preset overlap threshold can be set by a technician and is not described in detail in the embodiments of the present application.
Optionally, the server can determine the first target video frame of the main target video frame sequence as the start frame, and determine the motion region according to the positions of the image of the target person in the target video frames of the main target video frame sequence.
For example, among the at least one screened candidate video frame sequence there are a target video frame sequence ZZ1 and a target video frame sequence ZZ2, where ZZ1 includes 3 target video frames, namely a1, a2 and a3, and ZZ2 also includes 3 target video frames, namely a2, a3 and a4. ZZ1 and ZZ2 therefore contain identical target video frames, namely a2 and a3, and the proportion of identical target video frames (that is, 2/3) is greater than the preset overlap threshold of 1/3. In this case, the server can select a main target video frame sequence from ZZ1 and ZZ2, determine the first target video frame of the main target video frame sequence as the start frame, and determine the motion region according to the positions of the image of the target person in the target video frames of the main target video frame sequence.
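A sketch of the temporal-overlap check used to group target video frame sequences; the frames are assumed to carry hashable identifiers, and the two sequences are assumed to have the same length k, as in the example above.

```python
def temporal_overlap(seq_a, seq_b):
    """Temporal overlap degree: the proportion of identical target video
    frames shared by two video frame sequences."""
    shared = set(seq_a) & set(seq_b)
    return len(shared) / min(len(seq_a), len(seq_b))

# In the example, ZZ1 = ["a1", "a2", "a3"] and ZZ2 = ["a2", "a3", "a4"] share
# 2 of 3 frames, so the overlap 2/3 exceeds the preset threshold of 1/3.
```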
In actual implementation, the server can select, as the main target video frame sequence, the target video frame sequence whose target video frames have the largest average recognition probability among the at least two target video frame sequences.
For example, if the average recognition probability of the target video frames in target video frame sequence ZZ1 is 20% and that in target video frame sequence ZZ2 is 30%, the server can obtain target video frame sequence ZZ2 as the main target video frame sequence.
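A sketch of how the main target video frame sequence could be selected among overlapping sequences, reusing the recognition-probability dictionary assumed earlier:

```python
def select_main_sequence(overlapping_sequences, recognition_prob):
    """Select the target video frame sequence whose frames have the largest
    average recognition probability; its first frame can then be used as the
    start frame of the target motion."""
    return max(
        overlapping_sequences,
        key=lambda seq: sum(recognition_prob[f] for f in seq) / len(seq),
    )
```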
To make the technical solutions provided by the embodiments of the present application easier to understand, the embodiments of the present application take fall recognition as an example to schematically illustrate the technical process performed after the target video frames are obtained:
As shown in Fig. 3K, the technical process includes the following steps:
Step 1: Start.
Step 2: Generate candidate video frame sequences.
Step 3: Perform fall recognition on the candidate video frame sequences.
Step 4: Merge the target video frame sequences.
Here, merging means that when there are at least two target video frame sequences whose temporal overlap degree is greater than the preset overlap threshold, a main target video frame sequence is selected from the at least two target video frame sequences, and the unselected target video frame sequences can be discarded.
Step 5: End.
Step 307: When the target person has performed the target motion, alert security personnel.
For example, when the target person has fallen, the server can alert security personnel.
Optionally, the server can store the address information of the terminal of the security personnel, for example an IP (Internet Protocol) address, and push alarm information to the terminal based on that address information; alternatively, the server can store the communication number of the terminal of the security personnel and send an alarm short message to the terminal based on that communication number.
Similarly, when motion recognition is performed by the camera, the camera can also alert the security personnel based on the address information or communication number of their terminal. In addition, when motion recognition is performed by the camera, the camera can send alarm information to a server with which it has established a communication connection, and the server forwards the alarm information to the terminal of the security personnel.
In conclusion motion recognition method provided by the embodiments of the present application, includes mesh by being obtained from video to be identified Multiple target video frames of the image of personage are marked, and obtain the identification probability of each target video frame, then, utilize each target The identification probability of video frame screens to obtain candidate video frame sequence, and each candidate video frame sequence obtained to screening is transported Dynamic identification, to determine whether target person has carried out target movement based on recognition result, since neural network mould can not be utilized Type carries out movement identification, therefore, can evade many restrictions that movement identification is carried out using neural network model, so as to carry The practical value of height movement identification.
Referring to Fig. 4, which shows a block diagram of a motion recognition device 400 provided by an embodiment of the present application. The motion recognition device 400 can be configured in the camera 10 shown in Fig. 1A, in the server 30 shown in Fig. 1B, or in the server 40 shown in Fig. 1C. As shown in Fig. 4, the motion recognition device 400 may include a frame obtaining module 401, a probability obtaining module 402, a screening module 403 and a motion determining module 404.
The frame obtaining module 401 is configured to obtain multiple target video frames from a video to be recognized, where each target video frame includes an image of a target person.
The probability obtaining module 402 is configured to obtain a recognition probability for each target video frame, where the recognition probability is the probability that the target person is in a preset posture in the target video frame, and the preset posture is the body posture a person assumes when performing a target motion.
The screening module 403 is configured to screen the multiple target video frames according to their recognition probabilities to obtain at least one candidate video frame sequence, where each candidate video frame sequence corresponds to a motion segment in which the likelihood that the target person performs the target motion exceeds a preset likelihood threshold.
The motion determining module 404 is configured to perform motion recognition on each candidate video frame sequence and determine, based on the recognition result, whether the target person has performed the target motion.
In an embodiment of the present application, each video frame sequence includes k adjacent target video frames, where k is a positive integer greater than or equal to 2, and the screening module 403 is specifically configured to: obtain n-k+1 mutually different video frame sequences from the n target video frames using a sliding-window algorithm, where each video frame sequence includes k adjacent target video frames and n is a positive integer greater than or equal to k; and screen the n-k+1 video frame sequences to obtain the at least one candidate video frame sequence, where the average of the recognition probabilities of the target video frames in each candidate video frame sequence is greater than a preset average-value threshold.
In an embodiment of the present application, the motion determining module 404 is specifically configured to: for each candidate video frame sequence, determine, according to the recognition probabilities of the target video frames in the candidate video frame sequence, whether the candidate video frame sequence satisfies a decision condition of the target motion.
In an embodiment of the present application, the decision condition of the target motion is that the proportion of first target video frames in the video frame sequence is greater than a preset proportion threshold, where a first target video frame is a target video frame whose recognition probability is greater than a preset recognition probability threshold.
In an embodiment of the present application, the motion determining module 404 is specifically configured to: when any candidate video frame sequence satisfies the decision condition of the target motion, determine that the target person has performed the target motion.
In an embodiment of the present application, the target motion is a falling motion, and the preset posture is the body posture of a person when falling.
In an embodiment of the present application, the frame obtaining module 401 is specifically configured to: screen out, from the video to be recognized, multiple candidate video frames containing a person image; and screen out, from the multiple candidate video frames, the multiple target video frames containing the image of the target person.
Fig. 5 shows a block diagram of another motion recognition device 500 provided by an embodiment of the present application. As shown in Fig. 5, in addition to the modules included in the motion recognition device 400, the motion recognition device 500 may further include a region obtaining module 405.
The region obtaining module 405 is configured to determine, according to a target video frame sequence, the start frame and the motion region of the target motion performed by the target person, where the target video frame sequence is a candidate video frame sequence that satisfies the decision condition of the target motion, the start frame is the video frame that reflects the target person beginning to perform the target motion, and the motion region is the region where the target person is located when performing the target motion.
In an embodiment of the present application, the region obtaining module 405 is specifically configured to: when there are, among the at least one candidate video frame sequence, at least two target video frame sequences whose temporal overlap degree is greater than a preset overlap threshold, select a main target video frame sequence from the at least two target video frame sequences, where the temporal overlap degree is the proportion of identical target video frames in the two video frame sequences; and determine, from the main target video frame sequence, the start frame and the motion region of the target motion performed by the target person.
In an embodiment of the present application, the region obtaining module 405 is specifically configured to: select, as the main target video frame sequence, the target video frame sequence whose target video frames have the largest average recognition probability among the at least two target video frame sequences.
In conclusion movement identification device provided by the embodiments of the present application, includes mesh by being obtained from video to be identified Multiple target video frames of the image of personage are marked, and obtain the identification probability of each target video frame, then, utilize each target The identification probability of video frame screens to obtain candidate video frame sequence, and each candidate video frame sequence obtained to screening is transported Dynamic identification, to determine whether target person has carried out target movement based on recognition result, since neural network mould can not be utilized Type carries out movement identification, therefore, can evade many restrictions that movement identification is carried out using neural network model, so as to carry The practical value of height movement identification.
About the device in above-described embodiment, wherein modules execute the concrete mode of operation in related this method Embodiment in be described in detail, explanation will be not set forth in detail herein.
Fig. 6 shows a structural block diagram of a server 600 provided by an exemplary embodiment of the present application. The server 600 includes a central processing unit (CPU) 601, a system memory 604 including a random access memory (RAM) 602 and a read-only memory (ROM) 603, and a system bus 605 connecting the system memory 604 and the central processing unit 601. The server 600 further includes a basic input/output system (I/O system) 606 that helps transmit information between the devices in the computer, and a mass storage device 607 for storing an operating system 613, application programs 614 and other program modules 615.
The basic input/output system 606 includes a display 608 for displaying information and an input device 609, such as a mouse or a keyboard, for a user to input information. Both the display 608 and the input device 609 are connected to the central processing unit 601 through an input/output controller 610 connected to the system bus 605. The basic input/output system 606 may also include the input/output controller 610 for receiving and processing input from multiple other devices such as a keyboard, a mouse or an electronic stylus. Similarly, the input/output controller 610 also provides output to a display screen, a printer or another type of output device.
The mass storage device 607 is connected to the central processing unit 601 through a mass storage controller (not shown) connected to the system bus 605. The mass storage device 607 and its associated computer-readable medium provide non-volatile storage for the server 600. That is, the mass storage device 607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable medium may include a computer storage medium and a communication medium. The computer storage medium includes volatile and non-volatile, removable and non-removable media implemented by any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data. The computer storage medium includes RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies, CD-ROM, DVD or other optical storage, cassettes, magnetic tape, disk storage or other magnetic storage devices. Certainly, a person skilled in the art will know that the computer storage medium is not limited to the foregoing. The system memory 604 and the mass storage device 607 may be collectively referred to as the memory.
According to various embodiments of the present application, the server 600 may also be operated by a remote computer connected to a network such as the Internet. That is, the server 600 may be connected to a network 612 through a network interface unit 611 connected to the system bus 605; in other words, the network interface unit 611 may also be used to connect to another type of network or a remote computer system (not shown).
The memory further includes one or more programs that are stored in the memory, and the central processing unit 601 implements the motion recognition method shown in Fig. 2 or Fig. 3A by executing the one or more programs.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example a memory including instructions, where the instructions can be executed by the processor of the server to complete the motion recognition method shown in each embodiment of the present application. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 7 shows a structural block diagram of a video camera 700 provided by an exemplary embodiment of the present application. As shown in Fig. 7, the video camera 700 includes a processing component 701, a camera assembly 702 and a storage assembly 703. The processing component 701 may be a processing chip and may be connected to the camera assembly 702 and the storage assembly 703 respectively; the camera assembly 702 is configured to shoot video and may be a camera; the storage assembly 703 may store an operating system, application programs or other program modules; and the processing component 701 implements the motion recognition method shown in Fig. 2 or Fig. 3A by executing the application programs stored in the storage assembly 703.
An embodiment of the present application further provides a computer-readable storage medium, the storage medium being a non-volatile storage medium. At least one instruction, at least one program, a code set or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the motion recognition method provided by the above embodiments of the present application.
An embodiment of the present application further provides a computer program product storing instructions which, when run on a computer, cause the computer to perform the motion recognition method provided by the embodiments of the present application.
An embodiment of the present application further provides a chip including a programmable logic circuit and/or program instructions, the chip performing the motion recognition method provided by the embodiments of the present application when it runs.
In the embodiments of the present application, the term "and/or" describes three logical relations: "A and/or B" covers the cases where A exists alone, B exists alone, and A and B exist at the same time.
A person of ordinary skill in the art will appreciate that all or some of the steps of the above embodiments may be implemented by hardware, or may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (15)

1. A motion recognition method, characterized in that the method comprises:
obtaining a plurality of target video frames from a video to be recognized, the target video frames including an image of a target person;
obtaining a recognition probability of each target video frame, the recognition probability being the probability that the target person is in a preset posture in the target video frame, the preset posture being the body posture of a person performing a target motion;
screening, according to the recognition probability of each target video frame, at least one candidate video frame sequence from the plurality of target video frames, each candidate video frame sequence corresponding to a motion segment in which the possibility that the target person performs the target motion exceeds a preset possibility threshold; and
performing motion recognition on each candidate video frame sequence, and determining, based on the recognition result, whether the target person has performed the target motion.
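For readability, the following is a minimal Python sketch of the screening-then-recognition flow recited in claim 1. It is illustrative only and not the claimed implementation: the function name, the window length k and all threshold values are assumptions, and the per-frame probabilities are taken as given (in the embodiments they would come from a posture-classification model).

```python
def target_motion_detected(probs, k, avg_thr, prob_thr, ratio_thr):
    """Sketch of claim 1: screen candidate sequences of k adjacent target
    frames by their mean recognition probability, then decide for each
    candidate whether the target motion was performed."""
    for start in range(len(probs) - k + 1):
        window = probs[start:start + k]
        if sum(window) / k <= avg_thr:          # screening step
            continue
        high = sum(1 for p in window if p > prob_thr)
        if high / k > ratio_thr:                # per-sequence decision
            return True
    return False

# Per-frame probabilities that the target person is in the preset posture.
print(target_motion_detected([0.1, 0.8, 0.9, 0.95, 0.9, 0.2],
                             k=3, avg_thr=0.6, prob_thr=0.75, ratio_thr=0.6))  # True
```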
2. The method according to claim 1, characterized in that each candidate video frame sequence includes k adjacent target video frames, k being a positive integer greater than or equal to 2; and
the screening, according to the recognition probability of each target video frame, at least one candidate video frame sequence from the plurality of target video frames comprises:
obtaining n-k+1 mutually different video frame sequences from the n target video frames by using a sliding window algorithm, each video frame sequence including k adjacent target video frames, n being a positive integer greater than or equal to k; and
screening the at least one candidate video frame sequence from the n-k+1 video frame sequences, the average value of the recognition probabilities of the target video frames in each candidate video frame sequence being greater than a preset average value threshold.
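A minimal sketch, under stated assumptions, of the sliding-window screening in claim 2: the n target frames are represented only by their recognition probabilities, and the window length and the average-value threshold are illustrative numbers.

```python
def screen_candidate_sequences(probs, k, avg_threshold):
    """Slide a window of k adjacent target frames over the n per-frame
    recognition probabilities (giving n - k + 1 windows) and keep the
    windows whose mean probability exceeds the preset average threshold."""
    n = len(probs)
    candidates = []
    for start in range(n - k + 1):
        window = probs[start:start + k]
        if sum(window) / k > avg_threshold:
            candidates.append((start, start + k))   # frame-index range of the candidate
    return candidates

# Eight target frames, window length k = 4, average threshold 0.6.
probs = [0.1, 0.2, 0.7, 0.8, 0.9, 0.85, 0.3, 0.1]
print(screen_candidate_sequences(probs, k=4, avg_threshold=0.6))
# [(1, 5), (2, 6), (3, 7)]
```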
3. The method according to claim 1, characterized in that the performing motion recognition on each candidate video frame sequence comprises:
for each candidate video frame sequence, determining, according to the recognition probability of each target video frame in the candidate video frame sequence, whether the candidate video frame sequence meets a decision condition of the target motion.
4. The method according to claim 3, characterized in that the decision condition of the target motion is that the proportion of first target video frames in a video frame sequence is greater than a preset proportion threshold, the first target video frames being the target video frames whose recognition probability is greater than a preset recognition probability threshold.
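The decision condition of claims 3 and 4 can be sketched as follows; the two thresholds are illustrative assumptions, not values taken from the application.

```python
def meets_decision_condition(window_probs, prob_threshold, ratio_threshold):
    """A candidate sequence meets the decision condition when the proportion
    of 'first target video frames' (frames whose recognition probability
    exceeds the preset recognition-probability threshold) is greater than
    the preset proportion threshold."""
    high = sum(1 for p in window_probs if p > prob_threshold)
    return high / len(window_probs) > ratio_threshold

# Three of four frames exceed 0.8, i.e. a proportion of exactly 0.75,
# which is not strictly greater than the 0.75 proportion threshold.
print(meets_decision_condition([0.9, 0.85, 0.82, 0.4],
                               prob_threshold=0.8, ratio_threshold=0.75))  # False
```

Claim 5 below then reduces to checking whether any candidate sequence satisfies this condition, e.g. `any(meets_decision_condition(c, 0.8, 0.75) for c in candidates)`.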
5. The method according to claim 3, characterized in that the determining, based on the recognition result, whether the target person has performed the target motion comprises:
when any candidate video frame sequence meets the decision condition of the target motion, determining that the target person has performed the target motion.
6. The method according to claim 3, characterized in that the method further comprises:
determining, according to a target video frame sequence, a start frame and a motion region of the target motion performed by the target person;
wherein the target video frame sequence is a candidate video frame sequence that meets the decision condition of the target motion, the start frame is the video frame reflecting that the target person starts to perform the target motion, and the motion region is the region where the target person is located when performing the target motion.
7. The method according to claim 6, characterized in that the determining, according to a target video frame sequence, a start frame and a motion region of the target motion performed by the target person comprises:
when there are, in the at least one candidate video frame sequence, at least two target video frame sequences whose time-domain registration is greater than a preset registration threshold, selecting a main target video frame sequence from the at least two target video frame sequences, the time-domain registration being the proportion of identical target video frames in two video frame sequences; and
determining, in the main target video frame sequence, the start frame and the motion region of the target motion performed by the target person.
8. The method according to claim 7, characterized in that the selecting a main target video frame sequence from the at least two target video frame sequences comprises:
selecting, as the main target video frame sequence, the target video frame sequence whose target video frames have the largest average recognition probability among the at least two target video frame sequences.
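A sketch of the time-domain-registration check of claim 7 and the main-sequence selection of claim 8, under the assumption that sequences are lists of frame indices and that the registration is normalised by the longer of the two sequences (the application does not fix this normalisation).

```python
def temporal_registration(seq_a, seq_b):
    """Proportion of identical target video frames shared by two candidate
    sequences (time-domain registration of claim 7); dividing by the longer
    sequence is an assumption made for this sketch."""
    shared = len(set(seq_a) & set(seq_b))
    return shared / max(len(seq_a), len(seq_b))

def select_main_sequence(sequences, probs):
    """Claim 8: among the overlapping target sequences, pick the one whose
    frames have the largest mean recognition probability."""
    return max(sequences, key=lambda seq: sum(probs[i] for i in seq) / len(seq))

# Two candidate sequences (frame indices) that share three of four frames.
probs = {1: 0.7, 2: 0.9, 3: 0.95, 4: 0.9, 5: 0.6}
seq_a, seq_b = [1, 2, 3, 4], [2, 3, 4, 5]
if temporal_registration(seq_a, seq_b) > 0.5:            # illustrative registration threshold
    print(select_main_sequence([seq_a, seq_b], probs))   # [1, 2, 3, 4]
```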
9. The method according to any one of claims 1 to 8, characterized in that the target motion is a falling motion and the preset posture is the body posture of a person when falling.
10. The method according to any one of claims 1 to 8, characterized in that the obtaining a plurality of target video frames from a video to be recognized comprises:
screening a plurality of candidate video frames including a person image from the video to be recognized; and
screening the plurality of target video frames including the image of the target person from the plurality of candidate video frames.
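The two-stage frame filtering of claim 10 can be sketched as below; `detect_person` and `is_target_person` are hypothetical predicates standing in for a person-detection model and a target-person identification model, which the claim does not specify.

```python
def select_target_frames(frames, detect_person, is_target_person):
    """First keep the frames that contain any person image, then keep only
    those that contain the target person."""
    candidate_frames = [f for f in frames if detect_person(f)]
    return [f for f in candidate_frames if is_target_person(f)]

# Toy usage with placeholder predicates; real ones would be detection and
# identification models applied to decoded video frames.
frames = ["f0", "f1", "f2", "f3"]
print(select_target_frames(frames,
                           detect_person=lambda f: f != "f0",
                           is_target_person=lambda f: f in ("f2", "f3")))
# ['f2', 'f3']
```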
11. A motion recognition method, characterized in that the method comprises:
obtaining a surveillance video;
obtaining a plurality of target video frames from the surveillance video, the target video frames including an image of a target person;
obtaining a recognition probability of each target video frame, the recognition probability being the probability that the target person is in a preset posture in the target video frame, the preset posture being the body posture of a person performing a target motion;
screening, according to the recognition probability of each target video frame, at least one candidate video frame sequence from the plurality of target video frames, each candidate video frame sequence corresponding to a motion segment in which the possibility that the target person performs the target motion exceeds a preset possibility threshold;
performing motion recognition on each candidate video frame sequence, and determining, based on the recognition result, whether the target person has performed the target motion; and
when it is determined that the target person has performed the target motion, sending alarm information to a preset terminal.
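A short sketch of the alerting step added by claim 11, reusing the `target_motion_detected` sketch shown after claim 1; `send_alert` is a hypothetical callable standing in for whatever channel delivers the alarm information to the preset terminal.

```python
def monitor_and_alert(surveillance_probs, send_alert, **thresholds):
    """If the target motion is detected in the surveillance video, send
    alarm information to the preset terminal; otherwise do nothing."""
    if target_motion_detected(surveillance_probs, **thresholds):
        send_alert("target motion detected in surveillance video")
        return True
    return False

# Toy usage: print() stands in for delivering the alarm to a preset terminal.
monitor_and_alert([0.1, 0.8, 0.9, 0.95, 0.9, 0.2], send_alert=print,
                  k=3, avg_thr=0.6, prob_thr=0.75, ratio_thr=0.6)
```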
12. A motion recognition device, characterized in that the device comprises:
a frame obtaining module, configured to obtain a plurality of target video frames from a video to be recognized, the target video frames including an image of a target person;
a probability obtaining module, configured to obtain a recognition probability of each target video frame, the recognition probability being the probability that the target person is in a preset posture in the target video frame, the preset posture being the body posture of a person performing a target motion;
a screening module, configured to screen, according to the recognition probability of each target video frame, at least one candidate video frame sequence from the plurality of target video frames, each candidate video frame sequence corresponding to a motion segment in which the possibility that the target person performs the target motion exceeds a preset possibility threshold; and
a motion determining module, configured to perform motion recognition on each candidate video frame sequence and determine, based on the recognition result, whether the target person has performed the target motion.
13. A motion recognition device, characterized in that the motion recognition device comprises a processor and a memory, the memory storing at least one instruction, and the instruction being loaded and executed by the processor to implement the motion recognition method according to any one of claims 1 to 10.
14. The device according to claim 13, characterized in that the motion recognition device is a video camera or a server.
15. A computer-readable storage medium, characterized in that at least one instruction is stored in the computer-readable storage medium, and the instruction is loaded and executed by a processor to implement the motion recognition method according to any one of claims 1 to 10.
CN201810315741.7A 2018-04-10 2018-04-10 Motion recognition method, motion recognition device and storage medium Active CN108446669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810315741.7A CN108446669B (en) 2018-04-10 2018-04-10 Motion recognition method, motion recognition device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810315741.7A CN108446669B (en) 2018-04-10 2018-04-10 Motion recognition method, motion recognition device and storage medium

Publications (2)

Publication Number Publication Date
CN108446669A (en) 2018-08-24
CN108446669B CN108446669B (en) 2023-01-10

Family

ID=63199542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810315741.7A Active CN108446669B (en) 2018-04-10 2018-04-10 Motion recognition method, motion recognition device and storage medium

Country Status (1)

Country Link
CN (1) CN108446669B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104717457A (en) * 2013-12-13 2015-06-17 华为技术有限公司 Video condensing method and device
CN106131529A (en) * 2016-06-30 2016-11-16 联想(北京)有限公司 A kind of method of video image processing and device
CN106951834A (en) * 2017-03-03 2017-07-14 沈阳航空航天大学 It is a kind of that motion detection method is fallen down based on endowment robot platform
CN107194419A (en) * 2017-05-10 2017-09-22 百度在线网络技术(北京)有限公司 Video classification methods and device, computer equipment and computer-readable recording medium
CN107798313A (en) * 2017-11-22 2018-03-13 杨晓艳 A kind of human posture recognition method, device, terminal and storage medium

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308460B (en) * 2018-09-06 2021-04-02 深兰科技(上海)有限公司 Article detection method, system and computer readable storage medium
CN109308460A (en) * 2018-09-06 2019-02-05 深兰科技(上海)有限公司 Article detection method, system and computer readable storage medium
CN109389088B (en) * 2018-10-12 2022-05-24 腾讯科技(深圳)有限公司 Video recognition method, device, machine equipment and computer readable storage medium
CN109389088A (en) * 2018-10-12 2019-02-26 腾讯科技(深圳)有限公司 Video frequency identifying method, device, machinery equipment and computer readable storage medium
CN111126112B (en) * 2018-10-31 2024-04-16 顺丰科技有限公司 Candidate region determination method and device
CN111126112A (en) * 2018-10-31 2020-05-08 顺丰科技有限公司 Candidate region determination method and device
CN111126115A (en) * 2018-11-01 2020-05-08 顺丰科技有限公司 Violence sorting behavior identification method and device
CN111126115B (en) * 2018-11-01 2024-06-07 顺丰科技有限公司 Violent sorting behavior identification method and device
CN109409325A (en) * 2018-11-09 2019-03-01 联想(北京)有限公司 A kind of recognition methods and electronic equipment
CN111753587A (en) * 2019-03-28 2020-10-09 杭州海康威视数字技术股份有限公司 Method and device for detecting falling to ground
CN111753587B (en) * 2019-03-28 2023-09-29 杭州海康威视数字技术股份有限公司 Ground falling detection method and device
CN110037500A (en) * 2019-05-17 2019-07-23 北京硬壳科技有限公司 A kind of Intelligent mirror and its control method
CN110472531A (en) * 2019-07-29 2019-11-19 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN110472531B (en) * 2019-07-29 2023-09-01 腾讯科技(深圳)有限公司 Video processing method, device, electronic equipment and storage medium
CN112507760A (en) * 2019-09-16 2021-03-16 杭州海康威视数字技术股份有限公司 Method, device and equipment for detecting violent sorting behavior
CN112507760B (en) * 2019-09-16 2024-05-31 杭州海康威视数字技术股份有限公司 Method, device and equipment for detecting violent sorting behaviors
CN111931679A (en) * 2020-08-21 2020-11-13 腾讯科技(深圳)有限公司 Action recognition method, device, equipment and storage medium
CN112505049B (en) * 2020-10-14 2021-08-03 上海互觉科技有限公司 Mask inhibition-based method and system for detecting surface defects of precision components
CN112505049A (en) * 2020-10-14 2021-03-16 上海互觉科技有限公司 Mask inhibition-based method and system for detecting surface defects of precision components
CN112465937B (en) * 2020-11-03 2024-03-15 影石创新科技股份有限公司 Method for generating stop motion animation, computer readable storage medium and computer equipment
CN112465937A (en) * 2020-11-03 2021-03-09 影石创新科技股份有限公司 Method for generating stop motion animation, computer readable storage medium and computer device
WO2022227490A1 (en) * 2021-04-25 2022-11-03 上海商汤智能科技有限公司 Behavior recognition method and apparatus, device, storage medium, computer program, and program product

Also Published As

Publication number Publication date
CN108446669B (en) 2023-01-10

Similar Documents

Publication Publication Date Title
CN108446669A (en) motion recognition method, device and storage medium
Liu et al. Intelligent video systems and analytics: A survey
JP6425856B1 (en) Video recording method, server, system and storage medium
JP5980148B2 (en) How to measure parking occupancy from digital camera images
US9367733B2 (en) Method and apparatus for detecting people by a surveillance system
US20130216094A1 (en) Systems, methods and computer program products for identifying objects in video data
CN108629284A (en) The method and device of Real- time Face Tracking and human face posture selection based on embedded vision system
Zabłocki et al. Intelligent video surveillance systems for public spaces–a survey
JP6724904B2 (en) Image processing apparatus, image processing method, and image processing system
KR102172239B1 (en) Method and system for abnormal situation monitoring based on video
US9811755B2 (en) Object monitoring system, object monitoring method, and monitoring target extraction program
JP2018173914A (en) Image processing system, imaging apparatus, learning model creation method, and information processing device
US20200145623A1 (en) Method and System for Initiating a Video Stream
CN109410278A (en) A kind of object localization method, apparatus and system
US10657783B2 (en) Video surveillance method based on object detection and system thereof
Tran et al. Anomaly analysis in images and videos: A comprehensive review
Elarbi-Boudihir et al. Intelligent video surveillance system architecture for abnormal activity detection
Sandifort et al. An entropy model for loiterer retrieval across multiple surveillance cameras
JP5917303B2 (en) MOBILE BODY DETECTING DEVICE, MOBILE BODY DETECTING SYSTEM, AND MOBILE BODY DETECTING METHOD
CN114764895A (en) Abnormal behavior detection device and method
Ng et al. Outdoor illegal parking detection system using convolutional neural network on Raspberry Pi
Negri Estimating the queue length at street intersections by using a movement feature space approach
Seidenari et al. Non-parametric anomaly detection exploiting space-time features
CN113221800A (en) Monitoring and judging method and system for target to be detected
Karishma et al. Artificial Intelligence in Video Surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant