CN107454359B - Method and device for playing video - Google Patents


Info

Publication number
CN107454359B
Authority
CN
China
Prior art keywords
behavior
human behavior
human
video
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710633215.0A
Other languages
Chinese (zh)
Other versions
CN107454359A (en)
Inventor
高斯太
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710633215.0A priority Critical patent/CN107454359B/en
Publication of CN107454359A publication Critical patent/CN107454359A/en
Application granted granted Critical
Publication of CN107454359B publication Critical patent/CN107454359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 - Monitoring of end-user related data
    • H04N21/44218 - Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a method and a device for playing a video, and belongs to the technical field of video monitoring. The method includes the following steps: acquiring association information, where the association information indicates a correspondence between a first human behavior and a target time; receiving a second human behavior input by a user while a video is being played; locating, based on the association information, a playing time corresponding to the second human behavior; and playing a target video according to the playing time, where the target video is a video obtained through video recording. The method and the device solve the problem in the related art that a user has to play back the recorded video each time to find the content of interest, which makes the search process cumbersome and the search efficiency low; they simplify the search process and improve search efficiency, and are used for playing videos.

Description

Method and device for playing video
Technical Field
The present disclosure relates to the field of video monitoring technologies, and in particular, to a method and an apparatus for playing a video.
Background
With the large-scale application of video surveillance, cameras have become widespread, their storage capacity has grown, and more and more users record their lives with cameras (such as home cameras).
In the related art, a camera is usually installed at a preset position, configured according to its type (such as an analog surveillance camera or a digital surveillance camera), and then started so that it can begin to work. The camera usually records continuously while it operates, for example from eight in the morning to eight in the evening. When a user needs to view the video recorded by the camera, a playing device acquires the video and plays it.
In implementing the present disclosure, the inventors found that the related art has at least the following problems:
in order to find the content of interest, the user has to play back the video recorded by the camera on the playing device each time; the search process is cumbersome and the search efficiency is low.
Disclosure of Invention
In order to solve the problem in the related art that a user has to play back the recorded video each time to find the content of interest, which makes the search process cumbersome and the search efficiency low, the present disclosure provides a method and a device for playing a video. The technical solutions are as follows:
according to a first aspect of the present disclosure, there is provided a method of playing a video, the method comprising:
acquiring association information indicating a corresponding relationship between a first human behavior and a target moment;
receiving a second human body behavior input by a user in the process of playing the video;
positioning a playing time corresponding to the second human behavior based on the associated information;
and playing the target video according to the playing time, wherein the target video is the video obtained by video recording.
Optionally, positioning the playing time corresponding to the second human behavior based on the associated information includes:
acquiring a first human behavior matched with a second human behavior in the associated information;
acquiring a target moment corresponding to the first human behavior from the associated information;
and taking the target time as the playing time corresponding to the second human behavior.
Optionally, in the process of playing the video, receiving a second human behavior input by the user, including:
receiving a third human body behavior input by a user in the process of playing the video;
in response to receiving the third human behavior, obtaining a predetermined behavior set;
when the predetermined behavior set includes a third human behavior, the third human behavior is determined as the second human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
According to a second aspect of the present disclosure, there is provided a method of playing a video, the method comprising:
detecting a first human behavior during a video recording process;
in response to detecting the first human behavior, acquiring a target time at which the first human behavior is detected, wherein the target time is measured from the time at which video recording started;
and acquiring association information, wherein the association information indicates a corresponding relation between the first human behavior and a target moment so as to position a playing moment corresponding to the input second human behavior based on the association information when a target video is played, and the target video is a video acquired through video recording.
Optionally, in the video recording process, detecting a first human behavior includes:
detecting a fourth human behavior in the video recording process;
in response to detecting the fourth human behavior, acquiring a predetermined behavior set;
when the predetermined set of behaviors includes a fourth human behavior, the fourth human behavior is determined to be the first human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
According to a third aspect of the present disclosure, there is provided an apparatus for playing a video, the apparatus comprising:
an acquisition module configured to acquire association information indicating a correspondence between the first human behavior and a target time;
the receiving module is configured to receive a second human body behavior input by a user in the process of playing the video;
the positioning module is configured to position the playing time corresponding to the second human behavior received by the receiving module based on the association information acquired by the acquiring module;
and the playing module is configured to play the target video according to the playing time positioned by the positioning module, wherein the target video is obtained through video recording.
Optionally, the positioning module is configured to:
acquiring a first human behavior matched with a second human behavior in the associated information;
acquiring a target moment corresponding to the first human behavior from the associated information;
and taking the target time as the playing time corresponding to the second human behavior.
Optionally, the receiving module is configured to:
receiving a third human body behavior input by a user in the process of playing the video;
in response to receiving the third human behavior, obtaining a predetermined behavior set;
when the predetermined behavior set includes a third human behavior, the third human behavior is determined as the second human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
According to a fourth aspect of the present disclosure, there is provided an apparatus for playing a video, the apparatus comprising:
a detection module configured to detect a first human behavior during a video recording process;
the first acquisition module is configured to, in response to the detection module detecting the first human behavior, acquire a target time at which the first human behavior is detected, wherein the target time is measured from the time at which video recording started;
and the second acquisition module is configured to acquire associated information, wherein the associated information indicates a corresponding relationship between the first human body behavior detected by the detection module and the target moment acquired by the first acquisition module, so that when a target video is played, the playing moment corresponding to the input second human body behavior is positioned based on the associated information, and the target video is a video acquired through video recording.
Optionally, the detection module is configured to:
detecting a fourth human behavior in the video recording process;
in response to detecting the fourth human behavior, acquiring a predetermined behavior set;
when the predetermined set of behaviors includes a fourth human behavior, the fourth human behavior is determined to be the first human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
According to a fifth aspect of the present disclosure, there is provided an apparatus for playing a video, the apparatus comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring associated information, wherein the associated information indicates a corresponding relation between the first human behavior and a target moment;
receiving a second human body behavior input by a user in the process of playing the video;
positioning a playing time corresponding to the second human behavior based on the associated information;
and playing the target video according to the playing time, wherein the target video is the video obtained by video recording.
According to a sixth aspect of the present disclosure, there is provided an apparatus for playing a video, the apparatus comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
detecting a first human behavior during a video recording process;
in response to detecting the first human behavior, acquiring a target time at which the first human behavior is detected, wherein the target time is measured from the time at which video recording started;
and acquiring association information, wherein the association information indicates a corresponding relation between the first human behavior and a target moment so as to position a playing moment corresponding to the input second human behavior based on the association information when a target video is played, and the target video is a video acquired through video recording.
According to a seventh aspect of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an apparatus for playing video, enable the apparatus to perform a method of playing video, the method comprising:
acquiring associated information, wherein the associated information indicates a corresponding relation between the first human behavior and a target moment;
receiving a second human body behavior input by a user in the process of playing the video;
positioning a playing time corresponding to the second human behavior based on the associated information;
and playing the target video according to the playing time, wherein the target video is the video obtained by video recording.
According to an eighth aspect of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an apparatus for playing video, enable the apparatus to perform a method of playing video, the method comprising:
detecting a first human behavior during a video recording process;
in response to detecting the first human behavior, acquiring a target time at which the first human behavior is detected, wherein the target time is measured from the time at which video recording started;
and acquiring association information, wherein the association information indicates a corresponding relation between the first human behavior and a target moment so as to position a playing moment corresponding to the input second human behavior based on the association information when a target video is played, and the target video is a video acquired through video recording.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method and the device for playing the video, provided by the embodiment of the disclosure, can acquire the associated information, receive the second human behavior input by the user in the process of playing the video, position the playing time corresponding to the second human behavior based on the associated information, and then play the target video according to the playing time, wherein the target video is the video acquired through the video. Wherein the association information indicates a correspondence between the first human behavior and the target time. Therefore, when searching for the content of interest of the user, the user does not need to play back the recorded video through the playing equipment, so that the searching process is simplified, and the searching efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To more clearly illustrate the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment related to a method for playing a video according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram illustrating a method of playing a video in accordance with an exemplary embodiment;
FIG. 3-1 is a flow diagram illustrating another method of playing a video in accordance with an exemplary embodiment;
FIG. 3-2 is a flow chart of receiving a second human behavior input by the user in the embodiment shown in FIG. 3-1;
FIG. 3-3 is a schematic diagram of receiving a third human behavior input by the user in the embodiment shown in FIG. 3-1;
FIG. 3-4 is a schematic diagram of receiving a third human behavior input by the user in another implementation of the embodiment shown in FIG. 3-1;
FIG. 3-5 is a schematic diagram of receiving a third human behavior input by the user in yet another implementation of the embodiment shown in FIG. 3-1;
FIG. 3-6 is a flow chart of locating the playing time corresponding to the second human behavior in the embodiment shown in FIG. 3-1;
FIG. 4 is a flow diagram illustrating yet another method of playing a video in accordance with an exemplary embodiment;
FIG. 5 is a flow diagram illustrating yet another method of playing a video in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for playing video in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating another apparatus for playing video in accordance with an illustrative embodiment;
fig. 8 is a block diagram illustrating yet another apparatus for playing video according to an example embodiment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic diagram of an implementation environment related to a method for playing a video according to some embodiments of the present disclosure. The implementation environment may be indoor or outdoor. For example, the indoor environment may be an office, a family living room, or a hotel lobby, and the outdoor environment may be a hotel doorway. The implementation environment may include: a monitoring device 10, a user 20, and a playing device 30. The monitoring device 10 has a surveillance video recording function; for example, it may be a camera arranged in an office, in a family living room, or at a hotel door. The playing device 30 may be a mobile phone, a tablet computer, or a notebook computer. A connection between the monitoring device 10 and the playing device 30 may be established via a wireless or wired network. In the embodiments of the present disclosure, the monitoring device 10 can detect a human behavior of the user 20 (such as a facial expression) during video recording, obtain the target time at which the human behavior is detected, and then obtain association information indicating a correspondence between that human behavior and the target time. When playing the video, the playing device 30 can locate the playing time corresponding to an input human behavior based on the association information acquired by the monitoring device 10, and then play the target video according to that playing time.
Fig. 2 is a flowchart illustrating a method for playing a video according to an exemplary embodiment, which is illustrated by applying the method to the playing device 30 in the implementation environment shown in fig. 1, and the method may include the following steps:
in step 201, association information is acquired, the association information indicating a correspondence between the first human behavior and the target time.
In step 202, a second human behavior input by the user is received during the playing of the video.
In step 203, a playing time corresponding to the second human behavior is located based on the association information.
In step 204, a target video is played according to the playing time, where the target video is a video obtained by video recording.
To sum up, the method for playing a video provided by the embodiments of the present disclosure can acquire the association information, receive a second human behavior input by the user while the video is being played, locate the playing time corresponding to the second human behavior based on the association information, and then play the target video according to that playing time, where the target video is a video obtained through video recording. The association information indicates a correspondence between the first human behavior and the target time. As a result, the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Fig. 3-1 is a flow chart illustrating another method for playing back video according to an exemplary embodiment, which is illustrated by applying the method to the playback device 30 in the implementation environment shown in fig. 1, and which may include the following steps:
in step 301, association information is obtained, which indicates a correspondence between the first human behavior and the target time.
The first human behavior may include at least one of: facial expressions, limb movements, vocal expressions.
The playing device obtains associated information, and the associated information is used for indicating a corresponding relation between the first human behavior and the target time. The associated information is obtained by the monitoring device during the video recording process.
For example, the association information may be as shown in Table 1. The first human behavior may be a V-shaped gesture, and the target time corresponding to the V-shaped gesture is 14:00. The first human behavior may be a thumbs-up gesture, which corresponds to a target time of 15:30. The first human behavior may be a smiling face, which corresponds to a target time of 16:00. In addition, the first human behavior may also be a vocal expression, or a combination of a facial expression and a limb movement, such as a V-shaped gesture together with a smiling face; the form of the first human behavior is not limited in the embodiments of the present disclosure.
TABLE 1

First human behavior    Target time
V-shaped gesture        14:00
Thumbs-up gesture       15:30
Smiling face            16:00
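To make the structure of the association information concrete, the following is a minimal, hypothetical sketch of how the contents of Table 1 could be held in memory. The behavior identifiers and the interpretation of the times as "HH:MM" offsets measured from the start of recording are illustrative assumptions, not names or formats specified by this disclosure.

```python
# Hypothetical in-memory form of the association information in Table 1.
# Each entry pairs a first human behavior with the target time at which it
# was detected (an "HH:MM" offset from the moment recording started).
association_info = [
    ("v_shaped_gesture", "14:00"),
    ("thumbs_up_gesture", "15:30"),
    ("smiling_face", "16:00"),
]
```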
In step 302, a second human behavior input by the user is received during the playing of the video.
Alternatively, as shown in fig. 3-2, step 302 may include the following sub-steps:
in sub-step 3021, a third human behavior input by the user is received during the playing of the video.
During video playback, when the user wants to search for the content of interest, the user can input a third human behavior into the playing device.
For example, as shown in fig. 3-3, the user may enter the name of a third human behavior into a text input box displayed by the playing device, for example "smiling face", "V-shaped gesture", "thumbs-up gesture", or "nodding".
For example, as shown in fig. 3-4, the playing device may also display a plurality of human behavior names, and the user selects one of them as the third human behavior. In addition, the playing device may display a plurality of human behavior icons, as shown in fig. 3-5, and the user selects one of them as the third human behavior.
For example, the playing device may also collect the third human behavior of the user through a behavior collection module (such as a camera). For example, the user may make a gesture or smile at the camera. In addition, the collection module can also collect a voice signal of the user.
The embodiment of the present disclosure does not limit the manner in which the playing device receives the third human behavior input by the user.
In sub-step 3022, in response to receiving the third human behavior, a predetermined set of behaviors is obtained.
After receiving the third human behavior input by the user, the playing device obtains a predetermined behavior set in response to the third human behavior. The predetermined behavior set includes a plurality of human behaviors. Whether the third human behavior is the second human behavior can be determined from the predetermined behavior set.
In sub-step 3023, when the predetermined set of behaviors includes a third human behavior, the third human behavior is determined to be a second human behavior.
Illustratively, the set of predetermined behaviors includes a human behavior a, a human behavior B, and a human behavior C. Assuming that the third human behavior is human behavior B, the playback device determines the third human behavior as the second human behavior. Assuming that the third human behavior is human behavior F, the playing device determines that the third human behavior is not the second human behavior.
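The check in sub-steps 3022 and 3023 amounts to a set-membership test. Below is a hedged sketch of that filter; the set contents and the function name are hypothetical, not part of this disclosure.

```python
from typing import Optional

# Hypothetical predetermined behavior set; the disclosure only requires that
# it contain a plurality of human behaviors.
PREDETERMINED_BEHAVIORS = {"behavior_a", "behavior_b", "behavior_c"}

def to_second_behavior(third_behavior: str) -> Optional[str]:
    """Sub-steps 3022-3023: the received third human behavior is treated as
    the second human behavior only if it belongs to the predetermined
    behavior set; otherwise it is ignored."""
    if third_behavior in PREDETERMINED_BEHAVIORS:
        return third_behavior
    return None

# to_second_behavior("behavior_b") -> "behavior_b"
# to_second_behavior("behavior_f") -> None (not in the predetermined set)
```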
In step 303, a playing time corresponding to the second human behavior is located based on the association information.
Alternatively, as shown in fig. 3-6, step 303 may include the following substeps:
in sub-step 3031, a first human behavior matching the second human behavior in the associated information is obtained.
For example, when the second human behavior is a V-shaped gesture, the playing device may obtain the first human behavior matching the V-shaped gesture from Table 1: the V-shaped gesture. When the second human behavior is a smiling face, the playing device can acquire the first human behavior matching the smiling face from Table 1: the smiling face.
In sub-step 3032, a target time corresponding to the first human behavior is acquired from the related information.
For example, after obtaining the first human behavior matching the second human behavior from Table 1, the playing device obtains the target time corresponding to that first human behavior from Table 1. For example, if the first human behavior obtained by the playing device from Table 1 in sub-step 3031 is the V-shaped gesture, the playing device obtains the target time corresponding to the V-shaped gesture from Table 1: 14:00.
In sub-step 3033, the target time is used as the playing time corresponding to the second human behavior.
Illustratively, the playing device takes the 14:00 obtained in sub-step 3032 as the playing time corresponding to the second human behavior.
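Sub-steps 3031 to 3033 can be read as a lookup over the association information. The sketch below, which reuses the (behavior, target time) pairs assumed earlier, is illustrative only; the function name and representation are not part of this disclosure.

```python
from typing import Optional

def locate_playing_time(second_behavior: str,
                        association_info: list) -> Optional[str]:
    """Sub-steps 3031-3033: find the first human behavior in the association
    information that matches the second human behavior and return its target
    time as the playing time; None if nothing matches."""
    for first_behavior, target_time in association_info:
        if first_behavior == second_behavior:   # sub-step 3031: match found
            return target_time                  # sub-steps 3032 and 3033
    return None

# With the Table 1 data assumed above:
# locate_playing_time("v_shaped_gesture", association_info) -> "14:00"
```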
In step 304, a target video is played according to the playing time, where the target video is a video obtained by video recording.
Taking Table 1 as an example, assume that the user wants to view the target video starting at the target time 14:00. The user may input "V-shaped gesture" in the text input box shown in fig. 3-3, and the playing device determines that the "V-shaped gesture" is a behavior in the predetermined behavior set. The playing device then obtains the first human behavior matching the V-shaped gesture from Table 1, namely the V-shaped gesture, and acquires the corresponding target time from Table 1: 14:00. The playing device then plays the target video from 14:00, measured from the time at which recording started.
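Step 304 then amounts to seeking the target video to the located playing time and starting playback. The following is a hedged sketch under the assumption that the playing time is an "HH:MM" offset from the start of recording and that a player object exposing seek() and play() is available; neither the player API nor these names come from this disclosure.

```python
def play_from(playing_time: str, player) -> None:
    """Step 304 (sketch): convert an "HH:MM" offset into seconds and ask a
    hypothetical player object to jump there and start playback."""
    hours, minutes = map(int, playing_time.split(":"))
    offset_seconds = hours * 3600 + minutes * 60
    player.seek(offset_seconds)   # jump to the located position in the target video
    player.play()                 # play the target video from that point
```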
To sum up, the method for playing a video provided by the embodiments of the present disclosure can acquire the association information, receive a second human behavior input by the user while the video is being played, locate the playing time corresponding to the second human behavior based on the association information, and then play the target video according to that playing time, where the target video is a video obtained through video recording. The association information indicates a correspondence between the first human behavior and the target time. As a result, the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Fig. 4 is a flowchart illustrating another method for playing video according to an exemplary embodiment, which is illustrated by the monitoring device 10 in the implementation environment shown in fig. 1. The method may include the steps of:
in step 401, during a video recording process, a first human behavior is detected.
In step 402, in response to detecting the first human behavior, a target time at which the first human behavior is detected is obtained, where the target time is measured from the time at which video recording started.
In step 403, association information is obtained, where the association information indicates a corresponding relationship between the first human behavior and the target time, so that when a target video is played, a playing time corresponding to the input second human behavior is located based on the association information, and the target video is a video obtained through video recording.
To sum up, the method for playing a video provided by the embodiments of the present disclosure can detect a first human behavior during video recording, obtain the target time at which the first human behavior is detected in response to detecting it, and then obtain the association information, which indicates a correspondence between the first human behavior and the target time. When the playing device plays the target video, it can therefore locate the playing time corresponding to an input second human behavior based on the association information, so the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Fig. 5 is a flowchart illustrating another method for playing video according to an exemplary embodiment, which is illustrated by the monitoring device 10 in the implementation environment shown in fig. 1. The method may include the steps of:
in step 501, a fourth human behavior is detected during the recording.
The fourth human behavior may include at least one of: facial expressions, limb movements, vocal expressions. For example, the monitoring device may be a camera, and the user may make a gesture or smile at the camera. Alternatively, the monitoring device detects a voice signal of the user.
In step 502, in response to detecting the fourth human behavior, a predetermined set of behaviors is obtained.
The monitoring device acquires a predetermined behavior set comprising a plurality of human behaviors in response to detecting the fourth human behavior. It may be determined from the set of predetermined behaviors whether the fourth human behavior is the first human behavior.
In step 503, when the predetermined set of behaviors includes a fourth human behavior, the fourth human behavior is determined to be the first human behavior.
The first human behavior may include at least one of: facial expressions, limb movements, vocal expressions.
Illustratively, the set of predetermined behaviors includes a human behavior L, a human behavior M, and a human behavior N. Assuming that the fourth human behavior is human behavior M, the monitoring device determines the fourth human behavior as the first human behavior. Assuming that the fourth human behavior is human behavior P, the monitoring device determines that the fourth human behavior is not the first human behavior.
In this embodiment of the present disclosure, when the predetermined behavior set includes the fourth human behavior, the monitoring device obtains the target time at which the first human behavior is detected, and then obtains association information indicating the correspondence between the first human behavior and the target time, so that when the playing device plays the target video, the playing time corresponding to an input second human behavior can be located based on the association information.
In the embodiment of the present disclosure, in order to effectively identify the human behavior and further obtain the associated information, optionally, the monitoring device may determine whether the detected human behavior is a predetermined human behavior during the video recording process. For example, it may be determined whether the detected human behavior is a predetermined human behavior based on the behavior feature. For example, the fourth human behavior is a gesture, and the monitoring device may extract a key point of the gesture, and then compare the extraction result with key point data of each predetermined gesture in the predetermined behavior set. When the extraction result is the same as the key point data of a certain gesture in the predetermined behavior set, the monitoring device determines that the fourth human behavior is the predetermined gesture, that is, determines that the fourth human behavior is the first human behavior.
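As an illustration of the key-point comparison described above, the sketch below compares extracted gesture key points against the stored key-point data of each predetermined gesture. The disclosure speaks of the extraction result being the same as the stored data; since real key points rarely match exactly, the tolerance-based comparison, the NumPy representation, and all names here are assumptions rather than the patented method.

```python
from typing import Optional
import numpy as np

def match_gesture(keypoints: np.ndarray,
                  predetermined_keypoints: dict) -> Optional[str]:
    """Compare the key points extracted from a detected (fourth) human
    behavior against the stored key-point data of each predetermined gesture;
    return the name of the matching gesture, i.e. the first human behavior,
    or None when nothing matches."""
    for name, reference in predetermined_keypoints.items():
        if keypoints.shape == reference.shape and np.allclose(
                keypoints, reference, atol=0.05):
            return name
    return None
```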
In addition, in order to meet users' personalized requirements and assist people with special needs, the monitoring device may also take a detected human behavior directly as the first human behavior, then obtain the target time at which that first human behavior is detected and obtain the association information. For example, a long-term bedridden patient may be unable to perform the behaviors in the predetermined behavior set; in this case, the monitoring device may use the patient's limb movements (such as leg-lifting or arm-swinging movements) as the first human behavior to help the patient mark the target video. The monitoring device may also treat certain detected abnormal behaviors as the first human behavior; for example, if a user suddenly falls ill and collapses within the area monitored by the device, the monitoring device may use the detected human behavior as the first human behavior to help mark the target video.
In step 504, in response to detecting the first human behavior, a target time at which the first human behavior is detected is obtained.
The target time is measured from the moment video recording started.
Optionally, when the predetermined behavior set includes the fourth human behavior, the monitoring device determines the fourth human behavior as the first human behavior, and then, in response to detecting the first human behavior, obtains the target time at which it is detected. For example, if the first human behavior is a V-shaped gesture, the monitoring device acquires the target time at which the V-shaped gesture is detected: 14:00. For another example, if the first human behavior is a thumbs-up gesture, the monitoring device acquires the target time at which the thumbs-up gesture is detected: 15:30.
In step 505, association information is obtained, where the association information indicates a corresponding relationship between the first human body behavior and the target time, so that when the target video is played, a playing time corresponding to the input second human body behavior is located based on the association information.
The target video is a video obtained by video recording.
For example, the association information obtained by the monitoring device may be as shown in Table 1: the V-shaped gesture corresponds to 14:00, the thumbs-up gesture corresponds to 15:30, and the smiling face corresponds to 16:00. In this way, when playing the target video, the playing device can locate the playing time corresponding to an input second human behavior (e.g., a V-shaped gesture) based on Table 1, for example locating the playing time corresponding to the input V-shaped gesture as 14:00, and then play the target video according to that playing time.
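Putting steps 501 to 505 together on the monitoring device side, the following hedged sketch records an association entry whenever a detected behavior belongs to the predetermined behavior set. The string identifiers, the "HH:MM" offset format, and the function name are illustrative assumptions only.

```python
import time

def on_behavior_detected(detected_behavior: str,
                         predetermined_behaviors: set,
                         recording_start: float,   # time.monotonic() captured when recording began
                         association_info: list) -> None:
    """Steps 501-505 (sketch): if the detected (fourth) human behavior is in
    the predetermined behavior set, treat it as the first human behavior and
    store it with the target time, measured from when recording started."""
    if detected_behavior not in predetermined_behaviors:
        return                                          # steps 502-503: not a first human behavior
    elapsed = int(time.monotonic() - recording_start)   # step 504: seconds since recording began
    hours, minutes = divmod(elapsed // 60, 60)
    target_time = f"{hours:02d}:{minutes:02d}"
    association_info.append((detected_behavior, target_time))   # step 505: association entry
```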
To sum up, the method for playing a video provided by the embodiments of the present disclosure can detect a first human behavior during video recording, obtain the target time at which the first human behavior is detected in response to detecting it, and then obtain the association information, which indicates a correspondence between the first human behavior and the target time. When the playing device plays the target video, it can therefore locate the playing time corresponding to an input second human behavior based on the association information, so the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process, improves search efficiency, and can meet the user's personalized requirements.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 6 is a block diagram illustrating an apparatus for playing a video according to an exemplary embodiment. The apparatus may be implemented by software, hardware, or a combination of the two as part or all of the playing device 30 in the implementation environment shown in fig. 1. The apparatus 600 may include:
an obtaining module 610 configured to obtain association information indicating a correspondence between the first human behavior and the target time.
And the receiving module 620 is configured to receive a second human behavior input by the user during the video playing process.
A positioning module 630 configured to position a playing time corresponding to the second human behavior received by the receiving module based on the association information acquired by the acquiring module.
The playing module 640 is configured to play the target video according to the playing time located by the locating module, where the target video is a video obtained through video recording.
To sum up, the apparatus for playing a video provided by the embodiments of the present disclosure can acquire the association information, receive a second human behavior input by the user while the video is being played, locate the playing time corresponding to the second human behavior based on the association information, and then play the target video according to that playing time, where the target video is a video obtained through video recording. The association information indicates a correspondence between the first human behavior and the target time. As a result, the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Optionally, the positioning module 630 is configured to:
acquiring a first human behavior matched with a second human behavior in the associated information;
acquiring a target moment corresponding to the first human behavior from the associated information;
and taking the target time as the playing time corresponding to the second human behavior.
Optionally, the receiving module 620 is configured to:
receiving a third human body behavior input by a user in the process of playing the video;
in response to receiving the third human behavior, obtaining a predetermined behavior set;
when the predetermined behavior set includes a third human behavior, the third human behavior is determined as the second human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
To sum up, the apparatus for playing a video provided by the embodiments of the present disclosure can acquire the association information, receive a second human behavior input by the user while the video is being played, locate the playing time corresponding to the second human behavior based on the association information, and then play the target video according to that playing time, where the target video is a video obtained through video recording. The association information indicates a correspondence between the first human behavior and the target time. As a result, the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Fig. 7 is a block diagram illustrating an apparatus for playing video, which may be implemented by software, hardware or a combination of both, as part or all of the monitoring device 10 in the implementation environment shown in fig. 1 according to an exemplary embodiment. The apparatus 700 may include:
a detection module 710 configured to detect a first human behavior during the recording.
A first obtaining module 720, configured to, in response to the detection module detecting the first human behavior, obtain a target time at which the first human behavior is detected, the target time being measured from the time at which video recording started.
The second obtaining module 730 is configured to obtain association information indicating a corresponding relationship between the first human body behavior detected by the detecting module and the target time obtained by the first obtaining module, so that when the target video is played, the playing time corresponding to the input second human body behavior is located based on the association information, and the target video is a video obtained through video recording.
To sum up, the apparatus for playing a video provided by the embodiments of the present disclosure can detect a first human behavior during video recording, obtain the target time at which the first human behavior is detected in response to detecting it, and then obtain the association information, which indicates a correspondence between the first human behavior and the target time. When the playing device plays the target video, it can therefore locate the playing time corresponding to an input second human behavior based on the association information, so the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
Optionally, the detecting module 710 is configured to:
detecting a fourth human behavior in the video recording process;
in response to detecting the fourth human behavior, acquiring a predetermined behavior set;
when the predetermined set of behaviors includes a fourth human behavior, the fourth human behavior is determined to be the first human behavior.
Optionally, the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
To sum up, the apparatus for playing a video provided by the embodiments of the present disclosure can detect a first human behavior during video recording, obtain the target time at which the first human behavior is detected in response to detecting it, and then obtain the association information, which indicates a correspondence between the first human behavior and the target time. When the playing device plays the target video, it can therefore locate the playing time corresponding to an input second human behavior based on the association information, so the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The disclosed embodiment provides a device for playing video, which may include:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring associated information, wherein the associated information indicates a corresponding relation between the first human behavior and a target moment;
receiving a second human body behavior input by a user in the process of playing the video;
positioning a playing time corresponding to the second human behavior based on the associated information;
and playing the target video according to the playing time, wherein the target video is the video obtained by video recording.
Fig. 8 is a block diagram illustrating an apparatus 1000 for playing video according to an example embodiment. The apparatus 1000 may be applied to the playback device 30 in the implementation environment shown in fig. 1.
Referring to fig. 8, the apparatus 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communication component 1016.
The processing component 1002 generally controls the overall operation of the apparatus 1000, such as data communications, camera operations, and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods shown in fig. 2 or fig. 3-1. Further, the processing component 1002 may include one or more modules that facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the apparatus 1000. Examples of such data include instructions, messages, pictures, videos, etc. for any application or method operating on device 1000. The memory 1004 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the device 1000. The power components 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC) configured to receive external audio signals when the apparatus 1000 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, the audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the device 1000. For example, sensor assembly 1014 may detect an open/closed state of device 1000, the relative positioning of components, such as a display and keypad of device 1000, the change in position of device 1000 or a component of device 1000, the presence or absence of user contact with device 1000, the orientation or acceleration/deceleration of device 1000, and the change in temperature of device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate communications between the apparatus 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the method of playing video shown in fig. 2 or 3-1.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the apparatus 1000 to perform the method of playing video shown in fig. 2 or fig. 3-1 described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of device 1000, enable device 1000 to perform the method of playing video shown in fig. 2 or fig. 3-1.
To sum up, the apparatus for playing a video provided by the embodiments of the present disclosure can acquire the association information, receive a second human behavior input by the user while the video is being played, locate the playing time corresponding to the second human behavior based on the association information, and then play the target video according to that playing time, where the target video is a video obtained through video recording. The association information indicates a correspondence between the first human behavior and the target time. As a result, the user does not need to play back the recorded video on the playing device to find the content of interest, which simplifies the search process and improves search efficiency.
The disclosed embodiment provides a device for playing video, which may include:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
detecting a first human behavior during a video recording process;
in response to detecting the first human behavior, acquiring a target time at which the first human behavior is detected, wherein the target time is measured from the time at which video recording started;
and acquiring association information indicating the corresponding relation between the first human body behavior and the target time so as to position the playing time corresponding to the input second human body behavior based on the association information when the target video is played, wherein the target video is the video acquired through video recording.
The disclosed embodiment also provides a video playing apparatus, which can be applied to the monitoring device 10 in the implementation environment shown in fig. 1. Referring to fig. 8, the apparatus may include a processing component, a memory, a power supply component, a multimedia component, an input/output (I/O) interface, a sensor component, and a communication component.
The processing component may include one or more processors that execute instructions to perform all or part of the steps of the method of playing video shown in fig. 4 or fig. 5.
In an exemplary embodiment, the apparatus may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the method of playing video shown in fig. 4 or 5.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as a memory including instructions, executable by a processor of a device to perform the method of playing video shown in fig. 4 or fig. 5 described above is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer readable storage medium, wherein instructions of the storage medium, when executed by a processor of an apparatus, enable the apparatus to perform the method of playing a video shown in fig. 4 or fig. 5.
To sum up, the apparatus for playing a video provided by the embodiment of the present disclosure can detect a first human behavior during a video recording process, obtain a target time at which the first human behavior is detected in response to detecting the first human behavior, and then obtain association information, where the association information indicates a correspondence between the first human behavior and the target time. Therefore, when the playing device plays the target video, the playing time corresponding to an input second human behavior can be located based on the association information, so that when searching for content of interest, the user does not need to replay the entire recorded video on the playing device; the searching process is thereby simplified and the searching efficiency is improved.
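As a purely illustrative, non-limiting sketch of this recording-side flow (the behavior names, key point values, and exact-equality matching rule are hypothetical simplifications, not the claimed implementation), the following Python example records an association entry whenever a detected behavior's key point data matches an entry of the predetermined behavior set:

```python
import time

# Hypothetical predetermined behavior set: behavior names mapped to simplified
# key point data (tuples standing in for extracted body key points).
PREDETERMINED_BEHAVIORS = {
    "wave": (1, 2, 3),
    "clap": (4, 5, 6),
}

class BehaviorRecorder:
    """Builds association information while a video is being recorded."""

    def __init__(self):
        self.recording_start = time.monotonic()
        self.association_info = []  # (behavior name, target time in seconds)

    def on_behavior_detected(self, key_points):
        """Called for each detected (fourth) human behavior. If its key point
        data matches an entry of the predetermined set, treat it as a first
        human behavior and record the elapsed time since recording started."""
        for name, reference_points in PREDETERMINED_BEHAVIORS.items():
            if key_points == reference_points:
                target_time = time.monotonic() - self.recording_start
                self.association_info.append((name, target_time))
                return name, target_time
        return None  # behavior not in the predetermined set: ignore it

recorder = BehaviorRecorder()
recorder.on_behavior_detected((4, 5, 6))   # matches "clap"
print(recorder.association_info)
```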
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method of playing a video, the method comprising:
acquiring association information, wherein the association information indicates a corresponding relationship between a first human behavior and a target time, the association information is established according to the detected first human behavior and the time when the first human behavior is detected in a video recording process, the first human behavior is a human behavior which is judged to be included in a preset behavior set based on behavior characteristics of a fourth human behavior, and the fourth human behavior is a human behavior detected in a video recording process;
in the video playing process, receiving a third human behavior input by a user, obtaining the preset behavior set in response to receiving the third human behavior, judging whether the preset behavior set comprises the third human behavior based on behavior characteristics of the third human behavior, and determining the third human behavior as a second human behavior when the preset behavior set comprises the third human behavior, wherein the third human behavior is a human behavior of the user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
positioning playing time corresponding to the second human behavior based on the associated information;
playing a target video according to the playing time, wherein the target video is a video obtained through video recording;
wherein the judging whether the preset behavior set comprises the third human behavior based on the behavior characteristics of the third human behavior comprises:
extracting key point data of the third human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the third human behavior.
2. The method of claim 1, wherein locating the playing time corresponding to the second human behavior based on the association information comprises:
acquiring a first human behavior matched with the second human behavior in the associated information;
acquiring a target moment corresponding to the first human behavior from the associated information;
and taking the target moment as a playing moment corresponding to the second human behavior.
3. The method of claim 1, wherein the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
4. A method of playing a video, the method comprising:
detecting a fourth human behavior in a video recording process, acquiring a preset behavior set in response to detecting the fourth human behavior, judging whether the preset behavior set comprises the fourth human behavior based on behavior characteristics of the fourth human behavior, and determining the fourth human behavior as a first human behavior when the preset behavior set comprises the fourth human behavior;
in response to the detection of the first human behavior, acquiring a target time when the first human behavior is detected, wherein the target time takes the time when video recording starts as a starting time;
acquiring association information, wherein the association information indicates a correspondence between the first human behavior and the target time, so that when a target video is played, a playing time corresponding to an input second human behavior is located based on the association information, wherein the target video is a video acquired through video recording, the second human behavior is a human behavior which is judged to be included in the preset behavior set based on behavior characteristics of a third human behavior, and the third human behavior is a human behavior of a user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
wherein the judging whether the preset behavior set comprises the fourth human behavior based on the behavior characteristics of the fourth human behavior comprises:
extracting key point data of the fourth human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the fourth human behavior.
5. The method of claim 4, wherein the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
6. An apparatus for playing video, the apparatus comprising:
an obtaining module configured to obtain associated information indicating a correspondence between a first human behavior and a target time, the associated information being established during a video recording process according to the detected first human behavior and a time at which the first human behavior is detected, the first human behavior being a human behavior that is determined, based on a behavior feature of a fourth human behavior, to be included in a predetermined behavior set, and the fourth human behavior being a human behavior detected during the video recording process;
a receiving module configured to receive a third human behavior input by a user in a video playing process, acquire the predetermined behavior set in response to receiving the third human behavior, judge whether the predetermined behavior set includes the third human behavior based on behavior characteristics of the third human behavior, and determine the third human behavior as a second human behavior when the predetermined behavior set includes the third human behavior, wherein the third human behavior is a human behavior of the user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
a positioning module configured to position a playing time corresponding to the second human behavior received by the receiving module, based on the associated information obtained by the obtaining module;
a playing module configured to play a target video according to the playing time positioned by the positioning module, wherein the target video is a video obtained through video recording;
the apparatus is further configured to perform the following:
extracting key point data of the third human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the predetermined behavior set; and if the key point data is the same as the key point data of the human behaviors in the predetermined behavior set, determining that the predetermined behavior set includes the third human behavior.
7. The apparatus of claim 6, wherein the positioning module is configured to:
acquiring a first human behavior matched with the second human behavior in the associated information;
acquiring a target moment corresponding to the first human behavior from the associated information;
and taking the target moment as a playing moment corresponding to the second human behavior.
8. The apparatus of claim 6, wherein the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
9. An apparatus for playing video, the apparatus comprising:
a detection module configured to detect a fourth human behavior in a video recording process, obtain a predetermined behavior set in response to detecting the fourth human behavior, judge whether the predetermined behavior set comprises the fourth human behavior based on behavior characteristics of the fourth human behavior, and determine the fourth human behavior as a first human behavior when the predetermined behavior set comprises the fourth human behavior;
a first obtaining module configured to obtain a target time when the first human behavior is detected in response to the detection module detecting the first human behavior, the target time taking a time when video recording starts as a starting time;
a second obtaining module configured to obtain association information, wherein the association information indicates a correspondence between the first human behavior detected by the detection module and the target time obtained by the first obtaining module, so that when a target video is played, a playing time corresponding to an input second human behavior is located based on the association information, wherein the target video is a video obtained through video recording, the second human behavior is a human behavior which is judged to be included in the predetermined behavior set based on behavior characteristics of a third human behavior, and the third human behavior is a human behavior of a user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
the apparatus is further configured to perform the following:
extracting key point data of the fourth human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the predetermined behavior set; and if the key point data is the same as the key point data of the human behaviors in the predetermined behavior set, determining that the predetermined behavior set comprises the fourth human behavior.
10. The apparatus of claim 9, wherein the first human behavior comprises at least one of: facial expressions, limb movements, vocal expressions.
11. An apparatus for playing video, the apparatus comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring association information, wherein the association information indicates a corresponding relationship between a first human behavior and a target time, the association information is established according to the detected first human behavior and the time when the first human behavior is detected in a video recording process, the first human behavior is a human behavior which is judged to be included in a preset behavior set based on behavior characteristics of a fourth human behavior, and the fourth human behavior is a human behavior detected in a video recording process;
in the video playing process, receiving a third human behavior input by a user, obtaining the preset behavior set in response to receiving the third human behavior, judging whether the preset behavior set comprises the third human behavior based on behavior characteristics of the third human behavior, and determining the third human behavior as a second human behavior when the preset behavior set comprises the third human behavior, wherein the third human behavior is a human behavior of the user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
positioning playing time corresponding to the second human behavior based on the associated information;
playing a target video according to the playing time, wherein the target video is a video obtained through video recording;
the processor is further configured to:
extracting key point data of the third human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the third human behavior.
12. An apparatus for playing video, the apparatus comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
detecting a fourth human behavior in a video recording process, acquiring a preset behavior set in response to detecting the fourth human behavior, judging whether the preset behavior set comprises the fourth human behavior based on behavior characteristics of the fourth human behavior, and determining the fourth human behavior as a first human behavior when the preset behavior set comprises the fourth human behavior;
in response to the detection of the first human behavior, acquiring a target time when the first human behavior is detected, wherein the target time takes the time when video recording starts as a starting time;
acquiring association information, wherein the association information indicates a correspondence between the first human behavior and the target time, so that when a target video is played, a playing time corresponding to an input second human behavior is located based on the association information, wherein the target video is a video acquired through video recording, the second human behavior is a human behavior which is judged to be included in the preset behavior set based on behavior characteristics of a third human behavior, and the third human behavior is a human behavior of a user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
the processor is further configured to:
extracting key point data of the fourth human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the fourth human behavior.
13. A computer-readable storage medium having instructions stored thereon, which when executed by a processor of an apparatus for playing video, enable the apparatus to perform a method of playing video, the method comprising:
acquiring association information, wherein the association information indicates a corresponding relationship between a first human behavior and a target time, the association information is established according to the detected first human behavior and the time when the first human behavior is detected in a video recording process, the first human behavior is a human behavior which is judged to be included in a preset behavior set based on behavior characteristics of a fourth human behavior, and the fourth human behavior is a human behavior detected in a video recording process;
in the video playing process, receiving a third human behavior input by a user, obtaining the preset behavior set in response to receiving the third human behavior, judging whether the preset behavior set comprises the third human behavior based on behavior characteristics of the third human behavior, and determining the third human behavior as a second human behavior when the preset behavior set comprises the third human behavior, wherein the third human behavior is a human behavior of the user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
positioning playing time corresponding to the second human behavior based on the associated information;
playing a target video according to the playing time, wherein the target video is a video obtained through video recording;
wherein the judging whether the preset behavior set comprises the third human behavior based on the behavior characteristics of the third human behavior comprises:
extracting key point data of the third human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the third human behavior.
14. A computer-readable storage medium having instructions stored thereon, which when executed by a processor of an apparatus for playing video, enable the apparatus to perform a method of playing video, the method comprising:
detecting a fourth human behavior in a video recording process, acquiring a preset behavior set in response to detecting the fourth human behavior, judging whether the preset behavior set comprises the fourth human behavior based on behavior characteristics of the fourth human behavior, and determining the fourth human behavior as a first human behavior when the preset behavior set comprises the fourth human behavior;
in response to the detection of the first human behavior, acquiring a target time when the first human behavior is detected, wherein the target time takes the time when video recording starts as a starting time;
acquiring association information, wherein the association information indicates a correspondence between the first human behavior and the target time, so that when a target video is played, a playing time corresponding to an input second human behavior is located based on the association information, wherein the target video is a video acquired through video recording, the second human behavior is a human behavior which is judged to be included in the preset behavior set based on behavior characteristics of a third human behavior, and the third human behavior is a human behavior of a user acquired by a camera or a human behavior corresponding to a human behavior name input by the user in a displayed text input box;
wherein the judging whether the preset behavior set comprises the fourth human behavior based on the behavior characteristics of the fourth human behavior comprises:
extracting key point data of the fourth human behavior; judging whether the key point data is the same as the key point data of the human behaviors in the preset behavior set; and if the key point data is the same as the key point data of the human behaviors in the preset behavior set, determining that the preset behavior set comprises the fourth human behavior.
CN201710633215.0A 2017-07-28 2017-07-28 Method and device for playing video Active CN107454359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710633215.0A CN107454359B (en) 2017-07-28 2017-07-28 Method and device for playing video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710633215.0A CN107454359B (en) 2017-07-28 2017-07-28 Method and device for playing video

Publications (2)

Publication Number Publication Date
CN107454359A CN107454359A (en) 2017-12-08
CN107454359B true CN107454359B (en) 2020-12-04

Family

ID=60490440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710633215.0A Active CN107454359B (en) 2017-07-28 2017-07-28 Method and device for playing video

Country Status (1)

Country Link
CN (1) CN107454359B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109350084A (en) * 2018-12-04 2019-02-19 安徽阳光心健科技发展有限公司 A kind of psychological test device and its test method
CN112019789B (en) * 2019-05-31 2022-05-31 杭州海康威视数字技术股份有限公司 Video playback method and device
CN114157914A (en) * 2021-11-30 2022-03-08 深圳Tcl数字技术有限公司 Multimedia playing method, device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102413321A (en) * 2011-12-26 2012-04-11 浙江省电力公司 Automatic image-recording system and method
CN104580974A (en) * 2015-01-30 2015-04-29 成都华迈通信技术有限公司 Intelligent monitoring video playback method
CN104837059A (en) * 2014-04-15 2015-08-12 腾讯科技(北京)有限公司 Video processing method, device and system
CN104980677A (en) * 2014-04-02 2015-10-14 联想(北京)有限公司 Method and device for adding label into video
US9465444B1 (en) * 2014-06-30 2016-10-11 Amazon Technologies, Inc. Object recognition for gesture tracking

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303331B (en) * 2016-08-18 2020-01-10 腾讯科技(深圳)有限公司 Video recording method, terminal, system and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102413321A (en) * 2011-12-26 2012-04-11 浙江省电力公司 Automatic image-recording system and method
CN104980677A (en) * 2014-04-02 2015-10-14 联想(北京)有限公司 Method and device for adding label into video
CN104837059A (en) * 2014-04-15 2015-08-12 腾讯科技(北京)有限公司 Video processing method, device and system
US9465444B1 (en) * 2014-06-30 2016-10-11 Amazon Technologies, Inc. Object recognition for gesture tracking
CN104580974A (en) * 2015-01-30 2015-04-29 成都华迈通信技术有限公司 Intelligent monitoring video playback method

Also Published As

Publication number Publication date
CN107454359A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN105845124B (en) Audio processing method and device
CN103885588B (en) Automatic switching method and device
CN105302315A (en) Image processing method and device
CN106331761A (en) Live broadcast list display method and apparatuses
CN106921791B (en) Multimedia file storage and viewing method and device and mobile terminal
CN103944804A (en) Contact recommending method and device
CN107666536B (en) Method and device for searching terminal
CN103955481A (en) Picture displaying method and device
CN106101629A (en) The method and device of output image
CN106409317B (en) Method and device for extracting dream speech
CN107454359B (en) Method and device for playing video
CN105550643A (en) Medical term recognition method and device
CN106550252A (en) The method for pushing of information, device and equipment
CN106331328B (en) Information prompting method and device
CN105100193A (en) Cloud business card recommendation method and device
CN106341712A (en) Processing method and apparatus of multimedia data
CN106130873A (en) Information processing method and device
CN106406175A (en) Prompting method and apparatus for door opening
US10810439B2 (en) Video identification method and device
CN104836721A (en) Group session message reminding method and group session message reminding device
CN105163141B (en) The mode and device of video recommendations
CN110673917A (en) Information management method and device
US20170034347A1 (en) Method and device for state notification and computer-readable storage medium
CN104166692A (en) Method and device for adding labels on photos
CN104240274B (en) Face image processing process and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant