CN112492096A - Video playing control method and device, electronic equipment and storage medium

Info

Publication number
CN112492096A
Authority
CN
China
Prior art keywords
video
user
function library
expression
video player
Prior art date
Legal status
Pending
Application number
CN202011346499.3A
Other languages
Chinese (zh)
Inventor
朴惠姝
邓竹立
彭飞
Current Assignee
Beijing 58 Information Technology Co Ltd
Original Assignee
Beijing 58 Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing 58 Information Technology Co Ltd filed Critical Beijing 58 Information Technology Co Ltd
Priority to CN202011346499.3A
Publication of CN112492096A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G06V 40/176 Dynamic expression
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Abstract

The invention provides a video playing control method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: in response to any video player for which control through facial recognition has been enabled being in a started state, monitoring the user's expression through a camera to capture the user's facial actions in real time; in response to a target expression function library containing the facial action type of a currently detected facial action, obtaining the control operation type associated with the facial action based on the target expression function library, and executing the video control operation corresponding to the control operation type on the video player. The target expression function library is an expression function library associated with the video player, and the expression function library comprises at least one mapping between a facial action type and a control operation type. This provides a more convenient way to control a video player, one that is not disturbed by the audio of the video being played, and thereby supports more convenient and accurate video playing control.

Description

Video playing control method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video playing control method and apparatus, an electronic device, and a storage medium.
Background
At present, mobile terminals such as mobile phones can provide users with rich multimedia content: users can watch streaming media programs or play locally stored videos through the mobile terminal. Most apps currently have their own control interface for playing video, with function controls such as play/pause, a progress bar, fast-forward/rewind, and next video. When users encounter content they are not interested in while watching a video, they have to actively close the video, adjust the playback progress, switch to another video, and so on.
However, for some people with limited mobility it can be difficult to operate the small function buttons in a video page. And if the video player is instead operated through voice recognition, the user's voice input may clash with the audio of the video, so the recognition result may be inaccurate, which in turn affects the accuracy of the video control response.
Disclosure of Invention
Embodiments of the present invention provide a video playing control method and apparatus, an electronic device, and a storage medium, so as to solve the problems that existing video playing control methods are inconvenient for people with limited mobility to operate and that response accuracy during video control is easily affected.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video playing control method, including:
in response to any video player for which control through facial recognition has been enabled being in a started state, monitoring the user's expression through a camera to capture the user's facial actions in real time;
in response to a target expression function library containing the facial action type of a currently detected facial action, obtaining the control operation type associated with the facial action based on the target expression function library, and executing the video control operation corresponding to the control operation type on the video player;
wherein the target expression function library is an expression function library associated with the video player, and the expression function library comprises at least one mapping between a facial action type and a control operation type.
Optionally, before the step of, in response to a target expression function library containing the facial action type of the currently detected facial action, obtaining the control operation type associated with the facial action based on the target expression function library and executing the video control operation corresponding to the control operation type on the video player, the method further includes:
acquiring a general expression function library associated with the video player, and constructing a target expression function library associated with the video player by combining it with a custom expression function library set locally on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library that is set uniformly and distributed through a server, and the custom expression function library is an expression function library customized by the user for the video player.
Optionally, the step of obtaining a control operation type associated with the facial action based on the target expression function library, and executing a video control operation corresponding to the control operation type for the video player includes:
obtaining a control operation type associated with the facial action from the target expression function library;
and calling a video control method corresponding to the control operation type, and executing the video control operation on the video player through the video control method.
Optionally, the step of, in response to the video player being in a started state, monitoring the user's expression through a camera to capture the user's facial actions in real time includes:
requesting the expression monitoring permission each time the video player enters a video playing interface after being started;
in response to obtaining the expression monitoring permission authorized by the user, starting a camera to monitor the user's expression so as to capture the user's facial actions in real time;
the method further comprises the following steps:
in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period, stopping monitoring the user's expression.
Optionally, the method further comprises:
in response to receiving a trigger instruction for any video operation control in the video player, executing the video control operation corresponding to the video operation control on the video player.
Optionally, the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and keeping the eyes closed for a specified duration, where N is a positive integer greater than 1; the control operation types include at least one of: switching between the pause state and the play state, switching between the landscape state and the portrait state, adjusting the video playing progress, and adjusting the video volume.
In a second aspect, an embodiment of the present invention provides a video playback control apparatus, including:
a user expression monitoring module, configured to, for any video player for which control through facial recognition has been enabled, respond to the video player being in a started state by monitoring the user's expression through a camera so as to capture the user's facial actions in real time;
a first video playing control module, configured to, in response to a target expression function library containing the facial action type of a currently detected facial action, obtain the control operation type associated with the facial action based on the target expression function library, and execute the video control operation corresponding to the control operation type on the video player;
wherein the target expression function library is an expression function library associated with the video player, and the expression function library comprises at least one mapping between a facial action type and a control operation type.
Optionally, the apparatus further comprises:
an expression function library setting module, configured to acquire a general expression function library associated with the video player, and construct a target expression function library associated with the video player by combining it with a custom expression function library set locally on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library that is set uniformly and distributed through a server, and the custom expression function library is an expression function library customized by the user for the video player.
Optionally, the first video playback control module includes:
an operation type acquisition sub-module, configured to acquire a control operation type associated with the facial action from the target expression function library;
and the video playing control sub-module is used for calling a video control method corresponding to the control operation type and executing the video control operation on the video player through the video control method.
Optionally, the user expression monitoring module includes:
a monitoring permission acquisition sub-module, configured to, after the video player is started, request the expression monitoring permission each time a video playing interface is entered;
a user expression monitoring sub-module, configured to, in response to obtaining the expression monitoring permission authorized by the user, start a camera to monitor the user's expression so as to capture the user's facial actions in real time;
the device further comprises:
a stop monitoring module, configured to stop monitoring the user's expression in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period.
Optionally, the apparatus further comprises:
a second video playing control module, configured to, in response to receiving a trigger instruction for any video operation control in the video player, execute the video control operation corresponding to the video operation control on the video player.
Optionally, the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and keeping the eyes closed for a specified duration, where N is a positive integer greater than 1; the control operation types include at least one of: switching between the pause state and the play state, switching between the landscape state and the portrait state, adjusting the video playing progress, and adjusting the video volume.
In a third aspect, an embodiment of the present invention additionally provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video playback control method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the video playback control method according to the first aspect.
In the embodiments of the invention, an expression function library is preset for the video player. Throughout video playback, the camera is kept on for facial expression recognition so as to monitor the user's facial actions, and when the user performs a facial action found in the expression function library, the corresponding function is invoked to operate the video player. This provides a convenient video player control entry for people with limited mobility and facilitates video playing control. In addition, controlling the video player through facial actions involves fewer interference factors than controlling it through voice recognition, so operating the video player through facial actions is more accurate.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
Fig. 1 is a flowchart illustrating steps of a video playback control method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating steps of another video playback control method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of video playing control through facial movements according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video playback control apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another video playback control apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a video playing control method according to an embodiment of the present invention is shown.
Step 110: in response to any video player for which control through facial recognition has been enabled being in a started state, monitor the user's expression through a camera to capture the user's facial actions in real time;
Step 120: in response to the target expression function library containing the facial action type of the currently detected facial action, obtain the control operation type associated with the facial action based on the target expression function library, and execute the video control operation corresponding to the control operation type on the video player;
The target expression function library is an expression function library associated with the video player, and the expression function library comprises at least one mapping between a facial action type and a control operation type.
In the embodiments of the invention, so that people with limited mobility can trigger control operations on a video at any moment during playback, the facial recognition function allows the user, by performing a certain facial action, to control a given function of the video player, making the video player execute functions such as play, pause, fast-forward, and full screen.
First, the control operations corresponding to facial actions need to be set, and the expression function library for the corresponding video player needs to be generated. To avoid accidental triggering, facial actions that some users may perform frequently, such as a single blink, should be avoided as far as possible when setting up the expression function library.
The control operation corresponding to a facial action may first be defined by the developer; optionally, the user may also customize it, and the embodiments of the invention are not limited in this respect. Different video players may share the same expression function library, or each may define its own, which is likewise not limited. During video playing control, the expression function library may be stored locally on a mobile terminal such as a mobile phone or computer, or it may be kept online and accessed over the network; the embodiments of the invention are not limited in this respect.
Each expression function library may include at least one mapping between a facial action type and a control operation type.
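To make that structure concrete, the following is a minimal sketch, in Kotlin, of one way such a library could be represented; the patent prescribes no particular data structure, and every identifier and sample mapping below is an illustrative assumption.

```kotlin
// Illustrative facial action types and control operation types, mirroring the
// examples named later in this disclosure.
enum class FacialActionType {
    BLINK_N_TIMES, OPEN_MOUTH, SHAKE_HEAD_LEFT, SHAKE_HEAD_RIGHT,
    RAISE_HEAD, LOWER_HEAD, EYES_CLOSED_FOR_DURATION
}

enum class ControlOperationType {
    TOGGLE_PLAY_PAUSE, TOGGLE_ORIENTATION, ADJUST_PROGRESS, ADJUST_VOLUME
}

// An expression function library is simply a set of mappings from facial
// action type to control operation type.
typealias ExpressionFunctionLibrary = Map<FacialActionType, ControlOperationType>

// A hypothetical general library as it might arrive from the server.
val generalLibrary: ExpressionFunctionLibrary = mapOf(
    FacialActionType.OPEN_MOUTH to ControlOperationType.TOGGLE_PLAY_PAUSE,
    FacialActionType.SHAKE_HEAD_RIGHT to ControlOperationType.ADJUST_PROGRESS
)
```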
For any video player for which control through facial recognition has been enabled: if the video player is in a started state, the user's expression can be monitored through a camera to capture the user's facial actions in real time; if the facial action type of the currently detected facial action is contained in the target expression function library, the control operation type associated with the facial action can be obtained based on the target expression function library, and the video control operation corresponding to the control operation type is executed on the video player.
The video player may be any independent APP (application) capable of playing video on the mobile terminal, or a component or plug-in inside an APP; the embodiments of the invention are not limited in this respect. Likewise, control of the video player through facial recognition may be triggered in any available manner.
For example, when the user starts the video player for the first time and enters a video page, the user may be asked whether to allow operating the video player by recognizing facial expressions. If the user allows control of the video player through facial recognition, the facial recognition function can then be turned on whenever a video starts loading while the video player is running; that is, the user's expression is monitored through the camera to capture the user's facial actions in real time. If the user does not allow it, the video player is operated through the controls in the page, as on an ordinary video page.
When facial recognition is allowed, i.e., control of the video player through facial recognition has been enabled, the facial recognition function can stay on for the whole time a video is playing; that is, the user's expression is monitored through the camera throughout playback to capture the user's facial actions in real time. When it is detected that the user has performed a facial action found in the expression function library corresponding to the video player, i.e., the target expression function library, the control operation type associated with that facial action can be obtained from the target expression function library, and the video control operation corresponding to that control operation type is then executed on the corresponding video player.
In the embodiment of the present invention, the expression of the user may be recognized in any available manner, which is not limited in the embodiment of the present invention.
In addition, a facial action in the embodiments of the invention may be understood as a static facial expression, such as an open mouth, a smile, or one eye closed and one eye open; it may also be understood as a dynamic facial expression, i.e., a facial movement produced while the position of the face changes, such as shaking the head left and right, nodding or raising the head, keeping the eyes closed, or repeatedly opening the mouth. The embodiments of the invention are not limited in this respect.
Referring to fig. 2, in the embodiment of the present invention, before the step 120, the method may further include:
Step 10: acquire a general expression function library associated with the video player, and construct a target expression function library associated with the video player by combining it with a custom expression function library set locally on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library that is set uniformly and distributed through a server, and the custom expression function library is an expression function library customized by the user for the video player.
In the embodiments of the present invention, in order to meet the personalized needs of different users, the user of the mobile terminal where the video player is located may optionally customize mappings between facial action types and control operation types, for example, a mapping between the facial action type "eyes closed for 5 s or more" and the control operation type "pause video playback".
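As a sketch, that custom mapping could be expressed with the illustrative types from the earlier snippet; the 5 s threshold would live in the recognition layer, so here the action type alone stands in for "eyes closed for 5 s or more", which is an assumption of this sketch.

```kotlin
// A hypothetical locally stored custom library: keeping the eyes closed for
// the specified duration (e.g. 5 s or more) toggles between pause and play.
val customLibrary: ExpressionFunctionLibrary = mapOf(
    FacialActionType.EYES_CLOSED_FOR_DURATION to ControlOperationType.TOGGLE_PLAY_PAUSE
)
```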
In addition, if every user had to define all of the mappings between facial action types and control operation types from scratch, building the expression function library would take a long time, and repeatedly setting up mappings that largely overlap would waste resources. Therefore, in the embodiments of the present invention, the developer of the video player may also define a general expression function library in advance through the server, where the general expression function library may include at least one mapping between a general facial action type and a control operation type.
When the target expression function library of a video player is constructed, in order to improve efficiency while still meeting the user's personalized needs, the general expression function library associated with the video player can be acquired and combined with the custom expression function library set locally on the mobile terminal where the video player is located to construct the target expression function library associated with the video player; the general expression function library is set uniformly and distributed through a server, and the custom expression function library is customized by the user for the video player.
Moreover, when a target expression function library is constructed, conflicts may arise: the general expression function library and the custom expression function library may simultaneously contain different facial action types associated with the same control operation type, or several different control operation types associated with the same facial action type, and so on, which could make video playing control ambiguous.
Therefore, in the embodiments of the present invention, to avoid the above problems, priorities can be set for the general expression function library and the custom expression function library. When the target expression function library is constructed, the mappings in the two libraries are merged; where a conflict occurs, the mapping in the higher-priority library prevails, and the conflicting mapping in the lower-priority library is ignored.
For example, assume that the custom expression function library has higher priority than the general one, that the custom library contains mapping 1 between facial action type A1 and control operation type B1 and mapping 2 between facial action type A2 and control operation type B2, and that the general library contains a mapping between facial action type A1 and control operation type B2. When the target expression function library is constructed, since the general library's mapping conflicts with mapping 1 in the custom library (the same facial action type A1 is bound to different control operation types), the general library's mapping can be ignored, and the target expression function library retains mapping 1 and mapping 2 from the custom library.
In addition, in practical applications, if the conflict between mappings is merely that different facial action types are associated with the same control operation type, i.e., the same control operation can be triggered by several different facial actions, this causes no confusion for the user, so such mappings can simply be merged without further processing.
For example, assume again that the custom expression function library has higher priority than the general one, that the custom library contains mapping 1 between facial action type A1 and control operation type B1 and mapping 2 between facial action type A2 and control operation type B2, and that the general library contains mapping 3 between facial action type A3 and control operation type B1 and mapping 4 between facial action type A1 and control operation type B2. Then, when the target expression function library is constructed, mapping 1 from the custom library and mapping 3 from the general library can be combined, so that facial action types A1 and A3 both map to control operation type B1; mapping 2 from the custom library is retained, while mapping 4 from the general library is ignored.
If the custom expression function library instead has lower priority than the general expression function library, the process of constructing the target expression function library can be determined by analogy with the above and is not repeated here.
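The merge just described can be sketched as follows, reusing the illustrative types defined earlier and assuming the custom library outranks the general one. A lower-priority mapping is dropped exactly when its facial action type is already bound in the higher-priority library, while different actions bound to the same operation are all kept, matching the two cases discussed above.

```kotlin
// Merge two expression function libraries by priority. A conflict, i.e. the
// same facial action type bound to different control operation types, is
// resolved in favour of the higher-priority library; mappings that merely
// share a control operation type are all retained.
fun buildTargetLibrary(
    highPriority: ExpressionFunctionLibrary,  // e.g. the custom library
    lowPriority: ExpressionFunctionLibrary    // e.g. the general library
): ExpressionFunctionLibrary {
    val merged = highPriority.toMutableMap()
    for ((action, operation) in lowPriority) {
        if (action !in merged) {
            merged[action] = operation  // no conflict: keep the low-priority mapping
        }
        // else: the conflicting low-priority mapping is ignored
    }
    return merged
}

// Usage under the same assumptions: custom entries win on conflict, and the
// non-conflicting general entries are merged in.
val targetLibrary = buildTargetLibrary(customLibrary, generalLibrary)
```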
Of course, in the embodiment of the present invention, the target expression function library associated with the video player may also be constructed only based on the general expression function library associated with the video player, or only based on the user-defined expression function library locally set in the mobile terminal where the video player is located, which is not limited in the embodiment of the present invention.
Referring to fig. 2, in the embodiment of the present invention, the step 120 may further include:
step 121, obtaining a control operation type associated with the facial action from the target expression function library;
and step 122, calling a video control method corresponding to the control operation type, and executing the video control operation on the video player through the video control method.
In practical applications, each video player generally already has video control methods of its own for performing video control operations such as play, pause, fast-forward, and full screen. Therefore, in the embodiments of the present invention, to avoid re-implementing video control methods for these operations when performing video playing control based on facial actions, the existing video control methods of the corresponding video player can be called directly, i.e., the same video control methods that can be invoked through the video playing controls in the video player.
Specifically, the control operation type associated with the currently detected facial action can be obtained from the target expression function library, the video control method corresponding to that control operation type is called, and the video control operation is executed on the video player through that video control method.
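This lookup-and-dispatch step might look as follows, reusing the earlier illustrative types. The VideoPlayer interface is a hypothetical stand-in for whatever play/pause/orientation/seek/volume methods the real player already exposes through its on-screen controls; none of these names come from the patent itself.

```kotlin
// Hypothetical facade over the player's existing video control methods.
interface VideoPlayer {
    fun togglePlayPause()
    fun toggleOrientation()
    fun adjustProgress()
    fun adjustVolume()
}

// Look up the control operation type for a detected facial action and invoke
// the player's existing control method for it. Actions that are not present
// in the target library are silently ignored.
fun handleFacialAction(
    action: FacialActionType,
    library: ExpressionFunctionLibrary,
    player: VideoPlayer
) {
    when (library[action] ?: return) {
        ControlOperationType.TOGGLE_PLAY_PAUSE -> player.togglePlayPause()
        ControlOperationType.TOGGLE_ORIENTATION -> player.toggleOrientation()
        ControlOperationType.ADJUST_PROGRESS -> player.adjustProgress()
        ControlOperationType.ADJUST_VOLUME -> player.adjustVolume()
    }
}
```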
Referring to fig. 2, in an embodiment of the present invention, the step 110 may further include:
Step 111: after the video player is started, request the expression monitoring permission each time a video playing interface is entered;
Step 112: in response to obtaining the expression monitoring permission authorized by the user, start the camera to monitor the user's expression so as to capture the user's facial actions in real time;
referring to fig. 2, in the embodiment of the present invention, the method may further include:
Step 130: in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period, stop monitoring the user's expression.
In practical applications, a user may not start watching a video immediately after launching the video player, but may instead linger on a non-playing page such as the home page or a list page. Starting the camera to monitor the user's expression at that point would waste resources and could easily cause the video player to execute video playing control operations incorrectly.
Therefore, in the embodiments of the present invention, the camera can be configured to start monitoring the user's expression, and thereby capture the user's facial actions in real time, only once the video player has entered a video playing interface; alternatively, each time the video player enters a video playing interface after being started, the expression monitoring permission can be requested, and the camera is started to monitor the user's expression only once the user has granted that permission.
The expression monitoring permission may be requested in any available manner, and the user may grant it in any available manner. For example, each time the video player enters a video playing interface after being started, an authorization prompt for the expression monitoring permission can be shown in the interface through a pop-up window or in some other way, and the user can complete authorization by tapping the pop-up window, by voice control, or otherwise. Of course, in the embodiments of the present invention, for the convenience of people with limited mobility, it may also be arranged that the user authorizes through a designated facial action. In that case, the camera can be started by default for a certain period after each entry into the video playing interface in order to monitor the user's expression. If the designated authorizing facial action is detected within the default-on period, i.e., authorization is obtained, the camera can be kept in its current started state and the user's expression continues to be monitored; if no designated authorizing facial action is detected within the default-on period, the user has not completed authorization, and the camera can be turned off. The default-on duration of the camera can be set as needed, and the embodiments of the invention are not limited in this respect; for example, it may be set to 30 seconds. While authorization is being verified, a countdown of the default-on period can also be displayed in the video playing interface, so that the user can perform the facial action in time.
After the camera has been started, if the video in the video playing interface finishes playing and playback is not triggered to continue within a specified time period, the camera can be turned off promptly and monitoring of the user's expression stopped, so as to avoid wasting resources.
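A sketch of the default-on authorization window described above, assuming Kotlin coroutines for the timer; startCamera, stopCamera, and awaitAuthorizingAction are hypothetical stand-ins for the real camera and recognition calls, and the 30-second default simply echoes the example in the text.

```kotlin
import kotlinx.coroutines.withTimeoutOrNull

// The camera runs by default for a limited window after the video playing
// interface is entered; only the designated authorizing facial action,
// detected within that window, keeps it on.
suspend fun requestAuthorizationByFace(
    startCamera: () -> Unit,
    stopCamera: () -> Unit,
    awaitAuthorizingAction: suspend () -> Boolean,  // suspends until detected
    defaultOnSeconds: Int = 30                      // configurable default-on duration
): Boolean {
    startCamera()
    val granted = withTimeoutOrNull(defaultOnSeconds * 1_000L) {
        awaitAuthorizingAction()
    } ?: false                 // window elapsed with no authorizing action
    if (!granted) stopCamera() // not authorized: turn the camera off again
    return granted             // true: keep the camera on and keep monitoring
}
```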
Referring to fig. 2, in the embodiment of the present invention, the method may further include:
step 140, in response to receiving a trigger instruction for any video operation control in the video player, executing a video control operation corresponding to the video operation control for the video player.
In addition, in the embodiments of the present invention, to meet different users' personalized preferences for how video playing is controlled, at least one video operation control may be provided in the video player, and the user may trigger any video operation control while watching a video, so that the video control operation corresponding to that control is executed on the video player.
Furthermore, a video control operation triggered through a video operation control and a different video control operation triggered through a facial action may occur at the same time, and the resulting conflict would affect the video control effect. To avoid this, priorities can be set as needed for operations triggered through video operation controls and operations triggered through facial actions, and in case of conflict the operation from the higher-priority path is executed; the embodiments of the present invention are not limited in this respect. Of course, the two kinds of operation may also simply be executed one after the other without setting priorities, which is likewise not limited.
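One possible arbitration between the two trigger paths is sketched below, assuming, purely for illustration, that control-triggered operations are given the higher priority; the text leaves the ordering configurable, and every identifier here is an assumption.

```kotlin
// Where both paths fire together, execute only the operation from the
// higher-priority trigger source (here: on-screen controls over faces).
enum class TriggerSource { OPERATION_CONTROL, FACIAL_ACTION }

fun resolveConflict(
    pending: List<Pair<TriggerSource, ControlOperationType>>
): ControlOperationType? =
    pending.minByOrNull { (source, _) ->
        if (source == TriggerSource.OPERATION_CONTROL) 0 else 1
    }?.second
```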
Optionally, in an embodiment of the present invention, the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and keeping the eyes closed for a specified duration, where N is a positive integer greater than 1; the control operation types include at least one of: switching between the pause state and the play state, switching between the landscape state and the portrait state, adjusting the video playing progress, and adjusting the video volume.
The switching operation between the pause state and the play state may include, but is not limited to, at least one of switching from pause to play and switching from play to pause. The switching operation between the landscape state and the portrait state may include, but is not limited to, at least one of switching from landscape to portrait and switching from portrait to landscape. The video playing progress adjustment operation may include, but is not limited to, at least one of moving the playing progress forward, moving the playing progress backward, increasing the playback speed, and decreasing the playback speed. The video volume adjustment operation may include, but is not limited to, at least one of increasing and decreasing the video volume. Furthermore, the mappings between facial action types and control operation types may be set as needed, and the embodiments of the present invention are not limited in this respect.
Fig. 3 is a schematic flow chart of controlling video playing through facial actions. A complete expression function library is generated by combining the expression function library defined by the developer with the expression functions customized by the user, stored locally, and used to operate the video player. The first time the user enters a video page (e.g., a video playing page, the video player's home page, or another designated page), the user is asked whether to allow operating the video player by recognizing facial expressions. If the user allows it, the facial recognition function is turned on when a video starts loading; if not, the video player is operated through the buttons in the page, as on an ordinary video page. When facial recognition is allowed, the facial recognition function stays on throughout video playback, the user's expression is monitored in real time, and when the user performs a facial action found in the expression function library, the method of the corresponding function is called to operate the video player. Expression monitoring also stops when the video ends.
The specific steps can be as follows:
the method comprises the following steps: the developer can provide an expression function library firstly, the expressions are not set as expressions which are likely to appear frequently by the user as far as possible, and optionally the user can customize the expressions of partial functions. And generating a complete expression function library by combining the expression function library provided by the developer and the functional expression set by the user, and storing the complete expression function library to the local. And provides an entrance of the expression function library which can be modified at any time, so that the user can modify the expression function library conveniently.
Step two: when the user enters a video page for the first time, ask whether to allow recognizing facial expressions to operate the video player; if the user selects allow, jump to step three, and if not, jump to step five. An entry point can be provided where the user can change, at any time, the option of whether to allow recognizing facial expressions to operate the video player.
Steps three to four: on entering a video page, turn on the facial recognition function and monitor the user's facial expression. During monitoring, if a facial action from the expression function library is captured, obtain the corresponding function, i.e., the corresponding control operation type, and jump to step six. If the user performs no facial action from the expression function library before the video ends, end the monitoring.
Step five: if the user does not allow operating the player through facial recognition, the player is operated in the conventional way, by clicking the buttons (i.e., the video operation controls) of the control interface. If the video finishes and the user exits without having clicked any button, the control interface is destroyed.
Step six: once the control operation type to be invoked has been obtained, whether through a button click or through a facial action performed by the user, call the corresponding video control method to operate the video player.
Step seven: according to the user's selection in step two, decide whether to continue monitoring facial expressions; if so, jump back to steps three to four, and if not, wait for the user's next button operation. If video playback has finished, stop all monitoring and destroy the page.
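Steps three, four, and six can be condensed into a monitoring loop along the following lines, reusing handleFacialAction from the earlier sketch; awaitNextFacialAction is a hypothetical recognition call assumed to return null once the video ends.

```kotlin
// Monitor facial actions for the whole playback and dispatch each recognized
// action to the player, stopping when the video finishes.
suspend fun monitorExpressions(
    library: ExpressionFunctionLibrary,
    player: VideoPlayer,
    awaitNextFacialAction: suspend () -> FacialActionType?
) {
    while (true) {
        val action = awaitNextFacialAction() ?: break  // video finished: stop
        handleFacialAction(action, library, player)    // step six
    }
    // Step seven: all monitoring has stopped; the page can be destroyed.
}
```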
In the embodiments of the invention, a complete expression function library is generated from the library provided by the developer, the user-customized expression functions, and so on, and stored locally as the expression function library used to operate the video player during playback. Throughout video playback, the camera is kept on for facial expression recognition so as to monitor the user's facial actions, and when the user performs a facial action found in the expression function library, the corresponding function is invoked to operate the video player. This provides a convenient video player control entry for people with limited mobility and facilitates video playing control. In addition, controlling the video player through facial actions involves fewer interference factors than controlling it through voice recognition, so operating the video player through facial actions is more accurate.
Referring to fig. 4, a schematic structural diagram of a video playback control apparatus in an embodiment of the present invention is shown.
The video playing control device of the embodiment of the invention comprises: a user expression monitoring module 210 and a first video playing control module 220.
The functions of the modules and the interaction relationship between the modules are described in detail below.
The user expression monitoring module 210 is configured to, for any video player for which control through facial recognition has been enabled, respond to the video player being in a started state by monitoring the user's expression through a camera so as to capture the user's facial actions in real time;
the first video playing control module 220 is configured to, in response to a target expression function library containing the facial action type of a currently detected facial action, obtain the control operation type associated with the facial action based on the target expression function library, and execute the video control operation corresponding to the control operation type on the video player;
wherein the target expression function library is an expression function library associated with the video player, and the expression function library comprises at least one mapping between a facial action type and a control operation type.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
the expression function library setting module 200 is configured to acquire a general expression function library associated with the video player, and construct a target expression function library associated with the video player by combining it with a custom expression function library set locally on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library that is set uniformly and distributed through a server, and the custom expression function library is an expression function library customized by the user for the video player.
Referring to fig. 5, in the embodiment of the present invention, the first video playing control module 220 may further include:
an operation type obtaining sub-module 221, configured to obtain a control operation type associated with the facial action from the target expression function library;
and the video playing control sub-module 222 is configured to invoke a video control method corresponding to the control operation type, and execute the video control operation on the video player through the video control method.
Referring to fig. 5, in the embodiment of the present invention, the user expression monitoring module 210 may further include:
the monitoring permission acquisition sub-module 211 is configured to, after the video player is started, request the expression monitoring permission each time a video playing interface is entered;
the user expression monitoring sub-module 212 is configured to, in response to obtaining the expression monitoring permission authorized by the user, start the camera to monitor the user's expression so as to capture the user's facial actions in real time;
accordingly, the apparatus may further include:
a stop monitoring module 230, configured to stop monitoring the user's expression in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
the second video playing control module 240 is configured to, in response to receiving a trigger instruction for any video operation control in the video player, execute a video control operation corresponding to the video operation control for the video player.
Optionally, the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and keeping the eyes closed for a specified duration, where N is a positive integer greater than 1; the control operation types include at least one of: switching between the pause state and the play state, switching between the landscape state and the portrait state, adjusting the video playing progress, and adjusting the video volume.
The video playing control device provided in the embodiment of the present invention can implement each process implemented in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
Preferably, an embodiment of the present invention further provides an electronic device, including: the processor, the memory, and the computer program stored in the memory and capable of running on the processor, when executed by the processor, implement each process of the above-mentioned video playing control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the video playing control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 510; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042, and the graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and may be capable of processing such sounds into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 6 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic device 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 500, or may be used to transmit data between the electronic device 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the device. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby monitoring the electronic device as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles the operating system, the user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware; in many cases, however, the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A video playback control method, comprising:
in response to any video player triggered to be controlled through facial recognition being in a started state, monitoring the expression of the user through a camera to acquire the facial actions of the user in real time;
in response to a target expression function library containing the facial action type of a currently detected facial action, acquiring a control operation type associated with the facial action based on the target expression function library, and executing, for the video player, a video control operation corresponding to the control operation type;
the target expression function library is an expression function library associated with the video player, and the expression function library comprises a mapping relation between at least one facial action type and a control operation type.
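By way of illustration only, the dispatch recited in claim 1 can be sketched in Kotlin as follows; every identifier below (FacialActionType, ControlOperationType, ExpressionFunctionLibrary, VideoPlayer) is a hypothetical name introduced for exposition, not part of the claimed implementation:

```kotlin
// Illustrative sketch only: enum members, type names, and the VideoPlayer
// interface are assumptions for exposition, not the patent's implementation.
enum class FacialActionType {
    N_CONSECUTIVE_BLINKS, MOUTH_OPEN, HEAD_SHAKE_LEFT, HEAD_SHAKE_RIGHT,
    HEAD_UP, HEAD_DOWN, EYES_CLOSED_FOR_DURATION
}

enum class ControlOperationType {
    TOGGLE_PLAY_PAUSE, TOGGLE_SCREEN_ORIENTATION,
    SEEK_FORWARD, SEEK_BACKWARD, VOLUME_UP, VOLUME_DOWN
}

// The "expression function library": a mapping from facial action types
// to control operation types.
typealias ExpressionFunctionLibrary = Map<FacialActionType, ControlOperationType>

interface VideoPlayer {
    fun execute(operation: ControlOperationType)
}

// Claim 1's dispatch: act only when the target library contains the
// detected facial action type; otherwise the action is ignored.
fun onFacialActionDetected(
    detected: FacialActionType,
    targetLibrary: ExpressionFunctionLibrary,
    player: VideoPlayer
) {
    val operation = targetLibrary[detected] ?: return
    player.execute(operation)
}
```

The null-safe lookup realizes the claim's condition that a control operation is executed only when the target expression function library contains the detected facial action type.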
2. The method of claim 1, wherein, before the step of, in response to the target expression function library containing the facial action type of the currently detected facial action, acquiring a control operation type associated with the facial action based on the target expression function library and executing a video control operation corresponding to the control operation type for the video player, the method further comprises:
acquiring a general expression function library associated with the video player, and constructing the target expression function library associated with the video player by combining it with a user-defined expression function library locally set on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library uniformly set and issued through a server, and the user-defined expression function library is an expression function library customized by the user for the video player.
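A minimal sketch of the library construction recited in claim 2, reusing the illustrative types above; the choice that user-defined entries override server-issued ones on conflicting facial action types is an assumption for illustration, not something the claim prescribes:

```kotlin
// Claim 2 sketch: merge the server-issued general expression function
// library with the user-defined library stored locally on the mobile
// terminal. Assumption: on a conflicting facial action type the
// user-defined entry wins (Kotlin's map plus operator keeps the
// right-hand value).
fun buildTargetLibrary(
    generalLibrary: ExpressionFunctionLibrary,
    userDefinedLibrary: ExpressionFunctionLibrary
): ExpressionFunctionLibrary = generalLibrary + userDefinedLibrary
```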
3. The method of claim 1, wherein the step of obtaining a control operation type associated with the facial action based on the target expression function library and executing a video control operation corresponding to the control operation type for the video player comprises:
obtaining a control operation type associated with the facial action from the target expression function library;
and calling a video control method corresponding to the control operation type, and executing the video control operation on the video player through the video control method.
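The two steps of claim 3 — obtaining the operation type, then invoking the corresponding video control method — might look like the following sketch, reusing ControlOperationType from the earlier illustration; the player interface, its method names, and the seek and volume step values are invented for exposition:

```kotlin
// Claim 3 sketch: each control operation type resolves to a concrete
// video control method on the player. All method names are hypothetical.
interface ControllableVideoPlayer {
    fun togglePlayPause()
    fun toggleScreenOrientation()
    fun seekBy(millis: Long)
    fun changeVolumeBy(step: Int)
}

fun executeControlOperation(player: ControllableVideoPlayer, operation: ControlOperationType) {
    when (operation) {
        ControlOperationType.TOGGLE_PLAY_PAUSE -> player.togglePlayPause()
        ControlOperationType.TOGGLE_SCREEN_ORIENTATION -> player.toggleScreenOrientation()
        ControlOperationType.SEEK_FORWARD -> player.seekBy(10_000L)    // 10 s; value assumed
        ControlOperationType.SEEK_BACKWARD -> player.seekBy(-10_000L)  // value assumed
        ControlOperationType.VOLUME_UP -> player.changeVolumeBy(+1)
        ControlOperationType.VOLUME_DOWN -> player.changeVolumeBy(-1)
    }
}
```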
4. The method of claim 1, wherein the step of, in response to the video player being in a started state, monitoring the expression of the user through the camera to acquire the facial actions of the user in real time comprises:
in response to the video player entering a video playing interface each time after the video player is started, requesting the expression monitoring permission;
in response to the user authorizing the expression monitoring permission, starting the camera to monitor the expression of the user so as to acquire the facial actions of the user in real time;
the method further comprises the following steps:
and in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period, stopping monitoring the expression of the user.
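The monitoring lifecycle of claim 4 — requesting the permission on entering the playing interface, monitoring while authorized, and stopping only if playback finishes and is not resumed within the specified time period — can be sketched as below. The camera abstraction, the 30-second grace value, and the use of java.util.Timer are all assumptions; a real implementation would also handle threading and cancellation on teardown:

```kotlin
import java.util.Timer
import java.util.TimerTask
import kotlin.concurrent.schedule

// Hypothetical abstraction over camera-based expression monitoring.
interface ExpressionCamera {
    fun startMonitoring()
    fun stopMonitoring()
}

class ExpressionMonitorLifecycle(
    private val camera: ExpressionCamera,
    private val gracePeriodMillis: Long = 30_000  // the "specified time period"; value assumed
) {
    private val timer = Timer()
    private var pendingStop: TimerTask? = null

    // Called once the user authorizes the expression monitoring permission
    // requested on entering the video playing interface.
    fun onPermissionGranted() {
        pendingStop?.cancel()
        camera.startMonitoring()
    }

    // Called when the video in the playing interface finishes: schedule a
    // stop that fires only if playback is not triggered to continue in time.
    fun onVideoFinished() {
        pendingStop?.cancel()
        pendingStop = timer.schedule(gracePeriodMillis) { camera.stopMonitoring() }
    }

    // Playback resumed within the grace period: cancel the scheduled stop.
    fun onPlaybackResumed() {
        pendingStop?.cancel()
        pendingStop = null
    }
}
```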
5. The method according to any one of claims 1-4, further comprising:
and in response to receiving a triggering instruction for any video operation control in the video player, executing a video control operation corresponding to the video operation control for the video player.
6. The method of any one of claims 1-4, wherein the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and closing the eyes for a specified duration, N being a positive integer greater than 1; and the control operation types include at least one of: a switching operation between a pause state and a play state, a switching operation between a horizontal screen state and a vertical screen state, a video playing progress adjustment operation, and a video volume adjustment operation.
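Claim 6 enumerates facial action types and control operation types without prescribing which action maps to which operation; one hypothetical default pairing for a general expression function library, using the illustrative types above, might be:

```kotlin
// One possible default general expression function library; the pairings
// are arbitrary illustrations, not prescribed by claim 6.
val defaultGeneralLibrary: ExpressionFunctionLibrary = mapOf(
    FacialActionType.N_CONSECUTIVE_BLINKS to ControlOperationType.TOGGLE_PLAY_PAUSE,
    FacialActionType.MOUTH_OPEN to ControlOperationType.TOGGLE_SCREEN_ORIENTATION,
    FacialActionType.HEAD_SHAKE_RIGHT to ControlOperationType.SEEK_FORWARD,
    FacialActionType.HEAD_SHAKE_LEFT to ControlOperationType.SEEK_BACKWARD,
    FacialActionType.HEAD_UP to ControlOperationType.VOLUME_UP,
    FacialActionType.HEAD_DOWN to ControlOperationType.VOLUME_DOWN
)
```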
7. A video playback control apparatus, comprising:
a user expression monitoring module, configured to, in response to any video player triggered to be controlled through facial recognition being in a started state, monitor the expression of the user through a camera so as to acquire the facial actions of the user in real time;
a first video playing control module, configured to, in response to a target expression function library containing the facial action type of a currently detected facial action, acquire a control operation type associated with the facial action based on the target expression function library and execute, for the video player, a video control operation corresponding to the control operation type;
the target expression function library is an expression function library associated with the video player, and the expression function library comprises a mapping relation between at least one facial action type and a control operation type.
8. The apparatus of claim 7, further comprising:
an expression function library setting module, configured to acquire a general expression function library associated with the video player and construct the target expression function library associated with the video player by combining it with a user-defined expression function library locally set on the mobile terminal where the video player is located;
wherein the general expression function library is an expression function library uniformly set and issued through a server, and the user-defined expression function library is an expression function library customized by the user for the video player.
9. The apparatus of claim 7, wherein the first video playback control module comprises:
an operation type acquisition sub-module, configured to acquire a control operation type associated with the facial action from the target expression function library;
and a video playing control sub-module, configured to call a video control method corresponding to the control operation type and execute the video control operation on the video player through the video control method.
10. The apparatus of claim 7, wherein the user expression monitoring module comprises:
a monitoring permission acquisition sub-module, configured to request the expression monitoring permission in response to the video player entering a video playing interface each time after the video player is started;
a user expression monitoring sub-module, configured to, in response to the user authorizing the expression monitoring permission, start the camera to monitor the expression of the user so as to acquire the facial actions of the user in real time;
the device further comprises:
and a stop monitoring module, configured to stop monitoring the expression of the user in response to the video in the video playing interface having finished playing and playback not being triggered to continue within a specified time period.
11. The apparatus according to any one of claims 7-10, further comprising:
and a second video playing control module, configured to, in response to receiving a trigger instruction for any video operation control in the video player, execute the video control operation corresponding to the video operation control for the video player.
12. The apparatus of any one of claims 7-10, wherein the facial action types in the expression function library include at least one of: N consecutive blinks, opening the mouth, shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and closing the eyes for a specified duration, N being a positive integer greater than 1; and the control operation types include at least one of: a switching operation between a pause state and a play state, a switching operation between a horizontal screen state and a vertical screen state, a video playing progress adjustment operation, and a video volume adjustment operation.
13. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video playback control method according to any one of claims 1 to 6.
14. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the video playback control method according to any one of claims 1 to 6.
CN202011346499.3A 2020-11-25 2020-11-25 Video playing control method and device, electronic equipment and storage medium Pending CN112492096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011346499.3A CN112492096A (en) 2020-11-25 2020-11-25 Video playing control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112492096A (en) 2021-03-12

Family

ID=74935467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011346499.3A Pending CN112492096A (en) 2020-11-25 2020-11-25 Video playing control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112492096A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130238995A1 (en) * 2012-03-12 2013-09-12 sCoolTV, Inc Apparatus and method for adding content using a media player
CN103914142A (en) * 2013-01-04 2014-07-09 三星电子株式会社 Apparatus and method for providing control service using head tracking technology in electronic device
CN104007826A (en) * 2014-06-17 2014-08-27 合一网络技术(北京)有限公司 Video control method and system based on face movement identification technology
US20180330756A1 (en) * 2016-11-19 2018-11-15 James MacDonald Method and apparatus for creating and automating new video works
CN107295409A (en) * 2017-08-08 2017-10-24 广东小天才科技有限公司 A kind of method, device, terminal device and the storage medium of control video playback
CN110636383A (en) * 2019-09-20 2019-12-31 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556603A (en) * 2021-07-21 2021-10-26 维沃移动通信(杭州)有限公司 Method and device for adjusting video playing effect and electronic equipment
CN113556603B (en) * 2021-07-21 2023-09-19 维沃移动通信(杭州)有限公司 Method and device for adjusting video playing effect and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210312