CN110557683B - Video playing control method and electronic equipment - Google Patents


Info

Publication number
CN110557683B
Authority
CN
China
Prior art keywords
video
target
name
input
determining
Prior art date
Legal status
Active
Application number
CN201910888082.0A
Other languages
Chinese (zh)
Other versions
CN110557683A (en)
Inventor
杨涛
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910888082.0A priority Critical patent/CN110557683B/en
Publication of CN110557683A publication Critical patent/CN110557683A/en
Application granted granted Critical
Publication of CN110557683B publication Critical patent/CN110557683B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Abstract

The invention provides a video playing control method and electronic equipment. The method includes: receiving a first input of a user; and, in response to the first input, playing a first video segment of a target video at a first play speed, the first video segment including target video content that is associated with the first input, wherein the first play speed is different from a second play speed of a second video segment in the target video, the second video segment not including the target video content. The scheme of the invention thus addresses the problem that a user who only wants to watch the frames showing a specific object in a video file must frequently adjust the playing speed and progress.

Description

Video playing control method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video playing control method and electronic equipment.
Background
With the development of video coding and decoding technology and of networks, people encounter more and more videos in daily life, and most people now watch a large number of videos every day.
In existing video progress control, when a user only wants to watch the scenes featuring the male or female lead in a video, the user can only control the playing speed and progress by sliding the screen, dragging the progress bar, pressing buttons, and so on, and then resume normal playback once a scene with the leads is found. This interaction requires the user to frequently adjust the playing speed and progress while simultaneously watching for frames that contain the leads, so the control process is cumbersome.
Disclosure of Invention
The embodiment of the invention provides a video playing control method and electronic equipment, aiming to solve the problem that a user who only wants to watch the frames showing a specific object in a video file must frequently adjust the playing speed and progress.
In a first aspect, an embodiment of the present invention provides a video playback control method, including:
receiving a first input of a user;
in response to the first input, playing a first video segment of a target video at a first play speed, the first video segment including target video content, the target video content being associated with the first input;
wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
the input receiving module is used for receiving a first input of a user;
a playing module, configured to play a first video segment in a target video at a first playing speed in response to the first input, where the first video segment includes target video content, and the target video content is associated with the first input;
wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the video playing control method when executing the program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the video playback control method described above.
In the embodiment of the present invention, the first input of the user is associated with the target video content, so the first video segment including the target video content can be determined according to the first input, and the first video segment (which includes the target video content) and the second video segment (which does not) can be played at different speeds. The embodiment can therefore directly select, from the target video, the video segments that include the target content and play them at a speed different from that of the segments that do not, automatically adjusting the playing speed for the specific video content so that the user can focus on it. As a result, the user can focus on specific video content without frequently adjusting the playing speed and progress, which simplifies the operation and makes it more convenient to watch the video file.
Drawings
Fig. 1 is a flowchart illustrating a video playback control method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target display interface according to an embodiment of the invention;
FIG. 3 is a second schematic diagram of a target display interface according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating an interface display of a circle object during a target video playing process according to an embodiment of the present invention;
FIG. 5 shows a block diagram of an electronic device of an embodiment of the invention;
fig. 6 is a schematic diagram showing a hardware configuration of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a video playing control method, as shown in fig. 1, the method includes the following steps:
step 101: a first input is received from a user.
The first input is used to determine the target content in a target video. That is, the target video content is associated with the first input, and different target video content may be determined based on different first inputs.
Step 102: in response to the first input, a first video segment in the target video is played at a first play speed.
Wherein the first video segment includes target video content, the target video content being associated with the first input; the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content.
In addition, the target video content may be content that the user wants to focus on, or content that the user does not want to watch. When the target video content is content the user wants to focus on, the first video clip is the clip the user needs to watch and the second video clip is the clip the user does not: the first video clip can be played normally while the second is fast-forwarded; or the first clip can be played slowly while the second is played normally; or the first clip can be played slowly while the second is played quickly. In each case the first playing speed is lower than the second playing speed.
When the target video content is content the user does not want to watch, the first video clip is the clip the user does not need to watch and the second video clip is the clip the user does: the second video clip can be played normally while the first is fast-forwarded; or the second clip can be played slowly while the first is played normally; or the second clip can be played slowly while the first is played quickly. In each case the first playing speed is higher than the second playing speed.
Here, normal playing means the playing speed equals a default preset value, slow playing means the playing speed is less than the default preset value, and fast playing means the playing speed is greater than the default preset value.
Therefore, in the embodiment of the invention, the user can directly specify either the target content to focus on or the content to skip, and the target video is then played such that the clips the user wants to watch play at a lower speed than the clips the user does not.
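The three speed pairings above can be summarized in a short sketch. This is an illustrative Python outline only; the function names and the concrete speed values are assumptions, not taken from the patent:

```python
# Sketch of the speed policy described above: segments that contain the
# target content play at the first speed, all others at the second speed.
# Which of the two speeds is slower depends on whether the user marked the
# target content as "focus" (watch carefully) or as content to skip.

NORMAL_SPEED = 1.0  # the default preset value

def choose_speeds(target_is_focus: bool) -> tuple[float, float]:
    """Return (first_speed, second_speed) for segments with / without the
    target content. Uses the "normal vs. fast-forward" pairing; the text
    also allows slow/normal and slow/fast pairings."""
    if target_is_focus:
        # first clip played normally, second clip fast-forwarded
        return NORMAL_SPEED, 2.0
    # target content is what the user does NOT want to watch:
    # first clip fast-forwarded, second clip played normally
    return 2.0, NORMAL_SPEED

def speed_for_segment(contains_target: bool, target_is_focus: bool) -> float:
    first, second = choose_speeds(target_is_focus)
    return first if contains_target else second
```

In both branches the clip the user wants to watch ends up at the lower speed, matching the constraint stated in the text.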
As described above, in the embodiment of the present invention the first input of the user is associated with the target video content, so the first video clip including the target video content can be determined from the first input, and the clip that includes the target video content and the clip that does not can be played at different speeds. The playing speed for the specific video content is thus adjusted automatically, and the user can focus on that content without frequently adjusting the playing speed and progress, which simplifies the operation and makes it more convenient to watch the video file.
Optionally, the playing a first video segment in the target video at a first playing speed in response to the first input includes:
determining target video content in the target video based on the first input;
determining a first video segment associated with the target video content based on a mapping relation between pre-stored video content and a video frame timestamp;
and playing the first video clip according to the first playing speed.
That is, in the embodiment of the present invention, the mapping relationship between the video content of the target video and the video frame timestamps is stored in advance. After the target video content is determined according to the first input, the mapping relationship indicates which video frames include the target video content and which do not, yielding a first video segment that includes the target video content and a second video segment that does not.
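As an illustrative sketch (hypothetical names and data layout, not the patent's implementation), splitting the timeline with such a pre-stored mapping could look like:

```python
# Given a pre-stored mapping from content names to the timestamp ranges
# (in seconds) in which that content appears, split the video timeline
# into the "first" segments (containing the target content) and the
# "second" segments (everything else).

def split_segments(mapping, target_name, duration):
    """mapping: {content_name: [(start, end), ...]} with ranges in seconds.
    Returns (first_segments, second_segments) covering [0, duration)."""
    first = sorted(mapping.get(target_name, []))
    second, cursor = [], 0
    for start, end in first:
        if cursor < start:
            # gap before this target range belongs to the second segment
            second.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        second.append((cursor, duration))
    return first, second
```

Each returned range can then be handed to the player together with its first or second playing speed.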
Optionally, the determining target video content in the target video based on the first input includes:
acquiring the input content of the first input, wherein the input content of the first input comprises at least one target keyword;
determining the target video content based on the at least one target keyword.
In other words, in the embodiment of the invention, the target video content can be determined from input keywords, so the user can enter different keywords according to actual needs and thus conveniently watch different target content.
Optionally, the determining the target video content based on the at least one target keyword includes:
determining at least one first target object name according to the at least one target keyword based on the pre-stored object name and object characteristics appearing in the target video;
and determining the at least one first target object name as the target video content.
In the embodiment of the present invention, the names and features of objects appearing in the target video may be stored in advance. The user may input object names and/or object features as keywords; a first target object name matching the input names and/or features is then looked up, and the matched first target object name is determined as the target video content, so that the first video segment is a video segment that includes the first target object.
Optionally, the determining the target video content based on the at least one target keyword includes:
determining at least one first target scene name according to the at least one target keyword based on a pre-stored name of a scene appearing in the target video;
determining the at least one first target scene name as the target video content.
In the embodiment of the present invention, the names of scenes appearing in the target video may be stored in advance. The user may input scene names as keywords; a first target scene name matching the input scene name is then searched for among the pre-stored scene names, and the matched first target scene name is determined as the target video content, so that the first video clip is a video clip that includes the first target scene.
Further, in the case that the target video content is a first target scene name, the mapping relationship is a mapping between the names of scenes appearing in the target video and video frame timestamps, and the determining of the first video segment associated with the target video content based on the pre-stored mapping relationship between video content and video frame timestamps includes:
acquiring a target video frame timestamp associated with the at least one first target scene name based on a mapping relation between the name of a scene appearing in the target video and a video frame timestamp;
determining the first video segment based on the target video frame timestamp.
When the target video content includes two or more first target scene names, the target video frame timestamps are the timestamps that have a mapping relationship with all of the first target scene names included in the target video content; that is, the target video frames are the frames that include all of the first target scenes.
In addition, the mapping relationship between the names of scenes appearing in the target video and the video frame timestamps may be represented as a correspondence table. For example, if the target video is a movie clip, the mapping may be a table relating the scenario keywords contained in the movie clip to video frame timestamps, as shown in Table 1. The mapping may, of course, adopt other data storage forms, which are not listed here.
Table 1: Plot keyword and video frame timestamp correspondence table
As can be seen from the above, once the correspondence between the scenarios appearing in a video and the frame playing timestamps has been established for a video segment, it can be determined which timestamps correspond to video frames that include the target content and which do not. For example, when the user wants to focus on the scenario of the first keyword and the second keyword, then, as shown in Table 1, for the appearance start timestamp 0 h 01 min 25 s and the corresponding appearance end timestamp 0 h 05 min 25 s, the frames played between those two timestamps all belong to the scenario pictures of the first and second keywords. It can thus be determined which video frames need to be played at the first preset speed and which at the second preset speed.
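The multi-keyword case described above, where target frames must fall in ranges mapped to all of the selected keywords, amounts to intersecting interval lists. A hedged Python sketch with illustrative names and data:

```python
# Intersecting timestamp ranges: a frame belongs to the target content
# only if its timestamp lies inside a range mapped to EVERY selected
# keyword. Ranges are (start, end) pairs in seconds.

def intersect_ranges(a, b):
    """Intersect two sorted lists of (start, end) ranges."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # advance whichever range finishes first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def target_ranges(scene_map, keywords):
    """scene_map: {scene_name: [(start, end), ...]}; returns the ranges
    common to all of the given keywords."""
    ranges = sorted(scene_map[keywords[0]])
    for kw in keywords[1:]:
        ranges = intersect_ranges(ranges, sorted(scene_map[kw]))
    return ranges
```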
Optionally, names of objects and names of scenes appearing in the target video are stored in advance, and the at least one target keyword includes a first object name and a first scene name;
the determining the target video content based on the at least one target keyword comprises:
searching a second target object name matched with the first object name in the prestored names of objects appearing in the target video;
searching a second target scene name matched with the first scene name in the prestored names of scenes appearing in the target video;
and determining the second target object name and the second target scene name as the target video content.
In the embodiment of the present invention, the names of objects and the names of scenes appearing in the target video may be stored in advance. The user may input a scene name and an object name together as keywords; a second target object name matching the input first object name and a second target scene name matching the input first scene name are then searched for respectively, and the matched second target object name and second target scene name are determined as the target video content, so that the first video clip is a video clip that includes both the second target object and the second target scene.
In addition, it should be noted that the first object name may be one object name or several, as may the second target object name; likewise, the first scene name may be one scene name or several, as may the second target scene name.
Further, the mapping relationship is a mapping relationship among names of objects appearing in the target video, names of scenes and video frame timestamps, and the determining the first video segment associated with the target video content based on the pre-stored mapping relationship between the video content and the video frame timestamps includes:
acquiring a first video frame timestamp associated with the name of the second target object based on the mapping relation among the name of the object appearing in the target video, the name of the scene and the video frame timestamp;
filtering out, from the first video frame timestamps, the second video frame timestamps that are also associated with the second target scene name;
determining the second video frame timestamp as a target video frame timestamp;
determining the first video segment based on the target video frame timestamp.
When the target video content includes at least two scene names and at least two object names, the target video frame timestamps are the timestamps that have a mapping relationship with all of the scene names and object names included in the target video content; that is, the target video frames are the frames that include all of those scenes and objects.
As can be seen from the above, in the embodiment of the present invention, at least one of the names of objects appearing in the target video, the features of those objects, and the names of scenes may be stored, so that the user can determine the target content in the target video by entering search keywords. In this case, an input box for search keywords needs to be displayed in the display interface of the electronic device playing the target video, as shown in fig. 2.
For example, after the user inputs a search keyword, the category of the input content can be detected. If the input contains only an object name, the playing progress times of the corresponding object can be obtained directly from the index table. If the input includes a scene name in addition to the object name, the entries containing the object name are selected first, and the entry containing the input scene name is then found among them; if no entry contains the object name, the scene name is searched for directly. If several entries match, the best entry can be selected according to the degree of matching, or several results can be presented for the user to choose from.
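The lookup order just described, with the object name narrowed first, the scene name second, and a scene-only fallback, could be sketched as follows. The index layout and every name here are assumptions for illustration; ranking by degree of matching is left to the caller:

```python
# Index entries pair an object name and a scene name with a timestamp
# range, mirroring the object/scene/timestamp correspondence table.

def find_entries(index, object_name=None, scene_name=None):
    """index: list of dicts {"object", "scene", "start", "end"}.
    Narrow by object name first, then by scene name; fall back to a
    scene-only search if the object name matches nothing. Returns all
    candidate entries; the caller can rank them or show them to the user."""
    entries = index
    if object_name:
        by_object = [e for e in entries if e["object"] == object_name]
        if by_object:
            entries = by_object
        elif scene_name:
            # object name not found: search the scene name directly
            return [e for e in entries if e["scene"] == scene_name]
        else:
            return []
    if scene_name:
        entries = [e for e in entries if e["scene"] == scene_name]
    return entries
```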
In addition, the mapping relationship among the names of objects appearing in the target video, the names of scenes, and the video frame timestamps may be represented as a correspondence table. For example, if the target video is a movie segment, the mapping may be a table relating the scenario characters, the scenario keywords, and the video frame timestamps of the segment, as shown in Table 2. The mapping may, of course, adopt other data storage forms, which are not listed here.
Table 2: Scenario character, scenario keyword and video frame timestamp correspondence table
As can be seen from the above description, once the correspondence between the characters and scenarios appearing in a video and the frame playing timestamps has been established for a video segment, it can be determined which timestamps correspond to video frames that include the target content and which do not. For example, when the user wants to focus on the scenario of actor A with the first keyword and the second keyword, then, as shown in Table 2, for the appearance start timestamp 0 h 01 min 25 s and the corresponding appearance end timestamp 0 h 05 min 25 s, the frames played between those two timestamps all contain actor A and belong to the scenario pictures of the first and second keywords. It can thus be determined which video frames need to be played at the first playing speed and which at the second playing speed.
Optionally, the receiving a first input of a user includes:
receiving a first input of object information displayed in a target display interface by a user, wherein the object information comprises at least one of an object name and an object image;
the determining target video content in the target video based on the first input comprises:
determining at least one first target object name according to the object information aimed at by the first input;
and determining the at least one first target object name as the target video content.
That is, the embodiment of the present invention may also display, on the electronic device playing the target video, the names and/or images of objects appearing in the target video; the user then determines the target content by selecting at least one object name and/or image in the display interface.
Specifically, if a video file is a movie fragment, the names of the actors in the fragment (for example, their character names in the scenario and/or their real names) and the actors' head images may be displayed on the electronic device playing the fragment, as shown in fig. 2, or the actor names may be displayed in a list, as shown in fig. 3; the user can then select on that interface the actors to focus on or the actors to skip.
Optionally, in a case that the target video content includes the at least one first target object name, the mapping relationship is a mapping relationship between names of objects appearing in the target video and video frame timestamps, and the determining a first video segment associated with the target video content based on a pre-stored mapping relationship between video content and video frame timestamps includes:
acquiring a target video frame timestamp associated with the at least one first target object name based on a mapping relation between the name of an object appearing in the target video and a video frame timestamp;
determining the first video segment based on the target video frame timestamp.
When the target video content includes two or more first target object names, the target video frame timestamps are the timestamps that have a mapping relationship with all of the first target object names included in the target video content; that is, the target video frames are the frames that include all of the first target objects.
In addition, the mapping relationship between the names of objects appearing in the target video and the video frame timestamps may be represented as a correspondence table. For example, if the target video is a movie fragment, the mapping may be a table relating the scenario characters contained in the fragment to video frame timestamps, as shown in Table 3. The mapping may, of course, adopt other data storage forms, which are not listed here.
Table 3: Plot character and video frame timestamp correspondence table

ID  Plot character  Appearance start timestamp  Appearance end timestamp
1   Actor A         0 h 01 min 25 s             0 h 05 min 25 s
2   Actor A         1 h 01 min 25 s             1 h 03 min 25 s
3   Actor B         2 h 01 min 25 s             2 h 50 min 25 s
As can be seen from the above description, once the correspondence between the characters appearing in a video and the frame playing timestamps has been established for a video segment, it can be determined which timestamps correspond to video frames that include the target content and which do not. For example, when the user wants to focus on actor A, then, as shown in Table 3, for the appearance start timestamp 0 h 01 min 25 s and the corresponding appearance end timestamp 0 h 05 min 25 s, the frames played between those two timestamps all contain actor A. It can thus be determined which video frames need to be played at the first playing speed and which at the second playing speed.
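Using the Table 3 entries (converted to seconds) as sample data, the per-frame speed decision could be sketched as follows; the speed values and names are illustrative assumptions, not from the patent:

```python
# Table 3 data in seconds: Actor A appears at 0:01:25-0:05:25 and
# 1:01:25-1:03:25, Actor B at 2:01:25-2:50:25.
ACTOR_RANGES = {
    "Actor A": [(85, 325), (3685, 3805)],
    "Actor B": [(7285, 10225)],
}

def playback_speed(t, target="Actor A", first_speed=1.0, second_speed=2.0):
    """Frames whose timestamp t (seconds) falls in one of the target
    actor's appearance ranges play at the first speed; all other frames
    play at the second speed."""
    in_target = any(start <= t <= end for start, end in ACTOR_RANGES[target])
    return first_speed if in_target else second_speed
```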
Optionally, the receiving a first input of a user includes:
receiving a first input performed by a user on a target object appearing in a first video frame during the playing of the target video;
the playing a first video segment in the target video at a first play speed in response to the first input, comprising:
identifying the target object in a second video frame, and acquiring feature information of the target object, wherein the second video frame comprises the first video frame and at least one video frame whose interval from the first video frame is less than a first preset time;
acquiring a timestamp of a video frame including an object that matches the feature information of the target object, and determining the timestamp as a target video frame timestamp;
determining the first video segment based on the target video frame timestamp;
and playing the first video clip according to the first playing speed.
Wherein the first video frame is any video frame of the target video.
Therefore, in the embodiment of the present invention, during the playing of the target video, an object appearing in the video picture may also be selected directly, for example by clicking or circling an object in the currently playing picture (the object circled by the dotted line shown in fig. 4 is the object selected by the user), and that object is taken as the target video content. This way of determining the target video content can be performed at any time during video playback, which further simplifies the user's operation.
In addition, after the user selects the target object, object recognition may be performed on the video frame displayed at the moment of selection, or on several video frames near that frame, so as to acquire feature information of the target object; video frames including the target object can then be selected from the frames of the target video according to that feature information.
Because the feature information of the target object is obtained from the first video frame together with at least one video frame whose interval from the first video frame is less than the first preset time, feature information that better characterizes the target object can be obtained, which in turn improves the accuracy of the subsequent screening of video frames that include the target object.
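A minimal sketch of this multi-frame idea, under the assumption that some model (outside this sketch, and not specified by the patent) has already extracted a feature vector per frame: averaging the vectors of the selected frame and its nearby frames yields a more stable descriptor of the target object than any single frame:

```python
def averaged_feature(frame_features):
    """Average equal-length per-frame feature vectors (one per frame)
    into a single, more robust descriptor of the target object.

    frame_features: non-empty list of lists of floats, covering the
    selected frame plus neighbours within the first preset time.
    """
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(vec[i] for vec in frame_features) / n for i in range(dim)]
```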
In addition, when the target object is a person, the step of acquiring a time stamp of a video frame including an object matching the feature information of the target object includes:
performing face recognition on a second video frame to acquire face features, wherein the second video frame comprises the first video frame and at least one video frame whose interval from the first video frame is less than the first preset time;
and acquiring the timestamps of the video frames that include a face matching the face features.
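As a hedged sketch of this matching step — assuming a face-recognition model has already produced an embedding vector per frame, which the patent does not specify — the timestamps of matching frames could be collected by thresholding a similarity score; the threshold value is an illustrative assumption:

```python
import math

def cosine(a, b):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matching_timestamps(frames, target_embedding, threshold=0.8):
    """Collect timestamps of frames whose face embedding matches the target.

    frames: iterable of (timestamp_s, face_embedding) pairs.
    """
    return [ts for ts, emb in frames
            if cosine(emb, target_embedding) >= threshold]
```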
Optionally, the determining the first video segment based on the target video frame timestamp includes:
determining at least one group of continuous video frames according to the target video frame timestamps, wherein the interval between adjacent video frame timestamps within a group of continuous video frames is less than a second preset time;
and determining one group of the continuous video frames as one first video segment.
If, among the timestamps of all the acquired target video frames, the interval between adjacent timestamps is smaller than a second preset time (for example, 1 second), the corresponding frames can be regarded as continuous frames, and the timestamps of the first and last frames of each run of continuous frames are taken as the appearance start timestamp and the appearance end timestamp. For example, if the target video content appears at playing timestamp 0 h 01 min 25 s and disappears at 0 h 05 min 25 s, and appears again at 1 h 01 min 25 s and disappears at 1 h 03 min 25 s, then the frames from 0 h 01 min 25 s to 0 h 05 min 25 s and from 1 h 01 min 25 s to 1 h 03 min 25 s are played at the first playing speed, while the frames from 0 h 05 min 25 s to 1 h 01 min 25 s are played at the second playing speed.
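The grouping rule described above can be sketched as follows; the function name and the one-second default gap are illustrative assumptions, not the patent's implementation:

```python
def group_segments(timestamps, max_gap=1.0):
    """Group frame timestamps (seconds) into (start, end) segments.

    Adjacent timestamps whose gap is below max_gap (the "second preset
    time") are treated as continuous frames; each run's first and last
    timestamps become the appearance start and end timestamps.
    """
    if not timestamps:
        return []
    ts = sorted(timestamps)
    segments = []
    start = prev = ts[0]
    for t in ts[1:]:
        if t - prev < max_gap:
            prev = t                     # still inside the same run
        else:
            segments.append((start, prev))  # close the finished run
            start = prev = t
    segments.append((start, prev))
    return segments
```

Segments returned by this sketch would then be played at the first playing speed, and the gaps between them at the second playing speed.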
As can be seen from the above, if the target video content starts to appear at the first time stamp, and the target video content disappears until the second time stamp, and then reappears at the third time stamp, and disappears until the fourth time stamp, the video frames in the time period between the first time stamp and the second time stamp are played at the first play speed, the video frames in the time period between the second time stamp and the third time stamp are played at the second play speed, and the video frames in the time period between the third time stamp and the fourth time stamp are played at the first play speed. That is, the embodiment of the present invention can automatically play the video clips including the target video content and the video clips not including the target video content at different speeds during the playing process of the target video.
In summary, the embodiments of the present invention allow a user to play only the video frames that include the target video content at a normal playing speed while fast-forwarding the other frames, or, conversely, to fast-forward the frames that include the target video content and play the other frames normally. The user only needs to select, in a video file, the target content that he or she wants (or does not want) to watch; the frames the user wants to watch are then played normally and the remaining frames are fast-forwarded, which makes operation convenient for the user.
An embodiment of the present invention also provides an electronic device, as shown in fig. 5, including:
an input receiving module 501, configured to receive a first input of a user;
a playing module 502, configured to play a first video segment in a target video at a first playing speed in response to the first input, where the first video segment includes target video content, and the target video content is associated with the first input;
wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content.
Optionally, the playing module 502 includes:
a first content determination unit configured to determine a target video content in the target video based on the first input;
the first video determining unit is used for determining a first video segment associated with the target video content based on the mapping relation between the pre-stored video content and the video frame time stamp;
and the first playing unit is used for playing the first video clip according to the first playing speed.
Optionally, the first content determination unit includes:
a keyword obtaining subunit, configured to obtain input content of the first input, where the input content of the first input includes at least one target keyword;
a content determination subunit, configured to determine the target video content based on the at least one target keyword.
Optionally, the content determination subunit is specifically configured to:
determining at least one first target object name according to the at least one target keyword, based on pre-stored names and features of objects appearing in the target video;
and determining the at least one first target object name as the target video content.
Optionally, the content determination subunit is specifically configured to:
determining at least one first target scene name according to the at least one target keyword based on a pre-stored name of a scene appearing in the target video;
determining the at least one first target scene name as the target video content.
Optionally, the mapping relationship is a mapping relationship between a name of a scene appearing in the target video and a video frame timestamp, and the first video determining unit is specifically configured to:
acquiring a target video frame timestamp associated with the at least one first target scene name based on a mapping relation between the name of a scene appearing in the target video and a video frame timestamp;
determining the first video segment based on the target video frame timestamp.
Optionally, names of objects and names of scenes appearing in the target video are stored in advance, and the at least one target keyword includes a first object name and a first scene name; the content determination subunit is specifically configured to:
searching a second target object name matched with the first object name in the prestored names of objects appearing in the target video;
searching a second target scene name matched with the first scene name in the prestored names of scenes appearing in the target video;
and determining the second target object name and the second target scene name as the target video content.
Optionally, the mapping relationship is a mapping relationship among a name of an object appearing in the target video, a name of a scene, and a video frame timestamp, and the first video determining unit is specifically configured to:
acquiring a first video frame timestamp associated with the name of the second target object based on the mapping relation among the name of the object appearing in the target video, the name of the scene and the video frame timestamp;
filtering a second video frame timestamp associated with the second target scene name in the first video frame timestamp;
determining the second video frame timestamp as a target video frame timestamp;
determining the first video segment based on the target video frame timestamp.
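A hedged sketch of this two-step lookup, representing both mappings as plain name-to-timestamp-set dictionaries (an assumed storage form — the patent leaves the data structure open): first gather the timestamps mapped to the object name, then keep only those also mapped to the scene name:

```python
def target_timestamps(obj_map, scene_map, obj_name, scene_name):
    """Return sorted target video frame timestamps for an object+scene query.

    obj_map / scene_map: dict mapping a name to a set of frame timestamps.
    Step 1: first video frame timestamps associated with the object name.
    Step 2: filter those to the ones also associated with the scene name.
    """
    first = obj_map.get(obj_name, set())
    second = first & scene_map.get(scene_name, set())
    return sorted(second)
```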
Optionally, the input receiving module 501 includes:
a first receiving unit, configured to receive a first input performed by a user on object information displayed in a target display interface, where the object information includes at least one of an object name and an object image;
the first content determination unit includes:
a target object determining subunit, configured to determine at least one first target object name according to the object information targeted by the first input;
and the second content determining subunit is used for determining the at least one first target object name as the target video content.
Optionally, the mapping relationship is a mapping relationship between a name of an object appearing in the target video and a video frame timestamp, and the first video determining unit is specifically configured to:
acquiring a target video frame timestamp associated with the at least one first target object name based on a mapping relation between the name of an object appearing in the target video and a video frame timestamp;
determining the first video segment based on the target video frame timestamp.
Optionally, the input receiving module comprises:
a second receiving unit, configured to receive a first input performed by a user on a target object appearing in a first video frame during the playing of the target video;
the playing module comprises:
a feature information acquisition unit, configured to identify the target object in a second video frame and acquire feature information of the target object, wherein the second video frame comprises the first video frame and at least one video frame whose interval from the first video frame is less than a first preset time;
a time stamp obtaining unit configured to obtain a time stamp of a video frame including an object matching the feature information of the target object, and determine the time stamp as a target video frame time stamp;
a first video determination unit to determine the first video segment based on the target video frame timestamp;
and the second playing unit is used for playing the first video clip according to the first playing speed.
Optionally, when the first video determining unit determines the first video segment based on the target video frame timestamp, the first video determining unit is specifically configured to:
determining at least one group of continuous video frames according to the target video frame timestamps, wherein the interval between adjacent video frame timestamps within a group of continuous video frames is less than a second preset time;
and determining one group of the continuous video frames as one first video segment.
It can be seen that, in the embodiment of the present invention, the first input of the user is associated with the target video content, so the first video segment including the target video content can be determined from the first input. The first video segment including the target video content and the second video segment not including it can therefore be played at different speeds: the video segment including the target content is selected directly from the target video and played at a speed different from that of the segments not including it. The playing speed for the specific video content is thus adjusted automatically, letting the user focus on that content without frequently adjusting the playing speed and progress, which simplifies operation and makes it convenient for the user to watch the video file.
It can be understood that the electronic device provided in the embodiment of the present invention can implement each process of the foregoing video playing control method, and the relevant descriptions about the video playing control method are applicable to the electronic device, and are not described herein again.
Embodiments of the present invention also provide an electronic device, as shown in fig. 6, the electronic device 600 includes but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Wherein, the processor 610 is configured to control the user input unit 607 to receive a first input of a user; the processor 610 is further configured to, in response to the first input, play a first video segment of a target video at a first play speed, the first video segment including target video content, the target video content being associated with the first input; wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content.
Therefore, in the electronic device 600 according to the embodiment of the present invention, the first input of the user is associated with the target video content, so the first video segment including the target video content can be determined from the first input, and the first video segment and the second video segment not including the target video content can be played at different speeds. The embodiment thus selects the video segment including the target content directly from the target video and plays it at a speed different from that of the other segments, automatically adjusting the playing speed for the specific video content. The user can focus on the specific video content without frequently adjusting the playing speed and progress, which simplifies the operation process and makes it convenient to watch the video file.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and then delivers the downlink data to the processor 610 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and then output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the electronic apparatus 600 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 600 or may be used to transmit data between the electronic device 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile phone, and the like. Further, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 may be logically connected to the processor 610 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video playing control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present invention further provides an electronic device, including:
a touch screen, wherein the touch screen comprises a touch sensitive surface and a display screen;
one or more processors;
one or more memories;
one or more sensors;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the steps of the video playback control method described above.
An embodiment of the present invention further provides a computer non-transitory storage medium, where a computer program is stored, and when the computer program is executed by a computing device, the method for controlling video playback is implemented.
The embodiment of the present invention further provides a computer program product, when the computer program product runs on a computer, the computer is enabled to execute the method for controlling video playing.
Further, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A video playback control method, comprising:
receiving a first input of a user;
in response to the first input, playing a first video segment of a target video at a first play speed, the first video segment including target video content, the target video content being associated with the first input;
wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content;
the receiving a first input of a user comprises:
receiving a first input performed by a user on a target object appearing in a first video frame during the playing of the target video;
the playing a first video segment in the target video at a first play speed in response to the first input, comprising:
identifying the target object in a second video frame, and acquiring feature information of the target object, wherein the second video frame comprises the first video frame and at least one video frame whose interval from the first video frame is less than a first preset time;
acquiring a timestamp of a video frame including an object that matches the feature information of the target object, and determining the timestamp as a target video frame timestamp;
determining the first video segment based on the target video frame timestamp;
and playing the first video clip according to the first playing speed.
2. The video playback control method of claim 1, wherein playing back a first video segment of a target video at a first playback speed in response to the first input comprises:
determining target video content in the target video based on the first input;
determining a first video segment associated with the target video content based on a mapping relation between pre-stored video content and a video frame timestamp;
and playing the first video clip according to the first playing speed.
3. The method of claim 2, wherein the determining the target video content in the target video based on the first input comprises:
acquiring the input content of the first input, wherein the input content of the first input comprises at least one target keyword;
determining the target video content based on the at least one target keyword.
4. The video playback control method according to claim 3,
the determining the target video content based on the at least one target keyword comprises:
determining at least one first target object name according to the at least one target keyword, based on pre-stored names and features of objects appearing in the target video;
and determining the at least one first target object name as the target video content.
5. The video playback control method according to claim 3,
the determining the target video content based on the at least one target keyword comprises:
determining at least one first target scene name according to the at least one target keyword based on a pre-stored name of a scene appearing in the target video;
determining the at least one first target scene name as the target video content.
6. The video playback control method according to claim 5, wherein the mapping relationship is a mapping relationship between names of scenes appearing in the target video and video frame timestamps, and the determining the first video segment associated with the target video content based on the pre-stored mapping relationship between video content and video frame timestamps comprises:
acquiring a target video frame timestamp associated with the at least one first target scene name based on the mapping relationship between the names of scenes appearing in the target video and video frame timestamps;
and determining the first video segment based on the target video frame timestamp.
7. The video playback control method according to claim 3, wherein names of objects and names of scenes appearing in the target video are stored in advance, and the at least one target keyword comprises a first object name and a first scene name;
the determining the target video content based on the at least one target keyword comprises:
searching the pre-stored names of objects appearing in the target video for a second target object name matching the first object name;
searching the pre-stored names of scenes appearing in the target video for a second target scene name matching the first scene name;
and determining the second target object name and the second target scene name as the target video content.
8. The method according to claim 7, wherein the mapping relationship is a mapping relationship among names of objects appearing in the target video, names of scenes, and video frame timestamps, and the determining the first video segment associated with the target video content based on the pre-stored mapping relationship between video content and video frame timestamps comprises:
acquiring first video frame timestamps associated with the second target object name based on the mapping relationship among the names of objects appearing in the target video, the names of scenes, and video frame timestamps;
filtering, from the first video frame timestamps, second video frame timestamps associated with the second target scene name;
determining the second video frame timestamps as target video frame timestamps;
and determining the first video segment based on the target video frame timestamps.
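Claim 8's two-stage narrowing (object first, then scene) amounts to intersecting the two timestamp sets. The sketch below assumes a hypothetical data model in which each object name and each scene name maps to the timestamps of the frames where it appears; the function name `filter_by_scene` is illustrative.

```python
def filter_by_scene(object_to_ts, scene_to_ts, object_name, scene_name):
    """Acquire the timestamps for the object, then keep only those also
    associated with the scene; result is the sorted target timestamps."""
    first = set(object_to_ts.get(object_name, []))   # first video frame timestamps
    scene = set(scene_to_ts.get(scene_name, []))
    return sorted(first & scene)                     # second (target) timestamps
```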
9. The video playback control method according to claim 2, wherein the receiving a first input from a user comprises:
receiving a first input by a user on object information displayed in a target display interface, wherein the object information comprises at least one of an object name and an object image;
the determining target video content in the target video based on the first input comprises:
determining at least one first target object name according to the object information targeted by the first input;
and determining the at least one first target object name as the target video content.
10. The method according to claim 4 or 9, wherein the mapping relationship is a mapping relationship between names of objects appearing in the target video and video frame timestamps, and the determining the first video segment associated with the target video content based on the pre-stored mapping relationship between video content and video frame timestamps comprises:
acquiring a target video frame timestamp associated with the at least one first target object name based on the mapping relationship between the names of objects appearing in the target video and video frame timestamps;
and determining the first video segment based on the target video frame timestamp.
11. The video playback control method according to claim 1, 6 or 8, wherein the determining the first video segment based on the target video frame timestamp comprises:
determining at least one set of consecutive video frames according to the target video frame timestamp, wherein the interval between timestamps of adjacent video frames within each set of consecutive video frames is less than a second preset time;
and determining each set of the consecutive video frames as one first video segment.
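The grouping rule in claim 11 is a gap-based run split: sorted target timestamps are cut into runs wherever the gap between adjacent timestamps reaches the second preset time, and each run becomes one first video segment. A hypothetical sketch (`group_segments` and `preset_gap` are illustrative names):

```python
def group_segments(timestamps, preset_gap):
    """Split timestamps into runs where every adjacent gap is < preset_gap;
    each run corresponds to one first video segment."""
    if not timestamps:
        return []
    ts = sorted(timestamps)
    segments, current = [], [ts[0]]
    for t in ts[1:]:
        if t - current[-1] < preset_gap:
            current.append(t)       # still within the same consecutive run
        else:
            segments.append(current)  # gap too large: close the run
            current = [t]
    segments.append(current)
    return segments
```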
12. An electronic device, comprising:
the input receiving module is used for receiving a first input of a user;
a playing module, configured to play a first video segment in a target video at a first playing speed in response to the first input, where the first video segment includes target video content, and the target video content is associated with the first input;
wherein the first playback speed is different from a second playback speed of a second video segment in the target video, the second video segment not including the target video content;
the input receiving module includes:
the second receiving unit is used for receiving a user's first input on a target object appearing in a first video frame during playback of the target video;
the playing module comprises:
the feature information acquisition unit is used for identifying the target object in a second video frame and acquiring feature information of the target object, wherein the second video frame comprises the first video frame and at least one video frame whose interval from the first video frame is less than a first preset time;
a timestamp obtaining unit, configured to obtain a timestamp of a video frame containing an object that matches the feature information of the target object, and determine the timestamp as a target video frame timestamp;
a first video determination unit to determine the first video segment based on the target video frame timestamp;
and the second playing unit is used for playing the first video clip according to the first playing speed.
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor implements the video playback control method according to any one of claims 1 to 11 when executing the program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video playback control method according to any one of claims 1 to 11.
CN201910888082.0A 2019-09-19 2019-09-19 Video playing control method and electronic equipment Active CN110557683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910888082.0A CN110557683B (en) 2019-09-19 2019-09-19 Video playing control method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910888082.0A CN110557683B (en) 2019-09-19 2019-09-19 Video playing control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110557683A CN110557683A (en) 2019-12-10
CN110557683B true CN110557683B (en) 2021-08-10

Family

ID=68740797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910888082.0A Active CN110557683B (en) 2019-09-19 2019-09-19 Video playing control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110557683B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050214A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Video playing method and electronic equipment
CN111988670B (en) * 2020-08-18 2021-10-22 腾讯科技(深圳)有限公司 Video playing method and device, electronic equipment and computer readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106021496A (en) * 2016-05-19 2016-10-12 海信集团有限公司 Video search method and video search device
CN106851407A * 2017-01-24 2017-06-13 维沃移动通信有限公司 Control method and terminal for video playback progress

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8331772B1 (en) * 2006-09-07 2012-12-11 Opentv, Inc. Systems and methods to position and play content
CN104796781B (en) * 2015-03-31 2019-01-18 小米科技有限责任公司 Video clip extracting method and device
CN107743248A * 2017-09-28 2018-02-27 北京奇艺世纪科技有限公司 Video fast-forward method and device
CN108184137A * 2017-12-29 2018-06-19 北京奇虎科技有限公司 Positioning playback method and apparatus for streaming media FLV files
CN108388583A * 2018-01-26 2018-08-10 北京览科技有限公司 Video search method and video search apparatus based on video content
CN108401193A * 2018-03-21 2018-08-14 北京奇艺世纪科技有限公司 Video playing method, device and electronic equipment

Also Published As

Publication number Publication date
CN110557683A (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN107786827B (en) Video shooting method, video playing method and device and mobile terminal
CN110557683B (en) Video playing control method and electronic equipment
CN108182271B (en) Photographing method, terminal and computer readable storage medium
CN109078319B (en) Game interface display method and terminal
CN110740259B (en) Video processing method and electronic equipment
CN108777766B (en) Multi-person photographing method, terminal and storage medium
CN108984143B (en) Display control method and terminal equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN110855921B (en) Video recording control method and electronic equipment
CN111010608B (en) Video playing method and electronic equipment
CN111010510B (en) Shooting control method and device and electronic equipment
CN109922294B (en) Video processing method and mobile terminal
CN109618218B (en) Video processing method and mobile terminal
CN108924413B (en) Shooting method and mobile terminal
CN109246474B (en) Video file editing method and mobile terminal
CN108762877B (en) Control method of mobile terminal interface and mobile terminal
CN108282611B (en) Image processing method and mobile terminal
CN111050214A (en) Video playing method and electronic equipment
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN108347628B (en) Method for prompting member activation, mobile terminal and server
CN111491211A (en) Video processing method, video processing device and electronic equipment
CN107728877B (en) Application recommendation method and mobile terminal
CN108109186B (en) Video file processing method and device and mobile terminal
CN111221602A (en) Interface display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant