CN109168037B - Video playing method and device - Google Patents


Info

Publication number
CN109168037B
Authority
CN
China
Prior art keywords
video
target
time period
playing
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811190835.2A
Other languages
Chinese (zh)
Other versions
CN109168037A (en)
Inventor
邱志善
Current Assignee
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201811190835.2A
Publication of CN109168037A
Application granted
Publication of CN109168037B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video playing method and device. The method comprises the following steps: receiving a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, and the target object is an object appearing in the target video; acquiring an object identifier of the target object from the video processing request; and processing the video in a target time period in the process of playing the target video, wherein the target time period is the time period in which the target object represented by the object identifier appears in the target video. Processing the video in the target time period comprises: playing only the video in the target time period, or fast forwarding or skipping the video in the target time period. The invention solves the technical problem in the related art that manually fast forwarding or skipping a video segment results in low accuracy and efficiency.

Description

Video playing method and device
This application is a divisional application of the original application with application number 201710007572.6, filed on 05.01.2017, entitled "Video playing method and device".
Technical Field
The invention relates to the field of computers, in particular to a video playing method and device.
Background
During video playing, a user watching a video (such as a TV series or a movie) may wish to watch only the main plot, whether because of personal likes and dislikes of particular actors or because of limited time, and may manually fast forward or skip scenes featuring actors the user does not prefer or considers unimportant. However, manually fast forwarding or skipping a segment of video raises the following problems:
1. Because the user cannot predict the subsequent content of the video, the fast forward often goes too far or not far enough, and the user must repeatedly fast forward or rewind before the video is positioned at a suitable point to continue playing. This makes the accuracy and efficiency of fast forwarding or skipping low, and seriously affects the user's viewing experience.
2. On some TVs or TV boxes with low performance, a noticeable pause may occur during fast forwarding or skipping, which seriously affects the smoothness of video playback.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a video playing method and a video playing device, to at least solve the technical problem in the related art that manually fast forwarding or skipping a video segment results in low accuracy and efficiency.
According to an aspect of an embodiment of the present invention, there is provided a video playing method, including: receiving a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, and the target object is an object appearing in the target video; acquiring an object identifier of a target object from a video processing request; and processing videos in a target time period in the process of playing the target video, wherein the target time period is a time period in which a target object represented by the object identifier appears in the target video, and the processing of the videos in the target time period comprises the following steps: only the video over the target time period is played, or the video over the target time period is fast forwarded or skipped.
According to another aspect of the embodiments of the present invention, there is also provided a video playing apparatus, including: a first receiving unit, configured to receive a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, and the target object is an object appearing in the target video; a first obtaining unit, configured to obtain an object identifier of the target object from the video processing request; and a playing unit, configured to process the video in a target time period in the process of playing the target video, wherein the target time period is a time period in which the target object represented by the object identifier appears in the target video, and processing the video in the target time period comprises: playing only the video in the target time period, or fast forwarding or skipping the video in the target time period.
In the embodiments of the invention, a video processing request is received, wherein the video processing request is used for requesting processing of a target object appearing in a played target video. The object identifier of the target object is acquired from the video processing request, and the target time period in which the target object appears in the target video is then obtained. During playback of the target video, only the video in the target time period is played, or the video in the target time period is fast forwarded or skipped. This achieves the purpose of automatically fast forwarding or skipping a video segment during playback, solves the technical problem in the related art that manually fast forwarding or skipping a video segment results in low accuracy and efficiency, and thereby achieves the technical effect of improving both the accuracy and the efficiency of fast forwarding or skipping a video segment.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of a hardware environment of a video playback method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative video playback method according to an embodiment of the present invention;
fig. 3 is a flowchart of a video playing method according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of an alternative video playback device according to an embodiment of the present invention; and
fig. 13 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, a method embodiment of a video playing method is provided.
Alternatively, in the present embodiment, the video playing method described above may be applied to a hardware environment formed by the server 102 and the terminal 104 as shown in fig. 1. As shown in fig. 1, the server 102 is connected to the terminal 104 via a network including, but not limited to, a wide area network, a metropolitan area network, or a local area network; the terminal 104 is not limited to a PC, a mobile phone, a tablet computer, etc. The video playing method according to the embodiment of the present invention may be executed by the server 102, by the terminal 104, or by both the server 102 and the terminal 104. The terminal 104 may execute the video playing method through a client installed on it, and the following embodiments all take the client executing the video playing method as an example.
Fig. 2 is a flowchart of an alternative video playing method according to an embodiment of the present invention, and as shown in fig. 2, the method may include the following steps:
step S202, receiving a video processing request, wherein the video processing request is used for requesting to process a target object in a played target video, and the target object is an object appearing in the target video;
step S204, acquiring an object identifier of the target object from the video processing request;
step S206, processing the video in the target time period in the process of playing the target video, where the target time period is a time period in which the target object represented by the object identifier appears in the target video, and processing the video in the target time period includes: only the video over the target time period is played, or the video over the target time period is fast forwarded or skipped.
Through steps S202 to S206, the object identifier of the target object appearing in the target video is obtained from the received video processing request, and the target time period in which the target object appears in the target video is then obtained. During playback of the target video, only the video in the target time period is played, or the video in the target time period is fast forwarded or skipped. This achieves the purpose of automatically fast forwarding or skipping a video segment during playback, solves the technical problem that the manual mode of the related art makes the accuracy and efficiency of fast forwarding or skipping low, and thereby improves both the accuracy and the efficiency of fast forwarding or skipping a video segment.
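Steps S202 to S206 can be sketched as a minimal client-side flow (an illustrative sketch in Python; the names `VideoProcessingRequest` and `handle_processing_request` are hypothetical and not from the patent):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# A target time period: (start_seconds, end_seconds) in which the object appears.
Period = Tuple[float, float]

@dataclass
class VideoProcessingRequest:
    object_id: str   # object identifier of the target object (step S204)
    action: str      # "play_only", "fast_forward", or "skip" (step S206)

def handle_processing_request(request: VideoProcessingRequest,
                              index: Dict[str, List[Period]]) -> List[Period]:
    """Steps S202-S206: receive the request, extract the object identifier,
    and look up the target time periods in which that object appears."""
    return index.get(request.object_id, [])

# Usage: the object "actor_42" appears in two time periods of the target video.
index = {"actor_42": [(10.0, 25.0), (90.0, 120.0)]}
req = VideoProcessingRequest(object_id="actor_42", action="skip")
periods = handle_processing_request(req, index)
# periods == [(10.0, 25.0), (90.0, 120.0)]
```

The player then applies `action` to exactly those periods, rather than relying on the user's manual seeking.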
In the technical solution provided in step S202, the target video in the embodiment of the present invention may be any type of video, such as a television series or a movie. The target object appearing in the target video may be a character object (e.g., a leading role, a supporting role, etc.), an animal object (e.g., a dog, a parrot, etc.), or a thing object (e.g., a mountain, water, a tree, etc.); the embodiment of the present invention does not specifically limit the type of the target video or the type of the target object appearing in it. The video processing request may be used to request processing of a target object in a played target video. It should be noted that processing a target object in a target video may include, but is not limited to: fast forwarding the video segments of the target video that include the target object, skipping those video segments, or playing only those video segments.
It should be further noted that, the triggering manner of the video processing request is also not specifically limited in the embodiments of the present invention, for example, the video processing request may be generated by triggering a touch operation performed on a target object by a user in a client operation interface, where the touch operation performed on the target object by the user in the client operation interface may include, but is not limited to: clicks (including single click, double click, etc.), long presses, drags, gestures, etc. When the user performs the touch operation on the target object in the client operation interface, a video processing request can be triggered and generated.
It should be further noted that, in an actual application scenario, a user may perform a touch operation on a target object in a client operation interface before playing a target video to trigger generation of a video processing request, or the user may perform a touch operation on the target object in the client operation interface during playing of the target video to generate the video processing request, that is, the time when the client receives the video processing request may be before playing the target video or during playing of the target video.
In the technical solution provided in step S204, one or more target objects may appear in the target video, and the number of target objects is not specifically limited in the embodiment of the present invention. Each target object may have a unique object identifier, which uniquely identifies that target object; that is, the object identifiers of any two target objects are different. The type of the object identifier is also not specifically limited: it may be the name of the target object, or a character string composed of numbers or characters. It should be noted that the time period in which the target object appears in the target video may be the target time period, and each frame (or each key frame) of the video in the target time period may include the target object.
Optionally, after the target video is acquired, its information is analyzed to identify the objects to be processed appearing in it. A unique object identifier is allocated to each such object, the time periods during which it appears in the target video are recorded, and a correspondence between the object identifier and those time periods is established. The objects to be processed may include the target object, and the recorded time periods may include the target time period. Optionally, in the embodiment of the present invention, the information obtained from this preprocessing, including the objects to be processed, their object identifiers, the time periods in which they appear in the target video, and the correspondence between identifiers and time periods, may be stored so that it can subsequently be searched and used quickly. It should be noted that the preprocessing may be performed by the client, with the resulting information stored on the client, or by the server, with the information stored on the server, in which case the client may obtain the information through its communication connection with the server.
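The preprocessing described above, which establishes the correspondence between object identifiers and the time periods in which the objects appear, can be sketched as follows (a hypothetical sketch; it assumes per-frame detections of the form `(timestamp, object_id)` are already available from the video analysis, and `build_appearance_index` is an illustrative name):

```python
from typing import Dict, List, Tuple

Period = Tuple[float, float]

def build_appearance_index(detections: List[Tuple[float, str]],
                           gap: float = 1.0) -> Dict[str, List[Period]]:
    """Turn per-frame detections (timestamp, object_id) into the mapping
    object_id -> merged time periods in which that object appears.
    Detections closer together than `gap` seconds join one period."""
    index: Dict[str, List[Period]] = {}
    for ts, obj in sorted(detections):
        periods = index.setdefault(obj, [])
        if periods and ts - periods[-1][1] <= gap:
            periods[-1] = (periods[-1][0], ts)   # extend the current period
        else:
            periods.append((ts, ts))             # start a new period
    return index

# Object "a" is detected at 1.0 s, 1.5 s, 2.0 s, then again at 10.0 s.
index = build_appearance_index([(1.0, "a"), (1.5, "a"), (2.0, "a"), (10.0, "a")])
# index["a"] == [(1.0, 2.0), (10.0, 10.0)]
```

Storing this index on the server, as the embodiment prefers, keeps the lookup in step S206 a simple dictionary query.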
In order to reduce the occupation of the storage space of the client and optimize the performance of the client, the embodiment of the present invention preferably uses the server to execute the preprocessing process, and stores the information obtained by the preprocessing process in the server.
Based on the above preprocessing process, after receiving the video processing request, the client may parse the received video processing request to determine the object identifier of the target object appearing in the target video, and then the client may request the target time period in which the target object appears in the target video from the server. After finding the target time period corresponding to the object identifier of the target object, the server can feed back the target time period to the client.
In the technical solution provided in step S206, after acquiring the target time period corresponding to the object identifier of the target object, the client may process the video in the target time period while playing the target video. The processing may include: playing only the video in the target time period, or fast forwarding or skipping the video in the target time period. It should be noted here that playing only the video in the target time period may mean that the client starts playing at the start time of the target time period and stops at its end time. Skipping the video in the target time period may mean that, after playing up to the last moment before the start time of the target time period, the client jumps to the first moment after its end time. Fast forwarding the video in the target time period may mean that the client plays only the video frames at one or more moments within the target time period, where those frames may be key frames of the video in that period.
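The skip behavior described above can be expressed as a small playback-position function (an illustrative sketch; `next_play_position` is a hypothetical name, not from the patent):

```python
from typing import List, Tuple

Period = Tuple[float, float]

def next_play_position(t: float, skip_periods: List[Period]) -> float:
    """Skip processing: if the playback position t falls inside a target
    time period, jump to that period's end time, so playback resumes at
    the next moment after the period."""
    for start, end in skip_periods:
        if start <= t < end:
            return end
    return t

# The target object appears in [10 s, 25 s]; playback at 15 s jumps to 25 s,
# while playback outside the period is unaffected.
assert next_play_position(15.0, [(10.0, 25.0)]) == 25.0
assert next_play_position(5.0, [(10.0, 25.0)]) == 5.0
```

Play-only mode is the mirror image: the player seeks to each period's start time and stops at its end time, and fast-forward mode renders only key frames inside the period.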
It should be noted that the client may either cache the target video offline or load the target video online in real time. These two playing modes correspond to the two optional embodiments below for processing the video in the target time period during playback of the target video, specifically:
as an alternative embodiment, in the case that the client plays the target video in a manner of caching the target video offline, the processing, in step S206, of the video in the target time period during playing the target video may include the following steps:
step S2061, sending the object identification to a server;
step S2063, receiving a time identifier returned by the server, wherein the time identifier is used for indicating a target time period;
in step S2065, only the video in the target time slot indicated by the time indicator is played, or the video in the target time slot indicated by the time indicator is fast forwarded or skipped.
It should be noted that, when the client caches the target video offline, the client may load the target video from the server in advance and then send the acquired object identifier of the target object to the server. The server may search for the target time period corresponding to that object identifier according to the correspondence between object identifiers and the time periods in which the objects appear in the target video, and return the search result to the client in the form of a time identifier, where the time identifier indicates the target time period corresponding to the object identifier. After receiving the time identifier, the client may mark the target time period of the locally cached target video and, when playback reaches it, fast forward or skip the video in the target time period, or play only the video in the target time period.
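Steps S2061 to S2065 can be sketched as a client-server exchange (an illustrative sketch; `StubServer` stands in for the real server's lookup, and all names are hypothetical):

```python
from typing import Dict, List, Tuple

Period = Tuple[float, float]

class StubServer:
    """Stands in for the server's lookup of object identifier -> target
    time periods (the 'time identifier' returned in step S2063)."""
    def __init__(self, index: Dict[str, List[Period]]):
        self.index = index

    def lookup(self, object_id: str) -> List[Period]:
        return self.index.get(object_id, [])

class OfflineClient:
    """Client that has cached the target video offline in advance."""
    def __init__(self, server: StubServer):
        self.server = server
        self.marked: List[Period] = []

    def request_time_identifier(self, object_id: str) -> None:
        # Steps S2061/S2063: send the object identifier, receive the
        # periods, and mark them on the locally cached target video.
        self.marked = self.server.lookup(object_id)

    def tick(self, t: float) -> float:
        # Step S2065: when playback reaches a marked period, skip past it.
        for start, end in self.marked:
            if start <= t < end:
                return end
        return t

server = StubServer({"actor_42": [(10.0, 25.0)]})
client = OfflineClient(server)
client.request_time_identifier("actor_42")
# client.tick(12.0) == 25.0, client.tick(5.0) == 5.0
```

Because the whole video is already local, the skip is a pure seek within the cache, which is what keeps the processing delay low in this mode.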
In this embodiment, the target time period of the target video cached in advance is marked, and the video in the target time period is processed when playback reaches it: only the video in the target time period is played, or the video in the target time period is fast forwarded or skipped. This reduces the client's processing delay for the video in the target time period and thereby improves the processing efficiency.
As an optional embodiment, in the case that the client plays the target video in a manner of loading the target video online, the processing, in step S206, of the video in the target time period in the process of playing the target video may include the following steps:
step S2062, sending the object identification to a server;
step S2064, receiving videos except for the video in the target time period in the target video returned by the server;
in step S2066, a video other than the video in the target time period in the target video is played.
It should be noted that, when the client loads the target video online in real time, the client may first send the acquired object identifier of the target object to the server. The server may search for the target time period corresponding to that object identifier according to the pre-established correspondence between object identifiers and the time periods in which the objects appear in the target video, and return to the client the videos in the target video other than the video in the target time period. The video in the target time period is the video to be fast forwarded or skipped, and the remaining videos are the ones to be played in the client. After receiving them, the client plays the videos in the target video other than the video in the target time period. It should be noted here that, if the video processing request requests skipping the target object, the server may return only the videos outside the target time period, so that the client plays them and skips the target time period entirely; if the video processing request requests fast forwarding the target object, the server may additionally return one or more key frames from the video in the target time period, so that the client plays the videos outside the target time period together with those key frames.
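The server-side computation implied by steps S2064 and S2066, returning the segments of the target video outside the target time period, can be sketched as (a hypothetical sketch; `complement_periods` is an illustrative name):

```python
from typing import List, Tuple

Period = Tuple[float, float]

def complement_periods(duration: float, target: List[Period]) -> List[Period]:
    """Compute the segments of the target video OUTSIDE the target time
    periods, i.e. the video actually sent to and played by the client
    when the target object is skipped (steps S2064/S2066)."""
    segments: List[Period] = []
    cursor = 0.0
    for start, end in sorted(target):
        if start > cursor:
            segments.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        segments.append((cursor, duration))
    return segments

# A 100-second video where the target object appears in [10, 25] and [60, 70]:
# the client receives and plays [0, 10], [25, 60], and [70, 100].
segments = complement_periods(100.0, [(10.0, 25.0), (60.0, 70.0)])
# segments == [(0.0, 10.0), (25.0, 60.0), (70.0, 100.0)]
```

For fast-forward rather than skip, the server would append the key frames of each target period to this segment list before streaming.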
The purpose of fast forwarding or skipping the video in the target time segment in the process of playing the target video can be realized through the steps.
Optionally, in the case that the client plays the target video in a manner of loading the target video online, the processing, in step S206, of the video in the target time period in the process of playing the target video may further include the following steps:
step S2062', sending the object identification to a server;
step S2064', receiving a video in the target time period in the target video returned by the server;
in step S2066', the video in the target time period in the target video is played.
It should be noted that, when the client loads the target video online in real time, the client may first send the acquired object identifier of the target object to the server. The server may search for the target time period corresponding to that object identifier according to the pre-established correspondence between object identifiers and the time periods in which the objects appear in the target video, and return the video in the target time period to the client; that video is the one to be played in the client. After receiving it, the client plays the video in the target time period. Through these steps, only the video in the target time period is played during playback of the target video.
The embodiment loads the videos of the target video except the videos in the target time period from the server, and plays the videos of the target video except the videos in the target time period in the client so as to achieve the purpose of fast forwarding or skipping the videos in the target time period in the process of playing the target video. Or, the video in the target time period in the target video is loaded from the server, and the video in the target time period in the target video is played in the client, so that the purpose that only the video in the target time period is played in the process of playing the target video is achieved. The embodiment can achieve the effects of reducing the storage space occupied by the whole target video cached by the client and further improving the performance of the client system.
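The server-side selection described above can be sketched as follows. This is a minimal illustration only; the function names, the dictionary response shape, and the key-frame representation are assumptions of this sketch, not taken from the patent.

```python
# Given the full list of (start, end) target periods in which the target
# object appears, split the video timeline into playable segments (outside
# the target periods). For a fast-forward request, additionally collect the
# key-frame timestamps that fall inside the target periods.

def split_timeline(duration, target_periods):
    """Return the periods of the video outside the sorted target periods."""
    play, cursor = [], 0.0
    for start, end in sorted(target_periods):
        if start > cursor:
            play.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        play.append((cursor, duration))
    return play

def build_response(duration, target_periods, mode, keyframes=()):
    """mode='skip' returns only playable periods; mode='fast_forward'
    additionally returns key frames inside the target periods."""
    resp = {"play": split_timeline(duration, target_periods)}
    if mode == "fast_forward":
        resp["key_frames"] = [t for t in keyframes
                              if any(s <= t < e for s, e in target_periods)]
    return resp
```

For a 100-second video with one target period (10, 20), a skip request yields the playable periods (0, 10) and (20, 100), while a fast-forward request also carries back any key frames between seconds 10 and 20.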
As an alternative embodiment, in the process of playing the target video, the embodiment may further include the following steps:
step S208, receiving a control instruction, wherein the control instruction is used for instructing to resume playing of the video in at least one time period of the target time period.
In the technical solution provided in step S208, the target time period in which the target object appears in the target video may include at least one time period, that is, the target object may appear in at least one time period, and the time periods all belong to the target time period corresponding to the object identifier of the target object. In the process of playing the target video, the client may detect whether there is a control instruction in real time, where the control instruction may be used to instruct to resume playing of the video over at least one of the target time periods. It should be noted here that the control instruction may be generated by being triggered by a touch operation performed by a user in the client operation interface, where the touch operation performed by the user in the client operation interface may include, but is not limited to: click, long press, drag, slide, gesture, etc. When the client detects that the user executes any one of the touch operations on the client operation interface, the client can trigger generation of a control instruction.
And step S210, responding to the control instruction, and resuming playing of the video in at least one time period in the target time period.
In the technical solution provided in step S210, after receiving the control instruction, the client may respond to the control instruction, and resume playing the video in at least one time period of the target time period. It should be noted here that the video in at least one of the target time periods may be a video in at least one of the target time periods that has already been played, or the video in at least one of the target time periods may also be a video in at least one of the target time periods that has not been played.
As an alternative embodiment, the resuming playing of the video in at least one of the target time periods in step S210 may include:
step S2102 of resuming playing of the video in one or more time slots of the target time slots that have already been played; and/or
In step S2104, the video in one or more time periods of the target time periods that have not been played back is resumed.
It should be noted that the control instruction in this embodiment may be used to instruct resuming playback of the video in one or more time periods of the target time period that have already been played, or of the video in one or more time periods that have not been played; after receiving the control instruction, the client may perform the corresponding resume-playing operation according to what the instruction indicates. It should be further noted that resuming playback of the already-played time periods may mean resuming any one or more of them, or resuming the time period that was played last among them.
Similarly, resuming playback of the not-yet-played time periods may mean resuming any one or more of them, or resuming the earliest of the not-yet-played time periods in the target time period.
The embodiment can resume playing the video in at least one time period in the target time period (including the video in one or more time periods in the target time period which has already been played and/or the video in one or more time periods in the target time period which has not been played), so that the aim of meeting different actual requirements of the user can be fulfilled, the playing control of the video in the target time period is more flexible, and the use experience of the user is greatly improved.
As an alternative embodiment, the step S208 of receiving the control instruction may include: step S2082, receiving a control instruction at the current time. Accordingly, the resuming playing of the video in one or more time periods of the already played target time periods in step S2102 may include: step S21022, resuming playing of the video in the time period last played before the current time.
It should be noted that, when the control instruction is received at the current time, it may be used to instruct resuming playback of the time period that, among the already played target time periods, was played last before the current time. This meets a practical need: if, while watching, the user finds that skipping a certain segment caused the loss of an important part of the video content, and that loss affects the user's understanding of the subsequent video, the user can resume playing the time period last played before the current moment to recover the missing content, understand the video better, and thereby enjoy an improved viewing experience.
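The resume behaviour just described can be sketched as follows. The class and method names are illustrative assumptions, not part of the patent text.

```python
# A player keeps a history of skipped target periods; on a "resume" control
# instruction it replays the period that was skipped most recently before
# the current playback position.

class SkipHistory:
    def __init__(self):
        self._skipped = []  # (skip_time, (start, end)) in playback order

    def record(self, skip_time, period):
        """Record that `period` was skipped when playback reached skip_time."""
        self._skipped.append((skip_time, period))

    def last_before(self, current_time):
        """Period of the most recent skip at or before current_time, or None."""
        candidates = [(t, p) for t, p in self._skipped if t <= current_time]
        return max(candidates)[1] if candidates else None
```

When the user presses the resume control at playback position 55 after skips at positions 12 and 52, `last_before(55)` returns the period skipped at 52, matching the "last played before the current time" behaviour above.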
As an optional embodiment, before receiving the video processing request, the embodiment may further perform preprocessing on the target video, where it is to be noted that, in order to ensure the system performance of the client and reduce the storage space of the client, the preprocessing on the target video may be performed by the server, where the preprocessing process may specifically include the following steps S2012 to S2016:
in step S2012, information of the target video is acquired.
In the technical solution provided in step S2012, the information of the target video may include, but is not limited to: the ID of the target video, the number of video frames in the target video, the data information of each frame of video image in the target video, the timestamp of the video frame in the target video, and the like. It should be noted that the information of the target video may also include other information, which is not illustrated herein.
Step S2014, obtaining, according to the information of the target video, an object identifier of the object to be processed in the target video and a time period during which the object to be processed appears in the target video, where the object to be processed includes the target object, and the time period during which the object to be processed appears in the target video includes the target time period.
In the technical solution provided in step S2014, after the server acquires the information of the target video, the server may determine the object to be processed in the target video according to the information, where it should be noted that the object to be processed may be a human object, an animal object, or an object, the object to be processed includes the target object, and the number of the objects to be processed may be one or multiple. After determining the object to be processed, the server may obtain, according to the information of the target video, an object identifier of the object to be processed and a time period during which the object to be processed appears in the target video, where the time period during which the object to be processed appears in the target video may include a target time period corresponding to the object identifier of the target object.
Optionally, the step S2014 of acquiring the object identifier of the object to be processed in the target video and the time period during which the object to be processed appears in the target video according to the information of the target video may include the following steps S20142 to S20144:
step S20142, performing image recognition on the object to be processed appearing in the target video, acquiring characteristic data of the object to be processed, and recording the time period of the object to be processed appearing in the target video;
step S20144, acquiring an object identifier of the object to be processed corresponding to the feature data of the object to be processed from a preset database, where a corresponding relationship between the feature data of the object to be processed and the object identifier of the object to be processed is pre-stored in the preset database.
It should be noted that, in the embodiment, image recognition may be performed on each frame of video image in the target video, and whether the video image includes the object to be processed is determined, and if the frame of video image includes the object to be processed, the embodiment may first record a timestamp of the video frame, and acquire feature data of the object to be processed by using an image recognition technology. It should be noted that, in the embodiment of the present invention, an algorithm used for image recognition is not specifically limited, and all algorithms that can implement image recognition on an object to be processed to obtain feature data of the object to be processed belong to the protection scope of the present invention. It should be further noted that, by performing image recognition on each frame of video image in the target video, feature data of one or more objects to be processed and a time period during which the objects to be processed appear in the target video may be obtained. After the feature data of the object to be processed is obtained, the embodiment may match the obtained feature data with the feature data stored in the preset database, and if the feature data matched with the obtained feature data is stored in the preset database, the object identifier corresponding to the feature data in the preset database may be determined as the object identifier of the object to be processed according to the correspondence stored in the preset database, where the correspondence between the feature data of the object to be processed and the object identifier of the object to be processed may be stored in the preset database in advance. So far, the object identification of the object to be processed and the time period of the object to be processed appearing in the target video are all obtained.
According to the image recognition technology and the corresponding relation between the feature data of the object to be processed and the object identification of the object to be processed, which are stored in the preset database, the object identification of the object to be processed and the time period of the object to be processed appearing in the target video are obtained, and the method and the device are simple and convenient and high in accuracy.
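A minimal, hypothetical sketch of the recognition-and-lookup step in S20142-S20144 follows: features extracted per frame are matched against a preset database that maps stored feature vectors to object identifiers. The feature extractor, the similarity measure, and the threshold are placeholders — the patent deliberately leaves the recognition algorithm open, so any face/object recognition method could fill that role.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookup_object_id(features, preset_db, threshold=0.9):
    """preset_db: list of (stored_features, object_id) pairs.
    Returns the identifier whose stored features best match `features`,
    or None when no stored entry reaches the similarity threshold."""
    best_id, best_sim = None, threshold
    for stored, object_id in preset_db:
        sim = cosine(features, stored)
        if sim >= best_sim:
            best_id, best_sim = object_id, sim
    return best_id
```

A query whose features match no database entry returns `None`, which corresponds to a frame containing no known object to be processed.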
Step S2016, establish a correspondence between the object identifier of the object to be processed and the time period during which the object to be processed appears in the target video.
In the technical solution provided in step S2016, after the server obtains the object identifier of the object to be processed and the time period during which the object to be processed appears in the target video, a corresponding relationship between the object identifier of the object to be processed and the time period during which the object to be processed appears in the target video may be established, and the corresponding relationship is stored in the server, so that the target time period corresponding to the object identifier of the target object may be quickly and accurately obtained by using the corresponding relationship in the following process, thereby achieving an effect of improving the processing efficiency of the video in the target time period.
According to the embodiment, the object identification of the object to be processed in the target video and the time period of the object to be processed appearing in the target video are obtained in advance, and the corresponding relation between the object identification of the object to be processed and the time period of the object to be processed appearing in the target video is established, so that the target time period corresponding to the object identification of the target object can be conveniently and accurately obtained by using the corresponding relation, and further the efficiency and the accuracy of fast forwarding or skipping processing of the video on the target time period in the video playing process are improved.
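The correspondence built in steps S2012-S2016 can be sketched as a simple lookup table, so that finding the target time period for an object identifier is a single dictionary access. The data shapes here are assumptions made for illustration.

```python
from collections import defaultdict

def build_correspondence(detections):
    """detections: iterable of (object_id, (start, end)) pairs produced by
    the preprocessing step. Returns a map object_id -> sorted periods."""
    table = defaultdict(list)
    for object_id, period in detections:
        table[object_id].append(period)
    return {oid: sorted(periods) for oid, periods in table.items()}

def target_periods(table, object_id):
    """Look up the time periods in which the identified object appears."""
    return table.get(object_id, [])
```

Because the table is built once when the video is preprocessed, each later video processing request resolves its target time period without re-scanning the video, which is the efficiency gain the embodiment describes.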
The present invention also provides a preferred embodiment, which provides a user-defined fast forward/skip specific actor participation segment scheme based on facial recognition technology. Before describing this scheme in detail, the terms appearing in the preferred embodiment are first explained as follows:
album: a single movie or a series, called an album.
Single video: the smallest unit of play; an album contains one or more single videos. For example, an album of the movie type contains a single video (the feature film), while an album of the series type contains multiple single videos, each corresponding to one episode.
Front end: the front end here refers to the video APP installed on the TV.
Back end: refers to the remote services and interfaces deployed on a remote server.
This solution will be described in detail below from the product side and the technical side, respectively:
the expression form and the operation steps of the scheme at the product side are as follows:
when the TV end plays the content of an album, the user can press the menu key on the remote control, and a "filter actors" entry button is provided on the pop-up menu panel. The user moves the focus to the button and presses the enter key; a list of the actors starring in the TV series pops up and automatically receives the focus.
The user moves the focus with the direction keys, selects the actors he or she does not want to watch, and presses the OK key to confirm the selection; the selection effect is presented at the relevant position of the interface. The list supports multiple selection, and a selected actor can be deselected by pressing the OK key again.
The user clicks the "confirm" button of the actor list, and the selection is saved and takes effect. The configuration is bound to the user's unique ID and stored in the backend, so that the configuration data is still synchronized when the user logs in from another client.
When content of the same album is played, according to the user's filtered-actor list, the segments featuring those actors are marked with a special color on the progress bar; when playback reaches the corresponding position, those segments are skipped and the subsequent content is played directly, or the related segments are fast-forwarded.
The user can return to the last skipped part by pressing the up direction key, to review previously skipped content that may contain important plot points.
The execution flow of the scheme at the technical side can be as shown in fig. 3, and is specifically described as follows:
step S301, clicking to play a single video.
Step S302, an actor ID list under a specific cid is loaded from the background, where the list includes IDs of actors appearing in the video.
Step S303, determining whether the user has configured a filtered actor ID list, where the filtered actor ID list configured by the user includes the IDs of actors that the user does not want to watch. If the user has configured a filtered actor ID list, step S304 is performed; otherwise, step S306 is performed.
In step S304, a time period corresponding to the ID in the filtered actor ID list is obtained, where the time period is a time period in which the actor corresponding to the ID in the filtered actor ID list appears.
Step S305, fast forward or skip when the video is played to the above time period.
Step S306, the user is prompted to configure a filtered actor ID list.
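The decision made in steps S302-S306 can be sketched as a single pure function. The function name and data shapes are assumptions of this sketch; the backend interfaces that supply the inputs are described above.

```python
def periods_to_filter(video_actor_ids, filter_ids, period_index):
    """video_actor_ids: IDs of actors appearing in this single video (S302).
    filter_ids: the user's filtered actor ID list (S303), possibly empty.
    period_index: dict actor_id -> list of (start, end) periods (S304).
    Returns the sorted periods to fast-forward or skip (S305), or None when
    the user has not configured a filter list, signalling that the client
    should prompt for configuration (S306)."""
    if not filter_ids:
        return None
    periods = []
    for actor_id in filter_ids:
        if actor_id in video_actor_ids:
            periods.extend(period_index.get(actor_id, []))
    return sorted(periods)
```

Filtered actor IDs that do not appear in the current single video contribute no periods, so per-episode differences in the cast are handled automatically.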
The following should be noted regarding the execution flow shown in the above steps: when the backend receives a single video uploaded by an editor, the backend system can collect the face images appearing in the video for recognition according to the video content, and store the face feature data and the corresponding time periods of appearance as additional attributes of the single video. When the backend receives star data edited by an editor, the system performs face recognition on the uploaded photos and stores the star's face feature data as extra star data. The actor ID can then be obtained by matching the face feature data recognized in the video against the face feature data stored in the star resource, and the time periods corresponding to that actor are determined from the analysis result of the single video.
With this scheme, a user can select, from the cast list, the actors he or she does not want to watch; the parts of the content in which those actors perform are identified through face recognition technology, and when playback reaches those parts they are automatically fast-forwarded or skipped (depending on the mode the user selected). The user can also pause the fast-forward or play back the skipped content at any time via the remote control. The scheme solves the problem that the related art cannot accurately skip or fast-forward the segments in which one or more particular actors appear; it can judge and maintain the content automatically without editorial involvement, reducing editors' workload.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to the embodiment of the invention, the video playing device for implementing the video playing method is also provided. Fig. 4 is a schematic diagram of an alternative video playing apparatus according to an embodiment of the present invention, and as shown in fig. 4, the apparatus may include:
a first receiving unit 22, configured to receive a video processing request, where the video processing request is used to request processing of a target object in a played target video, and the target object is an object appearing in the target video; a first obtaining unit 24, configured to obtain an object identifier of the target object from the video processing request; and a playing unit 26, configured to process a video in a target time period during playing of a target video, where the target time period is a time period in which a target object represented by an object identifier appears in the target video, and the processing of the video in the target time period includes: and only playing the video in the target time period, or fast forwarding or skipping the video in the target time period.
It should be noted that the first receiving unit 22 in this embodiment may be configured to execute step S202 in embodiment 1 of this application, the first obtaining unit 24 in this embodiment may be configured to execute step S204 in embodiment 1 of this application, and the playing unit 26 in this embodiment may be configured to execute step S206 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 5, the playing unit 26 may include: a first sending module 261, configured to send the object identifier to the server; a first receiving module 263, configured to receive a time identifier returned by the server, where the time identifier is used to indicate a target time period; the first playing module 265 is configured to play only the video in the target time period indicated by the time identifier during playing the target video, or fast forward or skip the video in the target time period indicated by the time identifier.
It should be noted that the first sending module 261 in this embodiment may be configured to execute step S2061 in embodiment 1 of this application, the first receiving module 263 in this embodiment may be configured to execute step S2063 in embodiment 1 of this application, and the first playing module 265 in this embodiment may be configured to execute step S2065 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 6, the playing unit 26 may include: a second sending module 262, configured to send the object identifier to the server; the second receiving module 264 is used for receiving videos, except for the videos in the target time period, in the target videos returned by the server; and a second playing module 266, configured to play videos in the target video except for the video in the target time period.
It should be noted that the second sending module 262 in this embodiment may be configured to execute step S2062 in embodiment 1 of this application, the second receiving module 264 in this embodiment may be configured to execute step S2064 in embodiment 1 of this application, and the second playing module 266 in this embodiment may be configured to execute step S2066 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 7, the playing unit 26 may include: a third sending module 262' configured to send the object identifier to a server; a third receiving module 264' for receiving the video in the target time period in the target video returned by the server; a third playing module 266' is configured to play the video in the target video over the target time period.
It should be noted that the third sending module 262 'in this embodiment may be configured to execute the step S2062' in embodiment 1 of this application, the third receiving module 264 'in this embodiment may be configured to execute the step S2064' in embodiment 1 of this application, and the third playing module 266 'in this embodiment may be configured to execute the step S2066' in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 8, the embodiment may further include: a second receiving unit 28, configured to receive a control instruction in a process of playing the target video, where the control instruction is used to instruct to resume playing of the video in at least one of the target time periods; and a resume playing unit 210, configured to resume playing the video in at least one of the target time periods in response to the control instruction.
It should be noted that the second receiving unit 28 in this embodiment may be configured to execute step S208 in embodiment 1 of this application, and the resume playing unit 210 in this embodiment may be configured to execute step S210 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 9, the resume play unit 210 may include: a fourth playing module 2102, configured to resume playing of the video over one or more time periods of the target time periods that have already been played; and/or a fifth playback module 2104 for resuming playback of the video over one or more of the target time periods that have not been played back.
It should be noted that the fourth playing module 2102 in this embodiment may be configured to execute step S2102 in embodiment 1 of the present application, and the fifth playing module 2104 in this embodiment may be configured to execute step S2104 in embodiment 1 of the present application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 10, the second receiving unit 28 may include: a fourth receiving module 282, configured to receive a control instruction at the current time; the fourth play module 2102 may include: a resume play sub-module 21022, configured to resume playing the video in the time period last played before the current time.
It should be noted that the fourth receiving module 282 in this embodiment may be configured to execute the step S2082 in embodiment 1 of this application, and the resume play sub-module 21022 in this embodiment may be configured to execute the step S21022 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 11, the embodiment may further include: a second acquisition unit 212 for acquiring information of the target video before receiving the video processing request; a third obtaining unit 214, configured to obtain, according to the information of the target video, an object identifier of an object to be processed in the target video and a time period during which the object to be processed appears in the target video, where the object to be processed includes the target object and the time period during which the object to be processed appears in the target video includes the target time period; the establishing unit 216 is configured to establish a correspondence between the object identifier of the object to be processed and a time period in which the object to be processed appears in the target video.
It should be noted that the second obtaining unit 212 in this embodiment may be configured to execute step S2012 in embodiment 1 of this application, the third obtaining unit 214 in this embodiment may be configured to execute step S2014 in embodiment 1 of this application, and the establishing unit 216 in this embodiment may be configured to execute step S2016 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
As an alternative embodiment, as shown in fig. 12, the third obtaining unit 214 may include: the first obtaining submodule 2142 is configured to perform image recognition on an object to be processed appearing in the target video, obtain feature data of the object to be processed, and record a time period during which the object to be processed appears in the target video; the second obtaining submodule 2144 is configured to obtain, from the preset database, an object identifier of the object to be processed corresponding to the feature data of the object to be processed, where a correspondence between the feature data of the object to be processed and the object identifier of the object to be processed is stored in the preset database in advance.
It should be noted that the first obtaining sub-module 2142 in this embodiment may be configured to execute step S20142 in embodiment 1 of the present application, and the second obtaining sub-module 2144 in this embodiment may be configured to execute step S20144 in embodiment 1 of the present application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the above modules, a video segment can be automatically fast forwarded or skipped during video playing. This solves the technical problem in the related art that manually fast forwarding or skipping a video segment is inaccurate and inefficient, thereby achieving the technical effect of improving both the accuracy and the efficiency of fast forwarding or skipping a video segment.
Example 3
An embodiment of the present invention further provides a terminal for implementing the above video playing method.
Fig. 13 is a block diagram of a terminal according to an embodiment of the present invention. As shown in fig. 13, the terminal may include: one or more processors 201 (only one is shown), a memory 203, and a transmission device 205; as also shown in fig. 13, the terminal may further include an input/output device 207.
The memory 203 may be used to store software programs and modules, such as program instructions/modules corresponding to the video playing method and apparatus in the embodiments of the present invention, and the processor 201 executes various functional applications and data processing by running the software programs and modules stored in the memory 203, that is, implements the video playing method described above. The memory 203 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 203 may further include memory located remotely from the processor 201, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 205 is used for receiving or transmitting data via a network, which may be wired or wireless. In one example, the transmission device 205 includes a Network Interface Controller (NIC), which can be connected to a router and other network devices via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 205 is a Radio Frequency (RF) module, which communicates with the Internet wirelessly.
The memory 203 is specifically used to store application programs.
The processor 201 may call an application stored in the memory 203 to perform the following steps: receiving a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, and the target object is an object appearing in the target video; acquiring an object identifier of a target object from a video processing request; and processing videos in a target time period in the process of playing the target video, wherein the target time period is a time period in which a target object represented by the object identifier appears in the target video, and the processing of the videos in the target time period comprises the following steps: only the video over the target time period is played, or the video over the target time period is fast forwarded or skipped.
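For illustration only (not part of the claimed method, and all names are hypothetical), the three processing modes described above — playing only the video in the target time period, fast forwarding it, or skipping it — can be sketched as computing which intervals of the video timeline to render:

```python
# Illustrative sketch of the three processing modes; all names are
# hypothetical and not taken from the patent.

def process_target_period(duration, period, mode, keyframes=()):
    """Return the list of (start, end) intervals to render.

    duration  -- total length of the target video, in seconds
    period    -- (start, end) target time period where the object appears
    mode      -- 'play_only', 'fast_forward', or 'skip'
    keyframes -- timestamps of key frames, used only by 'fast_forward'
    """
    start, end = period
    if mode == 'play_only':
        # Play from the starting time to the ending time of the period.
        return [(start, end)]
    if mode == 'skip':
        # Jump from just before the period to just after it.
        return [(0, start), (end, duration)]
    if mode == 'fast_forward':
        # Render only key frames that fall inside the period, plus the
        # normal playback on either side of it.
        frames = [(t, t) for t in keyframes if start <= t <= end]
        return [(0, start)] + frames + [(end, duration)]
    raise ValueError(mode)
```

For example, `process_target_period(100, (20, 30), 'skip')` yields `[(0, 20), (30, 100)]`: playback jumps from the last moment before the period to the next moment after it.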
The processor 201 is further configured to perform the following steps: sending the object identification to a server; receiving a time identifier returned by the server, wherein the time identifier is used for indicating a target time period; and only playing the video in the target time period indicated by the time identifier during the playing of the target video, or fast forwarding or skipping the video in the target time period indicated by the time identifier.
The processor 201 is further configured to perform the following steps: sending the object identification to a server; receiving videos on a target time period in the target videos returned by the server; playing videos in the target video in the target time period; or, the object identifier is sent to the server; receiving videos except for the videos in the target time period in the target videos returned by the server; and playing videos in the target video except the videos in the target time period.
The processor 201 is further configured to perform the following steps: receiving a control instruction in the process of playing the target video, wherein the control instruction is used for indicating that the video in at least one time period in the target time period is resumed to be played; and responding to the control instruction, and resuming playing of the video in at least one time period in the target time period.
The processor 201 is further configured to perform the following steps: resuming playing the video on one or more time periods in the played target time period; and/or resuming playing of the video over one or more of the target time periods that have not been played.
The processor 201 is further configured to perform the following steps: receiving a control instruction at the current moment; and resuming playing the video in the time period which is played last before the current time.
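The "resume the most recently played time period" step above can be sketched as follows (a minimal illustration with hypothetical names, assuming the processed periods are tracked as (start, end) pairs): given the periods already fast forwarded or skipped and the current playback time, select the latest one that ended before the current moment.

```python
def last_processed_period(processed_periods, current_time):
    """Return the most recent (start, end) period whose end precedes
    current_time, or None if no such period exists.

    processed_periods -- (start, end) pairs already fast-forwarded/skipped
    """
    earlier = [p for p in processed_periods if p[1] <= current_time]
    return max(earlier, key=lambda p: p[1]) if earlier else None
```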
The processor 201 is further configured to perform the following steps: before receiving a video processing request, acquiring information of a target video; acquiring an object identifier of an object to be processed in a target video and a time period of the object to be processed appearing in the target video according to the information of the target video, wherein the object to be processed comprises the target object, and the time period of the object to be processed appearing in the target video comprises the target time period; and establishing a corresponding relation between the object identification of the object to be processed and the time period of the object to be processed appearing in the target video.
The processor 201 is further configured to perform the following steps: performing image recognition on an object to be processed appearing in the target video, acquiring characteristic data of the object to be processed, and recording a time period of the object to be processed appearing in the target video; and acquiring an object identifier of the object to be processed corresponding to the characteristic data of the object to be processed from a preset database, wherein the preset database stores the corresponding relation between the characteristic data of the object to be processed and the object identifier of the object to be processed in advance.
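The lookup step above — matching recognized feature data against the preset database to find the object identifier — can be sketched as a nearest-neighbor search over stored feature vectors. The patent does not specify a matching metric, so the Euclidean distance and threshold below are assumptions, and all names are illustrative:

```python
import math

def lookup_object_id(features, preset_db, threshold=0.5):
    """Return the object identifier whose stored feature vector is
    closest (Euclidean distance) to `features`, or None if no stored
    vector is within `threshold`.

    preset_db -- {object_id: feature_vector} correspondence stored in advance
    """
    best_id, best_dist = None, threshold
    for object_id, stored in preset_db.items():
        dist = math.dist(features, stored)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = object_id, dist
    return best_id
```

In practice a production system would use a proper face/object-embedding model and an approximate-nearest-neighbor index, but the correspondence lookup has this shape.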
An embodiment of the present invention provides a video playing scheme. The object identifier of a target object appearing in a target video is obtained from a received video processing request, and the target time period during which the target object appears in the target video is then obtained; during playing of the target video, only the video in the target time period is played, or the video in the target time period is fast forwarded or skipped. This achieves the purpose of automatically fast forwarding or skipping a video segment during video playing, solves the technical problem in the related art that manually fast forwarding or skipping a video segment is inaccurate and inefficient, and thereby improves both the accuracy and the efficiency of fast forwarding or skipping a video segment.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 13 is only illustrative, and that the terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 13 does not limit the structure of the electronic device; for example, the terminal may include more or fewer components (e.g., a network interface or a display device) than shown in fig. 13, or have a different configuration from that shown in fig. 13.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Example 4
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing the video playing method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1, receiving a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, and the target object is an object appearing in the target video;
S2, acquiring an object identifier of the target object from the video processing request;
S3, processing a video in a target time period during playing of the target video, where the target time period is a time period in which the target object represented by the object identifier appears in the target video, and processing the video in the target time period includes: only the video over the target time period is played, or the video over the target time period is fast forwarded or skipped.
Optionally, the storage medium is further arranged to store program code for performing the steps of: sending the object identification to a server; receiving a time identifier returned by the server, wherein the time identifier is used for indicating a target time period; and only playing the video in the target time period indicated by the time identifier during the playing of the target video, or fast forwarding or skipping the video in the target time period indicated by the time identifier.
Optionally, the storage medium is further arranged to store program code for performing the steps of: sending the object identification to a server; receiving videos on a target time period in the target videos returned by the server; playing videos in the target video in the target time period; or, the object identifier is sent to the server; receiving videos except for the videos in the target time period in the target videos returned by the server; and playing videos in the target video except the videos in the target time period.
Optionally, the storage medium is further arranged to store program code for performing the steps of: receiving a control instruction in the process of playing the target video, wherein the control instruction is used for indicating that the video in at least one time period in the target time period is resumed to be played; and responding to the control instruction, and resuming playing of the video in at least one time period in the target time period.
Optionally, the storage medium is further arranged to store program code for performing the steps of: resuming playing the video on one or more time periods in the played target time period; and/or resuming playing of the video over one or more of the target time periods that have not been played.
Optionally, the storage medium is further arranged to store program code for performing the steps of: receiving a control instruction at the current moment; and resuming playing the video in the time period which is played last before the current time.
Optionally, the storage medium is further arranged to store program code for performing the steps of: before receiving a video processing request, acquiring information of a target video; acquiring an object identifier of an object to be processed in a target video and a time period of the object to be processed appearing in the target video according to the information of the target video, wherein the object to be processed comprises the target object, and the time period of the object to be processed appearing in the target video comprises the target time period; and establishing a corresponding relation between the object identification of the object to be processed and the time period of the object to be processed appearing in the target video.
Optionally, the storage medium is further arranged to store program code for performing the steps of: performing image recognition on an object to be processed appearing in the target video, acquiring characteristic data of the object to be processed, and recording a time period of the object to be processed appearing in the target video; and acquiring an object identifier of the object to be processed corresponding to the characteristic data of the object to be processed from a preset database, wherein the preset database stores the corresponding relation between the characteristic data of the object to be processed and the object identifier of the object to be processed in advance.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media capable of storing program code.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also fall within the protection scope of the present invention.

Claims (24)

1. A video playback method, comprising:
determining an object identification list of objects appearing in a target video;
matching the object identification list with a pre-configured filtering object list to determine the object identification of the target object to be processed in the target video;
receiving a video processing request, wherein the video processing request is used for requesting processing of a target object in a played target video, the target object is an object appearing in the target video, a preset database stores a corresponding relation between an object identifier of an object to be processed and a time period of the object to be processed appearing in the target video, and the object to be processed comprises the target object;
acquiring an object identifier of the target object from the video processing request; and processing the video in a target time period in the process of playing the target video, and marking the target time period with a special color on a progress bar of the target video, wherein the target time period is a time period in which the target object represented by the object identifier appears in the target video, and the time period in which the object to be processed appears in the target video comprises the target time period,
wherein processing the video over the target time period comprises: only playing the video in the target time period, or fast forwarding or skipping the video in the target time period;
the video only played in the target time period is played from the starting time of the target time period to the ending time of the target time period; the video in the fast forward target time quantum is a video frame only playing one or more moments in the target time quantum, wherein the video frame is a key frame of the video in the target time quantum; and the video on the target time period is played by jumping to the next time after the end time of the target time period from the last time before the starting time of the target time period in the target video.
2. The method of claim 1, wherein the playing only the video over the target time segment, or fast forwarding or skipping the video over the target time segment comprises:
only playing the video in the target time period according to the time identifier or the video returned by the server, or fast forwarding or skipping the video in the target time period, wherein the time identifier is used for indicating the target time period, and the video returned by the server comprises one of the following: a video over the target time period in the target video, a video other than the video over the target time period in the target video.
3. The method of claim 2, wherein playing only the video in the target time segment according to the time identification returned by the server, or fast forwarding or skipping the video in the target time segment comprises:
sending the object identification to a server;
receiving the time identifier returned by the server;
and only playing the video in the target time period indicated by the time identifier in the process of playing the target video, or fast forwarding or skipping the video in the target time period indicated by the time identifier.
4. The method of claim 3, further comprising, after receiving the time identifier returned by the server:
and marking the target time period of the target video cached locally.
5. The method of claim 2, wherein playing only the video in the target time period according to the video returned by the server, or fast forwarding or skipping the video in the target time period comprises:
sending the object identification to a server; receiving videos on the target time period in the target videos returned by the server; playing videos in the target video over the target time period; or
Sending the object identification to a server; receiving videos, except for the videos in the target time period, in the target videos returned by the server; and playing videos in the target videos except the videos in the target time period.
6. The method of claim 1, wherein during the playing of the target video, the method further comprises:
receiving a control instruction, wherein the control instruction is used for instructing to resume playing of the video in at least one time period of the target time periods;
and responding to the control instruction, and resuming playing of the video on at least one time period in the target time period.
7. The method of claim 6, wherein resuming playing of the video over at least one of the target time periods comprises:
resuming playing the video on one or more of the target time periods that have already been played; and/or
And resuming playing the video on one or more time periods in the target time period which is not played.
8. The method of claim 7,
the receiving the control instruction comprises: receiving the control instruction at the current moment;
the resuming playing of the video over one or more of the target time periods that have already been played comprises: and resuming playing the video on the time period which is played last before the current time.
9. The method of claim 1, wherein prior to said receiving a video processing request, the method further comprises:
acquiring information of the target video;
acquiring an object identifier of an object to be processed in the target video and a time period for the object to be processed to appear in the target video according to the information of the target video;
and establishing a corresponding relation between the object identification of the object to be processed and the time period of the object to be processed appearing in the target video.
10. The method according to claim 9, wherein the obtaining of the object identifier of the object to be processed in the target video and the time period for which the object to be processed appears in the target video according to the information of the target video comprises:
performing image recognition on the object to be processed appearing in the target video, acquiring characteristic data of the object to be processed, and recording the time period of the object to be processed appearing in the target video;
and acquiring an object identifier of the object to be processed corresponding to the characteristic data of the object to be processed from a preset database, wherein the preset database stores the corresponding relationship between the characteristic data of the object to be processed and the object identifier of the object to be processed in advance.
11. The method according to claim 10, wherein obtaining the object identifier of the object to be processed corresponding to the feature data of the object to be processed from a preset database comprises:
matching the acquired feature data with the feature data stored in the preset database;
and if the feature data matched with the acquired feature data are stored in the preset database, determining the object identifier corresponding to the feature data in the preset database as the object identifier of the object to be processed according to the corresponding relation stored in the preset database.
12. The method of claim 1, wherein receiving a video processing request comprises:
receiving the video processing request before playing the target video; or,
receiving the video processing request in the process of playing the target video.
13. The method of claim 1, wherein the target objects appearing in the target video comprise one or more.
14. The method of claim 1, wherein the object identification of each of the target objects is different.
15. A video playback apparatus, comprising:
a determining unit configured to determine an object identification list of objects appearing in the target video;
the matching unit is used for matching the object identification list with a pre-configured filtering object list so as to determine the object identification of the target object to be processed in the target video;
a first receiving unit, configured to receive a video processing request, where the video processing request is used to request processing of a target object in a played target video, where the target object is an object appearing in the target video, and a preset database stores a correspondence between an object identifier of an object to be processed and a time period during which the object to be processed appears in the target video, where the object to be processed includes the target object;
a first obtaining unit, configured to obtain an object identifier of the target object from the video processing request; and
a playing unit, configured to process a video in a target time period in a process of playing the target video, and perform a special color marking of the target time period on the progress bar of the target video, where the target time period is a time period in which the target object represented by the object identifier appears in the target video, and the time period in which the object to be processed appears in the target video includes the target time period, and processing the video in the target time period includes: only playing the video in the target time period, or fast forwarding or skipping the video in the target time period, wherein playing only the video in the target time period means playing from the starting time of the target time period to the ending time of the target time period; fast forwarding the video in the target time period means playing only video frames at one or more moments in the target time period, wherein the video frames are key frames of the video in the target time period; and skipping the video in the target time period means jumping, in the target video, from the last moment before the starting time of the target time period to the next moment after the ending time of the target time period.
16. The apparatus according to claim 15, wherein the playing unit is specifically configured to, during playing the target video, play only the video in the target time period according to a time identifier or a video returned by a server, or fast forward or skip the video in the target time period, where the time identifier is used to indicate the target time period, and the video returned by the server includes one of: a video over the target time period in the target video, a video other than the video over the target time period in the target video.
17. The apparatus of claim 16, wherein the playback unit comprises:
the first sending module is used for sending the object identifier to a server;
the first receiving module is used for receiving the time identifier returned by the server;
and the first playing module is used for only playing the video in the target time period indicated by the time identifier in the process of playing the target video, or fast forwarding or skipping the video in the target time period indicated by the time identifier.
18. The apparatus of claim 16, wherein the playback unit comprises:
the second sending module is used for sending the object identifier to a server; the second receiving module is used for receiving videos, except for the videos in the target time period, in the target videos returned by the server; the second playing module is used for playing videos except the videos in the target time period in the target videos; or
The third sending module is used for sending the object identifier to a server; a third receiving module, configured to receive a video in the target video returned by the server over the target time period; and the third playing module is used for playing the video in the target time period in the target video.
19. The apparatus of claim 15, further comprising:
a second receiving unit, configured to receive a control instruction in a process of playing the target video, where the control instruction is used to instruct to resume playing of a video in at least one of the target time periods;
and the resuming playing unit is used for responding to the control instruction and resuming playing the video on at least one time slot in the target time slot.
20. The apparatus of claim 19, wherein the resume play unit comprises:
the fourth playing module is used for resuming the playing of the video on one or more time periods in the played target time period; and/or
And the fifth playing module is used for resuming the playing of the video on one or more time periods in the target time period which is not played.
21. The apparatus of claim 20,
the second receiving unit includes: the fourth receiving module is used for receiving the control instruction at the current moment;
the fourth playing module comprises: and the resume playing sub-module is used for resuming playing the video on the time slot which is played last before the current time.
22. The apparatus of claim 15, further comprising:
a second obtaining unit, configured to obtain information of the target video before the video processing request is received;
a third obtaining unit, configured to obtain, according to the information of the target video, an object identifier of an object to be processed in the target video and a time period during which the object to be processed appears in the target video;
and the establishing unit is used for establishing a corresponding relation between the object identification of the object to be processed and the time period of the object to be processed appearing in the target video.
23. The apparatus of claim 22, wherein the third obtaining unit comprises:
the first acquisition submodule is used for carrying out image recognition on the object to be processed appearing in the target video, acquiring characteristic data of the object to be processed and recording the time period of the object to be processed appearing in the target video;
and the second obtaining submodule is used for obtaining the object identifier of the object to be processed corresponding to the characteristic data of the object to be processed from a preset database, wherein the preset database stores the corresponding relation between the characteristic data of the object to be processed and the object identifier of the object to be processed in advance.
24. A computer-readable storage medium, in which a computer program is stored which a processor is arranged to execute, wherein the computer program is arranged to perform the method of any of claims 1 to 14 when executed.
CN201811190835.2A 2017-01-05 2017-01-05 Video playing method and device Active CN109168037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811190835.2A CN109168037B (en) 2017-01-05 2017-01-05 Video playing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811190835.2A CN109168037B (en) 2017-01-05 2017-01-05 Video playing method and device
CN201710007572.6A CN106878767B (en) 2017-01-05 2017-01-05 Video broadcasting method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201710007572.6A Division CN106878767B (en) 2017-01-05 2017-01-05 Video broadcasting method and device

Publications (2)

Publication Number Publication Date
CN109168037A CN109168037A (en) 2019-01-08
CN109168037B true CN109168037B (en) 2021-08-27

Family

ID=59165578

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710007572.6A Active CN106878767B (en) 2017-01-05 2017-01-05 Video broadcasting method and device
CN201811190835.2A Active CN109168037B (en) 2017-01-05 2017-01-05 Video playing method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710007572.6A Active CN106878767B (en) 2017-01-05 2017-01-05 Video broadcasting method and device

Country Status (1)

Country Link
CN (2) CN106878767B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107454475A * 2017-07-28 2017-12-08 珠海市魅族科技有限公司 Video playback control method and device, computer apparatus and readable storage medium
CN107743248A * 2017-09-28 2018-02-27 北京奇艺世纪科技有限公司 Video fast-forward method and device
CN108271069A * 2017-12-11 2018-07-10 北京奇艺世纪科技有限公司 Segment filtering method and device for a video program
CN108124170A * 2017-12-12 2018-06-05 广州市动景计算机科技有限公司 Video playing method, device and terminal device
CN109963184B (en) * 2017-12-14 2022-04-29 阿里巴巴集团控股有限公司 Audio and video network playing method and device and electronic equipment
CN110121098B (en) * 2018-02-05 2021-08-17 腾讯科技(深圳)有限公司 Video playing method and device, storage medium and electronic device
CN108763375A * 2018-05-17 2018-11-06 上海七牛信息技术有限公司 Media file caching method, device and multimedia playing system
CN108882024B (en) * 2018-08-01 2021-08-20 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN111274418A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Multimedia resource playing method and device, electronic equipment and storage medium
CN110475148A (en) * 2019-08-13 2019-11-19 北京奇艺世纪科技有限公司 Video broadcasting method, device and electronic equipment
CN111629266B (en) * 2020-04-10 2022-02-11 北京奇艺世纪科技有限公司 Playing progress display method and device, electronic equipment and storage medium
CN114422867A (en) * 2021-12-16 2022-04-29 珠海格力电器股份有限公司 Video playing method, system, storage medium and electronic equipment
CN114745588A * 2022-04-08 2022-07-12 泰州市华仕达机械制造有限公司 Microcomputer operating platform applying feature detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796781A (en) * 2015-03-31 2015-07-22 小米科技有限责任公司 Video clip extraction method and device
CN105872717A (en) * 2015-10-26 2016-08-17 乐视移动智能信息技术(北京)有限公司 Video processing method and system, video player and cloud server
CN106385624A (en) * 2016-09-29 2017-02-08 乐视控股(北京)有限公司 Video playing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4550725B2 (en) * 2005-11-28 2010-09-22 株式会社東芝 Video viewing support system
CN103152642A (en) * 2011-12-06 2013-06-12 中兴通讯股份有限公司 Playing method and device of program
US8789120B2 (en) * 2012-03-21 2014-07-22 Sony Corporation Temporal video tagging and distribution
US8948568B2 (en) * 2012-07-31 2015-02-03 Google Inc. Customized video
CN103336955A (en) * 2013-07-09 2013-10-02 百度在线网络技术(北京)有限公司 Generation method and generation device of character playing locus in video, and client
CN103488764B (en) * 2013-09-26 2016-08-17 天脉聚源(北京)传媒科技有限公司 Individualized video content recommendation method and system


Also Published As

Publication number Publication date
CN109168037A (en) 2019-01-08
CN106878767A (en) 2017-06-20
CN106878767B (en) 2018-09-18

Similar Documents

Publication Publication Date Title
CN109168037B (en) Video playing method and device
CN113965811B (en) Play control method and device, storage medium and electronic device
CN110784752B (en) Video interaction method and device, computer equipment and storage medium
CN107613235B (en) Video recording method and device
US11363353B2 (en) Video highlight determination method and apparatus, storage medium, and electronic device
CN106803987B (en) Video data acquisition method, device and system
CN111615003B (en) Video playing control method, device, equipment and storage medium
CN108632676B (en) Image display method, image display device, storage medium and electronic device
WO2017015090A1 (en) Media production system with score-based display feature
CN112383790B (en) Live broadcast screen recording method and device, electronic equipment and storage medium
CN109829064B (en) Media resource sharing and playing method and device, storage medium and electronic device
CN110708589A (en) Information sharing method and device, storage medium and electronic device
US20190199763A1 (en) Systems and methods for previewing content
CN109905749B (en) Video playing method and device, storage medium and electronic device
CN107547922B (en) Information processing method, device, system and computer readable storage medium
CN115086752B (en) Recording method, system and storage medium for browser page content
CN104901939B (en) Method for broadcasting multimedia file and terminal and server
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
CN108616768B (en) Synchronous playing method and device of multimedia resources, storage position and electronic device
KR102492022B1 (en) Method, Apparatus and System of managing contents in Multi-channel Network
CN103313124A (en) Local recording service implementation method and local recording service implementation device
CN112052376A (en) Resource recommendation method, device, server, equipment and medium
KR102492014B1 (en) Method, Apparatus and System of managing contents in Multi-channel Network
CN110691256B (en) Video associated information processing method and device, server and storage medium
CN116156224A (en) Processing method, device, equipment and medium based on video playing record

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221214

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518100

Patentee after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.