CN111010619A - Method, apparatus, computer device and storage medium for processing short video data - Google Patents

Method, apparatus, computer device and storage medium for processing short video data

Info

Publication number
CN111010619A
Authority
CN
China
Prior art keywords
video
tag
current
preset
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911235283.7A
Other languages
Chinese (zh)
Inventor
王玉东
顾伟
付元宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201911235283.7A priority Critical patent/CN111010619A/en
Publication of CN111010619A publication Critical patent/CN111010619A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords

Abstract

The present application relates to a method, an apparatus, a computer device, and a storage medium for processing short video data. The method comprises: obtaining the current playing point location of the current video; when the playing point location of the current video reaches a preset point location, obtaining the video tag of the next video; determining whether the video tag of the next video matches a first preset tag; and, when it matches, obtaining the play mode corresponding to the first preset tag and using that play mode as the target play mode of the next video. The video tag is derived from the marks applied by the users who watched the video, that is, the video content is determined from other users' viewing records, so the tag reflects the actual content of the video. A suitable play mode can therefore be selected according to the actual content, which improves the viewing experience of the user.

Description

Method, apparatus, computer device and storage medium for processing short video data
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing short video data, a computer device, and a storage medium.
Background
When watching short videos, a user encounters both content he or she likes and content he or she dislikes, and different users react differently to the same content. Each user would like to apply a play mode that matches the content: disliked content can be skipped or played at a higher speed, while favored content can be played at normal speed or even more slowly, and so on. Adjusting the play mode according to the content requires knowing the content in advance. At present, a user learns the content of an unplayed short video from its title, from bullet-screen comments after playback starts, or simply by watching it. Because titles are mostly written by the uploader, it is difficult for the user to learn the real content of a video from its title, so the user cannot predict in advance whether upcoming content is something to avoid or something worth watching, which degrades the viewing experience.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a method, an apparatus, a computer device, and a storage medium for processing short video data.
In a first aspect, the present application provides a method for processing short video data, comprising:
acquiring a current playing point position of a current video;
when the playing point position of the current video is located at a preset point position, acquiring a video label of the next video;
judging whether a video label of the next video is matched with a first preset label or not;
and when the video tag of the next video is matched with the first preset tag, acquiring the play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as the target play mode of the next video.
In a second aspect, the present application provides an apparatus for processing short video data, comprising:
the playing point location obtaining module is used for obtaining the current playing point location of the current video;
the next video tag acquisition module is used for acquiring a video tag of a next video when the playing point location of the current video is located at the preset point location;
the judging module is used for judging whether the video label of the next video is matched with a first preset label or not;
and the next video playing mode determining module is used for acquiring a playing mode corresponding to the first preset label when the video label of the next video is matched with the first preset label, and taking the playing mode corresponding to the first preset label as a target playing mode of the next video.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a current playing point position of a current video;
when the playing point position of the current video is located at a preset point position, acquiring a video label of the next video;
judging whether a video label of the next video is matched with a first preset label or not;
and when the video tag of the next video is matched with the first preset tag, acquiring the play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as the target play mode of the next video.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a current playing point position of a current video;
when the playing point position of the current video is located at a preset point position, acquiring a video label of the next video;
judging whether a video label of the next video is matched with a first preset label or not;
and when the video tag of the next video is matched with the first preset tag, acquiring the play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as the target play mode of the next video.
The method, the apparatus, the computer device, and the storage medium for processing short video data obtain the current playing point location of the current video; when the playing point location of the current video reaches a preset point location, obtain the video tag of the next video; determine whether the video tag of the next video matches a first preset tag; and, when it matches, obtain the play mode corresponding to the first preset tag and use it as the target play mode of the next video. The video tag is derived from the marks applied by the users who watched the video, that is, the video content is determined from other users' viewing records, so the tag reflects the actual content, a suitable play mode can be selected according to that content, and the viewing experience of the user is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it will be apparent that other drawings can be derived from these drawings by those skilled in the art without inventive effort.
FIG. 1 is a diagram of an application environment of a method of processing short video data according to an embodiment;
FIG. 2 is a flow diagram that illustrates a method for processing short video data in one embodiment;
FIG. 3 is a block diagram of an apparatus for processing short video data according to one embodiment;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a diagram of an application environment of a method of processing short video data according to an embodiment. Referring to fig. 1, the method is applied to a system for processing short video data. The system includes a terminal 110 and a server 120, connected through a network. While playing the current video delivered by the server 120, the terminal 110 obtains the current playing point location of the current video; when the playing point location of the current video reaches a preset point location, it obtains the video tag of the next video, the video tag being obtained by aggregating users' marking information; it determines whether the video tag of the next video matches a first preset tag; and, when it matches, it obtains the play mode corresponding to the first preset tag and uses it as the target play mode of the next video.
The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a method of processing short video data is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 in fig. 1. Referring to fig. 2, the method for processing short video data specifically includes the following steps:
Step S201: the current playing point location of the current video is obtained.
Step S202: when the playing point location of the current video reaches the preset point location, the video tag of the next video is obtained.
In this embodiment, the video tag is obtained by aggregating the marking information submitted by users.
Step S203: it is determined whether the video tag of the next video matches the first preset tag.
Step S204: when the video tag of the next video matches the first preset tag, the play mode corresponding to the first preset tag is obtained and used as the target play mode of the next video.
Specifically, the current video is the short video currently being played. A short video is a video with a short duration, for example 5 seconds or 15 seconds. The current playing point location is the playback progress of the current video. The preset point location is configured in advance and may be the same or different for different videos; for example, it may be set 3 seconds before the end of the current video. The next video is the video that follows the current video after the videos are ordered according to a preset ordering rule. The video tag of the next video may be a user-defined tag or a preset video tag: a user-defined tag is a tag a user fills in through a custom option, while a preset video tag is offered as a preset option for the user to select. The first preset tag is a tag configured in advance and may include one or more tags. The video tags may include, for example, advertisement, soft pornography, hardcore content, gore, horror, abuse, and vulgarity, and the first preset tag is one or more of these video tags. A tag defined by a user can later be selected by that user when watching other videos: for example, if user A defines the tag MMM while watching video X, user A can simply check the option MMM when watching video Y or video Z.
In one embodiment, the preset video tags include statistical tags and fixed tags.
Specifically, a fixed tag is a pre-configured tag offered for users to select; a statistical tag is obtained by collecting and analyzing the custom tags that users have applied to each video; and a custom tag is a tag a user defines according to personal preference. When counting custom tags, the frequency of each custom tag can be counted directly, or the custom tags can first be classified into types and the frequency counted per type; after counting, the tags that occur most frequently are selected as unified tags.
In one embodiment, the statistics window may run from the first mark up to the present, or may cover only a recent period, such as the last half year or the last month. Counting only recent tags better reflects users' current habits and therefore improves the user experience.
In one embodiment, the statistical tags are obtained by the server by analyzing a historical tag set formed by the historical tags that users have applied to each video, where the historical tag set includes custom tags and fixed tags. The custom tags are classified and counted to obtain a plurality of candidate statistical tags and their corresponding counts, and the candidate statistical tags whose count exceeds a preset number become the statistical tags.
Specifically, the historical tags applied by users to each video are classified to obtain the type of each historical tag, and the historical tags are counted by type to obtain the candidate statistical tags and their occurrence counts. That is, the historical tags are first grouped by type, with similar tags merged into one candidate statistical tag. The count of each candidate statistical tag is then compared with a preset number, and the candidates whose count exceeds the preset number are taken as statistical tags. The preset number can be customized, for example 5, 10, or 100.
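As an illustration only, and not part of the patent disclosure, the following Python sketch shows one way the server-side statistics described above could be computed; the function name, the normalization rule used to group similar custom tags, and the threshold value are assumptions.

    from collections import Counter

    def compute_statistical_tags(historical_tags, preset_count=10, normalize=None):
        """Group custom tags into types, count them, and keep frequent ones as statistical tags.

        historical_tags: iterable of tag strings users have applied to videos (custom + fixed).
        preset_count:    a candidate whose count exceeds this number becomes a statistical tag.
        normalize:       optional function mapping similar tags to one canonical type (assumed rule).
        """
        normalize = normalize or (lambda tag: tag.strip().lower())
        counts = Counter(normalize(tag) for tag in historical_tags)
        return [tag for tag, n in counts.items() if n > preset_count]

    # Example: "advertisement" occurs 12 times (> 10) and becomes a statistical tag; "horror" does not.
    history = ["Advertisement", "advertisement ", "advertisement", "horror"] * 4 + ["horror"]
    print(compute_statistical_tags(history, preset_count=10))  # ['advertisement']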
The play mode is the manner in which a video is played, including but not limited to skipping, double-speed play, and normal play. Skipping means the video data is not played at all; double-speed play includes slow play (slower than normal speed) and fast play (faster than normal speed). Different video tags can correspond to different play modes, and each user can configure the play mode corresponding to each video tag. For example, if the first preset tag is set to advertisement, gore, or horror, the corresponding play mode is skipping or double-speed play.
The method for processing short video data therefore obtains the current playing point location of the current video; when the playing point location of the current video reaches a preset point location, obtains the video tag of the next video, the video tag being obtained by aggregating users' marking information; determines whether the video tag of the next video matches a first preset tag; and, when it matches, obtains the play mode corresponding to the first preset tag and uses it as the target play mode of the next video. In other words, while the current video is playing, the tag that users have applied to the next video is fetched and checked; if it is one of the preset tags, the next video is played in the play mode pre-configured for that tag. Because the tag is produced by users who marked the video, it better matches users' expectations, the next video is played according to the play mode configured for that tag, and the viewing experience of the user is improved.
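Purely as a non-authoritative sketch of the client-side flow of steps S201 to S204, the Python below shows how the preset-point check, tag matching, and play-mode selection could fit together; the tag-to-mode mapping, the constants, and the function signature are illustrative assumptions, not the patent's interface.

    SKIP, DOUBLE_SPEED, NORMAL = "skip", "double_speed", "normal"

    # Assumed user configuration: first preset tags and the play mode each one maps to.
    FIRST_PRESET_TAGS = {"advertisement": SKIP, "horror": DOUBLE_SPEED}

    def target_play_mode(current_position, preset_point, next_video_tags):
        """Return the target play mode of the next video, or None while the preset point is not reached.

        current_position: playing point location of the current video, in seconds.
        preset_point:     preset point location, e.g. 3 seconds before the end of the current video.
        next_video_tags:  tags that other users have applied to the next video.
        """
        if current_position < preset_point:
            return None                        # S201/S202: preset point not reached yet
        for tag in next_video_tags:            # S203: match against the first preset tags
            if tag in FIRST_PRESET_TAGS:
                return FIRST_PRESET_TAGS[tag]  # S204: play mode configured for that tag
        return NORMAL

    # Example: the next video has been marked "advertisement" by other users, so it will be skipped.
    print(target_play_mode(12.0, 12.0, ["advertisement"]))  # skip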
In one embodiment, the method of processing short video data further comprises: obtaining the historical tags of the current user identifier; obtaining the preset video tags; determining the current tags of the current video according to the historical tags of the current user identifier and the preset video tags; and, before switching away from the current video, displaying the options corresponding to the current tags together with a custom option.
Specifically, the current user identifier is the user identifier of the user being processed, and there may be one or more of them. The preset video tags are tag data configured in advance for evaluating videos. The current tags are the tags of the video currently being watched by the user corresponding to the current user identifier. The historical tags are the tags that user has applied to videos watched before the current time; the period over which they are collected can be customized, for example all tags since the user started using the product, or only the tags from the last week, month, or half year. The current tags are obtained by merging the preset tags with the historical tags of the current user, where merging can mean taking the union of the two tag sets, or classifying and combining them so that similar tags are merged. The options corresponding to the current tags and a custom option are then displayed so that the user can tag the current video: the options corresponding to the current tags let the user mark the video directly, so a video that fits an existing tag can be marked quickly, while the custom option lets the user fill in a new tag when encountering a new type of content or when designing a tag according to personal preference, which increases interaction with the user and improves the viewing experience. For example, if the preset tags include A, B, C, and D and the user-defined tags include E and F, options for A, B, C, D, E, and F are displayed together with a custom option; if the user considers the current video to be of type A, the option A can be selected directly to mark it, and if the video is of type G, the user marks it by entering G through the custom option. The current video can be marked at any time during playback or after it finishes, for example near the end of the video or during the pause between the end of the current video and the start of the next one; the display time of the marking options is customized as required.
In one embodiment, the tag options are hidden and are shown only after the user performs a preset operation for showing them. Hiding them meets the needs of different users and different scenarios and can improve the viewing experience.
In one embodiment, determining a current tag of a current video according to a history tag of a current user identifier and a preset video tag includes: and calculating a union set of the historical labels and the preset video labels to obtain the current label of the current video.
Specifically, the historical tags and the preset video tags are de-duplicated by taking their union, so that all distinct tags are obtained and used as the tags of the current video. Taking the union removes duplicates and prevents the same tag from appearing more than once, which keeps the video tags concise.
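A minimal sketch of this union step, assuming tags are plain strings (the helper name and the order-preserving de-duplication are not from the patent):

    def current_tags(historical_tags, preset_tags):
        """Union of the preset video tags and the user's historical tags, de-duplicated
        while keeping the order in which tags first appear."""
        seen, merged = set(), []
        for tag in list(preset_tags) + list(historical_tags):
            if tag not in seen:
                seen.add(tag)
                merged.append(tag)
        return merged

    # Preset tags A-D plus the user's custom tags E and F, each shown once.
    print(current_tags(["E", "F", "A"], ["A", "B", "C", "D"]))  # ['A', 'B', 'C', 'D', 'E', 'F']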
In one embodiment, after presenting the option and the custom option corresponding to the current tag, the method further includes: receiving a target option selected by a user from options corresponding to a current tag, and storing a corresponding relation between the current tag corresponding to the target option and a current video; or receiving a custom tag filled in from a custom option by a user, and storing the corresponding relation between the custom tag and the current video when the custom tag is not matched with the current tag.
Specifically, the marking information a user submits for the current video may contain several tags, some of which are current tags and some custom tags, or all of one kind. After the options corresponding to the current tags and the custom option are displayed, the user selects one or more of the options corresponding to the current tags as target options, and the correspondence between the current tags corresponding to the target options and the current video is stored as the marking information of the current video; and/or a custom tag filled in through the custom option is received and compared with the current tags: if it matches a current tag, the option corresponding to that current tag is used as the target option, and if it does not match, the correspondence between the custom tag and the current video is stored as the marking information.
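The sketch below, again only illustrative (the storage layer and the case-insensitive matching rule are assumptions), shows how a client might record a user's marks, folding a custom tag into an existing current tag when the two coincide:

    def record_mark(video_id, selected_tags, custom_tag, current_tags, store):
        """Store the correspondence between the chosen tags and the current video.

        selected_tags: target options the user picked from the options for the current tags.
        custom_tag:    text typed into the custom option, or None.
        current_tags:  tags currently offered for this video.
        store:         dict mapping video_id -> set of tags (assumed storage).
        """
        marks = store.setdefault(video_id, set())
        marks.update(selected_tags)
        if custom_tag:
            # A custom tag that matches a current tag is treated as that option;
            # otherwise the new custom tag itself is stored with the video.
            match = next((t for t in current_tags if t.lower() == custom_tag.lower()), None)
            marks.add(match if match else custom_tag)
        return marks

    store = {}
    print(record_mark("video_42", ["advertisement"], "Horror", ["advertisement", "horror"], store))
    # a set containing 'advertisement' and 'horror' (the typed "Horror" matched the current tag)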
In one embodiment, the playing mode includes skipping and double-speed playing, acquiring a playing mode corresponding to the first preset tag, and after the playing mode corresponding to the first preset tag is used as a target playing mode of the next video, the method further includes: when the target playing mode is skip, acquiring a video tag of the next video, and executing to judge whether the video tag of the next video is matched with the first video tag; when the target playing mode is double-speed playing, when entering the next video, generating a playing instruction for playing the next video at double speed, and executing the playing instruction.
Specifically, skipping means the next video is not played, and double-speed play means playing the video at a preset rate, such as 2×, 1.5×, or 0.8×. If the target play mode of the next video is skipping, the video tag of the video after it is obtained and the determination of whether that tag matches the first preset tag is performed again: if it matches, the play mode corresponding to the first preset tag is obtained and used as the target play mode of that video; otherwise, that video is simply taken as the next one to play.
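As a rough illustration of this skip-and-recheck loop (the playlist structure, the player rate value, and the function name are assumptions rather than the patent's interface):

    def resolve_next_video(playlist, start_index, tag_to_mode):
        """Walk forward through the playlist, skipping videos whose tags map to "skip".

        playlist:    list of dicts like {"id": ..., "tags": [...]} (assumed structure).
        start_index: index of the first candidate next video.
        tag_to_mode: mapping from first preset tags to play modes ("skip" or "double_speed").
        Returns (index, play_rate) of the video that should actually be played, or (None, None).
        """
        i = start_index
        while i < len(playlist):
            modes = {tag_to_mode[t] for t in playlist[i]["tags"] if t in tag_to_mode}
            if "skip" in modes:
                i += 1          # skipped: check the tag of the following video and repeat
                continue
            return i, (2.0 if "double_speed" in modes else 1.0)  # assumed double-speed rate
        return None, None

    playlist = [{"id": "a", "tags": ["advertisement"]},
                {"id": "b", "tags": ["horror"]},
                {"id": "c", "tags": []}]
    print(resolve_next_video(playlist, 0, {"advertisement": "skip", "horror": "double_speed"}))
    # (1, 2.0): video "a" is skipped and video "b" is played at double speed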
In one embodiment, after acquiring the video tag of the next video, the method further includes: the video tag of the next video is shown.
Specifically, the displayed video tag prompts the user with the marks other users have applied to the video; for example, if 100 users have marked the next video as an advertisement, the user can decide the play mode of the next video manually according to its tag.
In one embodiment, the user may manually modify the play mode of the video at any time.
In a specific embodiment, the method for processing short video data includes:
step S301: and (6) reporting the label.
Viewing-behavior data is collected through client logs, and samples of short videos that users may dislike are gathered (for example, videos that receive many "dislike" or "thumbs-down" clicks). By analyzing these samples offline, the reasons why short videos may be disliked are summarized, such as "melodrama", "advertisement", "cringe", "soft pornography", "mind-bending", "dull stretch", and the like.
The client provides online video tags, the corresponding progress prompts, and an active marking function: when the client detects that the user drags the progress bar to skip over a segment, it asks the user whether to mark that segment with one of the tags summarized above, as shown in the sketch below.
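An illustrative, non-authoritative sketch of this client-side hook; the callback signature and the prompt text are assumptions:

    def on_progress_bar_drag(old_position, new_position, summarized_tags, prompt):
        """When the user drags the progress bar past a segment, ask whether to mark that segment.

        old_position / new_position: playback positions in seconds before and after the drag.
        summarized_tags: tags produced by the offline statistics, e.g. "advertisement".
        prompt: callback that shows the tag options and returns the chosen tag, or None.
        """
        if new_position <= old_position:
            return None                     # not a forward skip, nothing to ask
        skipped_segment = (old_position, new_position)
        chosen = prompt("Mark the skipped segment as:", summarized_tags)
        return (skipped_segment, chosen) if chosen else None

    # Example with a stand-in prompt that always picks the first offered tag.
    print(on_progress_bar_drag(10.0, 25.0, ["advertisement", "cringe"], lambda msg, opts: opts[0]))
    # ((10.0, 25.0), 'advertisement')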
Step S302: segment-mark reminder and display.
Reminder for a short video about to be played: when a user watches a series of short videos by swiping, and while short video a is being watched the video b that follows it has been marked with a certain tag by many users, the user is reminded in advance, while still watching video a, by a tips text box that does not interfere with viewing, for example "the next video has been marked as product placement by xx users".
Reminder for the short video being played: when the user is playing a short video that has been marked by other users, a brief toast box can prompt the user, for example "this video has been marked as product placement by xx users".
Step S303: skipping of the marked segment.
Active skipping: when the user is about to watch a short video that other users have marked, a button for skipping the next video pops up near the tips text box at the same time as the tag prompt for the next video; after the user clicks to confirm, once the current short video slides out, the marked video is skipped and the page of the following short video plays directly.
Automatic skipping: the user can turn on an "automatically skip marked segments" switch in the client settings in advance and select the tags to skip automatically, for example the "soft pornography" and "product placement" tags. Afterwards, whenever the user encounters a video marked with either of these tags by many users, it is skipped automatically without manual operation, and after skipping the user is notified, for example "a segment marked as soft pornography by xx users has been skipped"; see the sketch after this paragraph.
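A small sketch of the automatic-skip behaviour under assumed setting names (the switch name, the settings structure, and the notification text are illustrative only):

    def auto_skip_decision(settings, video_tags, mark_counts, min_marks=1):
        """Decide whether the upcoming video is skipped automatically and what to tell the user.

        settings:    assumed dict, e.g. {"auto_skip": True, "skip_tags": {"soft pornography"}}.
        video_tags:  tags other users have applied to the upcoming video.
        mark_counts: assumed dict mapping a tag to the number of users who applied it.
        """
        if not settings.get("auto_skip"):
            return False, None
        for tag in video_tags:
            if tag in settings.get("skip_tags", set()) and mark_counts.get(tag, 0) >= min_marks:
                return True, f"Skipped a segment marked as {tag} by {mark_counts[tag]} users"
        return False, None

    settings = {"auto_skip": True, "skip_tags": {"soft pornography", "product placement"}}
    print(auto_skip_decision(settings, ["product placement"], {"product placement": 37}))
    # (True, 'Skipped a segment marked as product placement by 37 users')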
Through user marking, more accurate and referenceable dimension data are obtained. Marking data contributed by a large number of users provide more complete and sufficient material for algorithms and machine learning. The user can also configure which video data to skip automatically, which improves the user experience.
Fig. 2 is a flow diagram illustrating a method for processing short video data according to one embodiment. It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, these steps are not bound to a strict order and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily completed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, there is provided an apparatus 200 for processing short video data, comprising:
a playing point location obtaining module 201, configured to obtain a current playing point location of a current video.
The next video tag obtaining module 202 is configured to obtain a video tag of a next video when the playing point location of the current video is located at the preset point location, where the video tag is obtained by counting the tag information of the user.
The determining module 203 is configured to determine whether the video tag of the next video matches the first preset tag.
The next video playing mode determining module 204 is configured to, when the video tag of the next video matches the first preset tag, acquire a playing mode corresponding to the first preset tag, and use the playing mode corresponding to the first preset tag as a target playing mode of the next video.
In an embodiment, the apparatus for processing short video data further includes:
the historical label acquisition module is used for acquiring a historical label of the current user identifier;
the preset video tag acquisition module is used for acquiring a preset video tag;
the label determining module is used for determining a current label of the current video according to a historical label of the current user identifier and a preset video label;
and the option display module is used for displaying the options and the user-defined options corresponding to the current label before switching the current video.
In one embodiment, the tag determination module is specifically configured to calculate a union of the historical tags and preset video tags to obtain a current tag of the current video.
In an embodiment, the apparatus for processing short video data further includes:
and the marking module is used for receiving a target option selected by a user from options corresponding to the current tag and storing the corresponding relation between the current tag corresponding to the target option and the current video.
The marking module is also used for receiving the user-defined label filled in from the user-defined option, and when the user-defined label is not matched with the current label, the corresponding relation between the user-defined label and the current video is stored.
In one embodiment, the preset video tags handled by the preset video tag obtaining module include statistical tags and fixed tags. The statistical tags are obtained by the server by analyzing a historical tag set formed by the historical tags that users have applied to each video, where the historical tag set includes custom tags and fixed tags; the custom tags are classified and counted to obtain a plurality of candidate statistical tags and their corresponding counts, and the candidate statistical tags whose count exceeds a preset number become the statistical tags.
In an embodiment, the apparatus 200 for processing short video further includes:
the skipping module is used for acquiring the video label of the next video when the target playing mode is skipping, and judging whether the video label of the next video is matched with the first video label or not;
and the speed doubling playing module is used for generating a playing instruction for playing the next video at the speed doubling speed when the target playing mode is speed doubling playing and entering the next video, and executing the playing instruction.
In an embodiment, the apparatus 200 for processing short video further includes:
and the display module is used for displaying the video label of the next video.
FIG. 4 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 in fig. 1. As shown in fig. 4, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected via a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement a method of processing short video data. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a method of processing short video data. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the apparatus for processing short video data provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 4. The memory of the computer device may store various program modules constituting the apparatus for processing short video data, such as the playing point location obtaining module 201, the next video tag obtaining module 202, the judging module 203 and the next video playing mode determining module 204 shown in fig. 3. The computer program constituted by the respective program modules causes the processor to execute the steps in the method of processing short video data of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 4 may execute the step of acquiring the current playing point of the current video by the playing point location acquiring module 201 in the apparatus for processing short video data shown in fig. 3. The computer device may execute, by using the next video tag obtaining module 202, obtaining a video tag of the next video when the playing point location of the current video is located at the preset point location, where the video tag is obtained by counting the tag information of the user. The computer device may perform the determination of whether the video tag of the next video matches the first preset tag through the determination module 203. The computer device may obtain, through the next video play mode determining module 204, when the video tag of the next video matches the first preset tag, a play mode corresponding to the first preset tag, and use the play mode corresponding to the first preset tag as a target play mode of the next video.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring a current playing point position of a current video; when the playing point location of the current video is located at a preset point location, acquiring a video label of the next video, wherein the video label is obtained by counting the marking information of a user; judging whether a video label of the next video is matched with a first preset label or not; and when the video tag of the next video is matched with the first preset tag, acquiring the play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as the target play mode of the next video.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a history label of a current user identifier; acquiring a preset video label; determining a current label of a current video according to a historical label of a current user identifier and a preset video label; and before switching the current video, displaying options and custom options corresponding to the current label.
In one embodiment, determining a current tag of a current video according to a history tag of a current user identifier and a preset video tag includes: and calculating a union set of the historical labels and the preset video labels to obtain the current label of the current video.
In one embodiment, after presenting the option corresponding to the current tag and the custom option, the processor executes the computer program to further implement the following steps: receiving a target option selected by a user from options corresponding to a current tag, and storing a corresponding relation between the current tag corresponding to the target option and a current video; and/or receiving a custom tag filled in from the custom option by a user, and storing the corresponding relation between the custom tag and the current video when the custom tag is not matched with the current tag.
In one embodiment, the preset video tags include statistical tags and fixed tags, the statistical tags are tags obtained by analyzing and counting a history tag set formed by history tags obtained by labeling each video by a server according to each user, wherein the history tag set includes custom tags and fixed tags, each custom tag of the custom tag set is classified and counted to obtain a plurality of candidate statistical tags and corresponding quantities, and the candidate statistical tags with the quantity of the candidate statistical tags larger than the preset quantity are the statistical tags.
In one embodiment, the playing mode includes skipping and double-speed playing, the playing mode corresponding to the first preset tag is obtained, and after the playing mode corresponding to the first preset tag is taken as the target playing mode of the next video, the processor executes the computer program and further implements the following steps: when the target playing mode is skip, acquiring a video tag of the next video, and executing to judge whether the video tag of the next video is matched with the first video tag; when the target playing mode is double-speed playing, when entering the next video, generating a playing instruction for playing the next video at double speed, and executing the playing instruction.
In one embodiment, after obtaining the video tag of the next video, the processor when executing the computer program further performs the following steps: the video tag of the next video is shown.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a current playing point position of a current video; when the playing point location of the current video is located at a preset point location, acquiring a video label of the next video, wherein the video label is obtained by counting the marking information of a user; judging whether a video label of the next video is matched with a first preset label or not; and when the video tag of the next video is matched with the first preset tag, acquiring the play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as the target play mode of the next video.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a history label of a current user identifier; acquiring a preset video label; determining a current label of a current video according to a historical label of a current user identifier and a preset video label; and before switching the current video, displaying options and custom options corresponding to the current label.
In one embodiment, determining a current tag of a current video according to a history tag of a current user identifier and a preset video tag includes: and calculating a union set of the historical labels and the preset video labels to obtain the current label of the current video.
In one embodiment, after presenting the option corresponding to the current tag and the custom option, the computer program when executed by the processor further performs the following steps: receiving a target option selected by a user from options corresponding to a current tag, and storing a corresponding relation between the current tag corresponding to the target option and a current video; and/or receiving a custom tag filled in from the custom option by a user, and storing the corresponding relation between the custom tag and the current video when the custom tag is not matched with the current tag.
In one embodiment, the preset video tags include statistical tags and fixed tags, the statistical tags are tags obtained by analyzing and counting a history tag set formed by history tags obtained by labeling each video by a server according to each user, wherein the history tag set includes custom tags and fixed tags, each custom tag of the custom tag set is classified and counted to obtain a plurality of candidate statistical tags and corresponding quantities, and the candidate statistical tags with the quantity of the candidate statistical tags larger than the preset quantity are the statistical tags.
In one embodiment, the playing mode includes skipping and double-speed playing, the playing mode corresponding to the first preset tag is obtained, and after the playing mode corresponding to the first preset tag is taken as the target playing mode of the next video, the computer program further implements the following steps when executed by the processor: when the target playing mode is skip, acquiring a video tag of the next video, and executing to judge whether the video tag of the next video is matched with the first video tag; when the target playing mode is double-speed playing, when entering the next video, generating a playing instruction for playing the next video at double speed, and executing the playing instruction.
In one embodiment, after obtaining the video tag of the next video, the computer program when executed by the processor further performs the steps of: the video tag of the next video is shown.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing short video data, the method comprising:
acquiring a current playing point position of a current video;
when the playing point location of the current video is located at a preset point location, acquiring a video tag of a next video, wherein the video tag is obtained by counting the marking information of a user;
judging whether the video label of the next video is matched with a first preset label or not;
and when the video tag of the next video is matched with the first preset tag, acquiring a play mode corresponding to the first preset tag, and taking the play mode corresponding to the first preset tag as a target play mode of the next video.
2. The method of claim 1, further comprising:
acquiring a history label of a current user identifier;
acquiring a preset video label;
determining a current label of the current video according to the historical label of the current user identifier and the preset video label;
and displaying options and custom options corresponding to the current label before switching the current video.
3. The method of claim 2, wherein determining the current tag of the current video according to the history tag of the current user identifier and the preset video tag comprises:
and calculating a union set of the historical labels and the preset video labels to obtain the current label of the current video.
4. The method of claim 2, wherein after presenting the option corresponding to the current tag and the custom option, further comprising:
receiving a target option selected by a user from options corresponding to the current tag, and storing a corresponding relation between the current tag corresponding to the target option and the current video; and/or
And receiving a custom tag filled in from the custom option by a user, and storing the custom tag and the corresponding relation between the custom tag and the current video when the custom tag is not matched with the current tag.
5. The method according to claim 2, wherein the preset video tags include statistical tags and fixed tags, the statistical tags are tags obtained by analyzing and counting a history tag set composed of history tags obtained by labeling each video by a server according to each user, wherein the history tag set includes custom tags and the fixed tags, each custom tag of the custom tag set is classified and counted to obtain a plurality of candidate statistical tags and corresponding quantities, and the candidate statistical tags whose quantities are greater than a preset quantity are the statistical tags.
6. The method according to claim 1, wherein the playback mode includes skip and double speed playback, and after acquiring the playback mode corresponding to the first preset tag and using the playback mode corresponding to the first preset tag as the target playback mode of the next video, the method further comprises:
when the target playing mode is skip, acquiring a video tag of a next video, and executing to judge whether the video tag of the next video is matched with the first video tag;
and when the target playing mode is double-speed playing, generating a playing instruction for playing the next video at double speed when the next video is entered, and executing the playing instruction.
7. The method according to any one of claims 1 to 6, wherein after the obtaining the video tag of the next video, further comprising:
and displaying the video label of the next video.
8. An apparatus for processing short video data, the apparatus comprising:
the playing point location obtaining module is used for obtaining the current playing point location of the current video;
the next video tag obtaining module is used for obtaining a video tag of a next video when the playing point location of the current video is located at a preset point location;
the judging module is used for judging whether the video label of the next video is matched with a first preset label or not;
and the next video playing mode determining module is used for acquiring a playing mode corresponding to the first preset label when the video label of the next video is matched with the first preset label, and taking the playing mode corresponding to the first preset label as the target playing mode of the next video.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911235283.7A 2019-12-05 2019-12-05 Method, apparatus, computer device and storage medium for processing short video data Pending CN111010619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911235283.7A CN111010619A (en) 2019-12-05 2019-12-05 Method, apparatus, computer device and storage medium for processing short video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911235283.7A CN111010619A (en) 2019-12-05 2019-12-05 Method, apparatus, computer device and storage medium for processing short video data

Publications (1)

Publication Number Publication Date
CN111010619A true CN111010619A (en) 2020-04-14

Family

ID=70115673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911235283.7A Pending CN111010619A (en) 2019-12-05 2019-12-05 Method, apparatus, computer device and storage medium for processing short video data

Country Status (1)

Country Link
CN (1) CN111010619A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8589402B1 (en) * 2008-08-21 2013-11-19 Adobe Systems Incorporated Generation of smart tags to locate elements of content
US20150163551A1 (en) * 2011-05-20 2015-06-11 Google Inc. Interface for watching a stream of videos
CN107995523A (en) * 2017-12-21 2018-05-04 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium
CN110475154A (en) * 2018-05-10 2019-11-19 腾讯科技(深圳)有限公司 Network television video playing method and device, Web TV and computer media
CN110209879A (en) * 2018-08-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of video broadcasting method, device, equipment and storage medium
CN109587578A (en) * 2018-12-21 2019-04-05 麒麟合盛网络技术股份有限公司 The processing method and processing device of video clip
CN110381364A (en) * 2019-06-13 2019-10-25 北京奇艺世纪科技有限公司 Video data handling procedure, device, computer equipment and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669627A (en) * 2020-06-30 2020-09-15 广州市百果园信息技术有限公司 Method, device, server and storage medium for determining video code rate
CN111669627B (en) * 2020-06-30 2022-02-15 广州市百果园信息技术有限公司 Method, device, server and storage medium for determining video code rate
WO2022142642A1 (en) * 2020-12-28 2022-07-07 上海掌门科技有限公司 Method and device for playing back video information
CN112839256A (en) * 2020-12-30 2021-05-25 珠海极海半导体有限公司 Video playing method and device and electronic equipment
WO2022206530A1 (en) * 2021-03-31 2022-10-06 腾讯科技(深圳)有限公司 Multimedia playback method and apparatus, terminal, and storage medium
US11943510B2 (en) 2021-03-31 2024-03-26 Tencent Technology (Shenzhen) Company Ltd Multimedia playback method, apparatus, terminal, and storage medium
CN114969431A (en) * 2021-04-13 2022-08-30 中移互联网有限公司 Image processing method and device and electronic equipment
CN113949933A (en) * 2021-09-30 2022-01-18 卓尔智联(武汉)研究院有限公司 Playing data analysis method, device, equipment and storage medium
CN113949933B (en) * 2021-09-30 2023-08-22 卓尔智联(武汉)研究院有限公司 Playing data analysis method, device, equipment and storage medium
CN113949920A (en) * 2021-12-20 2022-01-18 深圳佑驾创新科技有限公司 Video annotation method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111010619A (en) Method, apparatus, computer device and storage medium for processing short video data
CN111131901B (en) Method, apparatus, computer device and storage medium for processing long video data
Napoli Audience evolution: New technologies and the transformation of media audiences
CN111105819B (en) Clipping template recommendation method and device, electronic equipment and storage medium
EP1920546B1 (en) Enhanced electronic program guides
CN110263189B (en) Media content recommendation method and device, storage medium and computer equipment
US10775968B2 (en) Systems and methods for analyzing visual content items
US20140089801A1 (en) Timestamped commentary system for video content
US20120096088A1 (en) System and method for determining social compatibility
CN110198491B (en) Video sharing method and device
CN107454442B (en) Method and device for recommending video
CN110287372A (en) Label for negative-feedback determines method, video recommendation method and its device
CN109996122B (en) Video recommendation method and device, server and storage medium
CN112507163B (en) Duration prediction model training method, recommendation method, device, equipment and medium
CN111414532B (en) Information recommendation method, equipment and machine-readable storage medium
CN111258484A (en) Video playing method and device, electronic equipment and storage medium
CN110177306A (en) Video broadcasting method, device, mobile terminal and medium based on mobile terminal
US10771856B2 (en) System and method for storing advertising data
CN110297975A (en) Appraisal procedure, device, electronic equipment and the storage medium of Generalization bounds
CN113535991A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN109120996B (en) Video information identification method, storage medium and computer equipment
CN110209944B (en) Stock analyst recommendation method and device, computer equipment and storage medium
CN112052315A (en) Information processing method and device
CN104811464B (en) A kind of information processing method, device and system
CN112073738B (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200414