CN108769822B - Video display method and terminal equipment - Google Patents


Info

Publication number
CN108769822B
CN108769822B (application CN201810329976.1A)
Authority
CN
China
Prior art keywords
video
attribute information
terminal device
tags
clips
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810329976.1A
Other languages
Chinese (zh)
Other versions
CN108769822A (en)
Inventor
崔玉路
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810329976.1A
Publication of CN108769822A
Application granted
Publication of CN108769822B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/8405: Generation or processing of descriptive data, e.g. content descriptors, represented by keywords

Abstract

The embodiment of the invention discloses a video display method and a terminal device, relates to the technical field of terminals, and can solve the problem that searching for certain video segments takes a user a long time. The specific scheme is as follows: receiving a first input of a user under the condition that M tags are displayed on a current interface of the terminal device, where the first input is a selection input of the user on N tags among the M tags, M and N are positive integers, N is less than or equal to M, and each of the M tags is used for indicating at least one video segment in a first video; in response to the first input, determining the N tags from the M tags, and acquiring at least one video segment corresponding to each of the N tags from the first video to obtain X video segments, where X is an integer and X is not less than N; and displaying P video segments among the X video segments on the current interface of the terminal device, where P is a positive integer and P is less than or equal to X. The embodiment of the invention can be applied to the process of displaying video segments.

Description

Video display method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to a video display method and terminal equipment.
Background
In the process of playing videos on a terminal device, more and more users like to publish their own comments while watching a video and to communicate and interact with other viewers through these comments, which makes video playing more interesting and interactive.
Through a video bullet screen technology, the terminal device can display, in bullet screen form and in real time on a video playing interface, the comment content published by a user while watching the video. In addition, the terminal device can also display the comment content published by other users on the video playing interface, so that the user can communicate and interact with other users.
However, the text displayed in bullet screen form on the video playing interface corresponds to a video segment. When a user wants to find the video segment corresponding to some bullet screen text, the user has to watch the video from the start time of the whole video and look for that bullet screen text on the video display interface until the corresponding video segment is found. As a result, finding the video segment corresponding to some bullet screen text takes the user a long time.
Disclosure of Invention
The embodiment of the invention provides a video display method and a terminal device, which can solve the problem that searching for certain video segments required by a user takes a long time.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
in a first aspect of the embodiments of the present invention, a video display method is provided, where the video display method may include: receiving a first input of a user under the condition that M labels are displayed on a current interface of terminal equipment, wherein the first input is the selection input of the user on N labels in the M labels, M and N are positive integers, N is less than or equal to M, and each label in the M labels is used for indicating at least one video segment in a first video; responding to a first input, determining N tags from the M tags, and acquiring at least one video segment corresponding to each tag in the N tags from a first video to obtain X video segments, wherein X is an integer and is not less than N; and displaying P video clips in the X video clips on a current interface of the terminal equipment, wherein P is a positive integer and is less than or equal to X.
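As a concrete illustration of this first aspect, the following Python sketch models tags and segments in memory. The `Tag` class, the function name, and the index-based segment references are hypothetical, introduced only to trace the M/N/X/P relationships, and are not part of the claimed method:

```python
from dataclasses import dataclass

# Hypothetical in-memory model: each tag indicates at least one video
# segment in the first video (segments referenced here by index).
@dataclass
class Tag:
    name: str
    segment_ids: list

def select_and_display(m_tags, selected, p):
    """Sketch of the first aspect: from the user's first input (the indices
    of the N selected tags), gather the X indicated segments and return
    the P segments to display (P <= X)."""
    n_tags = [m_tags[i] for i in selected]      # N <= M selected tags
    x_segments = []                             # X >= N distinct segments
    for tag in n_tags:
        for seg in tag.segment_ids:
            if seg not in x_segments:
                x_segments.append(seg)
    return x_segments[:p]

m_tags = [Tag("tag 1", [0, 3]), Tag("tag 2", [1]), Tag("tag 3", [2, 4])]
shown = select_and_display(m_tags, selected=[0, 2], p=3)
# shown == [0, 3, 2]
```

Here N = 2 selected tags yield X = 4 segments, of which P = 3 are displayed, matching the constraints N <= M, X >= N, P <= X.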
In a second aspect of the embodiments of the present invention, there is provided a terminal device, including: the device comprises a receiving unit, an acquisition unit and a display unit.
The receiving unit is used for receiving a first input of a user under the condition that M labels are displayed on a current interface of the terminal device, wherein the first input is a selection input of the user on N labels in the M labels, M and N are positive integers, N is less than or equal to M, and each label in the M labels is used for indicating at least one video segment in the first video. The acquisition unit is used for responding to the first input received by the receiving unit, determining N labels from the M labels, and acquiring at least one video segment corresponding to each label in the N labels from the first video to obtain X video segments, wherein X is an integer and is larger than or equal to N. And the display unit is used for displaying P video clips in the X video clips acquired by the acquisition unit on a current interface of the terminal equipment, wherein P is a positive integer and is less than or equal to X.
In a third aspect of the embodiments of the present invention, a terminal device is provided, where the terminal device includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, and the computer program, when executed by the processor, implements the steps of the video display method according to the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the video display method according to the first aspect.
In the embodiment of the present invention, the terminal device may display P video clips on the current interface of the terminal device according to a first input (a selection input of the user on N of the M tags) of the user. The terminal equipment can directly display the P video clips in the X video clips on the current interface of the terminal equipment according to the N tags selected by the user, and the user does not need to blindly search the video clips in the whole first video, so that the time consumption of the user in the process of searching the video clips is shortened.
Drawings
Fig. 1 is a first schematic diagram of an example interface of a mobile phone according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an architecture of an android operating system according to an embodiment of the present invention;
fig. 3 is a first flowchart of a video display method according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of an example interface of a mobile phone according to an embodiment of the present invention;
fig. 5 is a second flowchart of a video display method according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating an example of time information according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an example of a display template according to an embodiment of the present invention;
fig. 8 is a third schematic view of an example interface of a mobile phone according to an embodiment of the present invention;
fig. 9 is a first schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 10 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of embodiments of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs. In the description of the embodiments of the present invention, the meaning of "a plurality" means two or more unless otherwise specified.
The term "and/or" herein describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The following explains some concepts related to the video display method and the terminal device provided by the embodiments of the present invention.
Bullet screen characters: characters displayed on a video playing interface of the terminal device in the form of a bullet screen.
For example, as shown in fig. 1, a terminal device is taken as a mobile phone for description. Assuming that a video currently played by the mobile phone 10 is a video 1, a current interface of the mobile phone 10 is a video playing interface 11, and the mobile phone 10 displays comment contents of the user in a bullet screen form on the video playing interface 11, where the comment contents displayed in the bullet screen form are referred to as bullet screen characters 12.
It can be understood that, in the embodiment of the present invention, the comment content of the user may be a text, an expression, a picture, some special symbols, and the like.
Bullet screen content: a collective name for at least one bullet screen character; that is, the bullet screen content includes at least one bullet screen character, and each bullet screen character includes a keyword.
Hotspot frequency: the maximum frequency of the keywords in the bullet screen content.
For example, assume that the currently played video is video 1, and that a first bullet screen content corresponding to a first time period of video 1 (e.g., the time period from the 30th minute to the 31st minute) includes 5 bullet screen characters whose keywords are "A", "B", "A", "A", and "C", respectively. Among these keywords, the keyword "A" occurs the most times, so the maximum frequency of the keyword "A" in the first bullet screen content is 3/5, that is, the hotspot frequency is 3/5.
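The hotspot frequency defined above can be sketched in Python; the function name and the list-of-keywords input format are assumptions for illustration only:

```python
from collections import Counter

def hotspot_frequency(keywords):
    """Maximum frequency of any keyword in a bullet screen content:
    count of the most common keyword divided by the total keyword count."""
    counts = Counter(keywords)
    return max(counts.values()) / len(keywords)

# The worked example: keyword "A" appears 3 times among 5 bullet screen
# characters, giving a hotspot frequency of 3/5.
freq = hotspot_frequency(["A", "B", "A", "A", "C"])
# freq == 0.6
```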
Currently, in the prior art, when a user searches for certain video segments that the user needs (e.g., video segments corresponding to some bullet screen characters), the user has to watch the video from the start time of the whole video and look for the bullet screen characters on the video display interface until the video segments corresponding to those bullet screen characters are found, so searching for the needed video segments takes the user a long time.
In order to solve the problem that in the prior art, when a user searches for some video clips required by the user, the time consumption is long, embodiments of the present invention provide a video display method and a terminal device, where the terminal device may display P video clips on a current interface of the terminal device according to a first input of the user (a selection input of the user on N of M tags). The terminal equipment can directly display the P video clips in the X video clips on the current interface of the terminal equipment according to the N tags selected by the user, and the user does not need to search the video clips in the whole first video blindly, so that the time consumption of the user in the process of searching the video clips can be shortened.
The embodiment of the invention provides a video display method and terminal equipment, which can be applied to a process of displaying video clips. In particular, the method can be applied to the process that the terminal equipment displays at least one video clip on the interface of the terminal equipment according to the input of the user.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment applied to the video display method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 2 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 2, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the video display method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 2, so that the video display method may operate based on the android operating system shown in fig. 2. Namely, the processor or the terminal device can implement the video display method provided by the embodiment of the invention by running the software program in the android operating system.
Fig. 3 illustrates a video display method provided in an embodiment of the present invention, which may be applied to a terminal device having the android operating system illustrated in fig. 2. As shown in fig. 3, the video display method includes steps 301 to 303 described below.
Step 301, under the condition that M tags are displayed on the current interface of the terminal device, the terminal device receives a first input of a user.
In the embodiment of the invention, the first input is the selection input of N labels in M labels by a user, M and N are positive integers, and N is less than or equal to M; each of the M tags is for indicating at least one video segment in the first video.
In the embodiment of the invention, at least one video segment indicated by each label is a video segment in the first video.
Optionally, in this embodiment of the present invention, the at least one video segment indicated by each tag may be a video segment in a first video currently played by the terminal device.
Optionally, in the embodiment of the present invention, the terminal device currently plays the first video, a "barrage video selection" icon is displayed on a current video playing interface of the terminal device, and a user may operate (for example, click/press operation, etc.) on the "barrage video selection" icon to trigger the terminal device to display M tags on the current interface; after the terminal device displays the M tags on the current interface, the user may select N tags of the M tags.
Optionally, in the embodiment of the present invention, the selection input of the user on the N tags may be an operation of clicking, long-pressing, or double-clicking the N tags.
Illustratively, assume the terminal device is currently playing a first video. The current interface of the terminal device displays 5 (M = 5) tags (tag 1, tag 2, tag 3, tag 4, and tag 5, respectively), and assume that each tag is used to indicate one video segment in the first video, e.g., tag 1 is used to indicate video segment 1, tag 2 is used to indicate video segment 2, tag 3 is used to indicate video segment 3, tag 4 is used to indicate video segment 4, and tag 5 is used to indicate video segment 5, where video segments 1 to 5 are all video segments in the first video.
For example, the terminal device is taken as a mobile phone for explanation. As shown in (A) of fig. 4, the current interface of the mobile phone 10 is a video playing interface 13, on which a "barrage video selection" icon 14, a plurality of bullet screen characters 12, a video playing progress bar, and the like are displayed. After the user clicks the "barrage video selection" icon 14, as shown in (B) of fig. 4, the current interface of the mobile phone 10 is an interface 15 on which 5 (M = 5) tags are displayed, namely tag 1, tag 2, tag 3, tag 4, and tag 5; the user selects 3 (N = 3) of the 5 tags (e.g., tag 1, tag 3, and tag 4) on the interface 15.
Optionally, in the embodiment of the present invention, the user may also input the keyword on the current interface of the terminal device, so that the terminal device searches for the corresponding tag according to the keyword, to determine the N tags.
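A minimal sketch of this keyword-based tag lookup; the case-insensitive substring match is an assumption, since the embodiment does not specify the matching rule:

```python
def find_tags_by_keyword(m_tags, query):
    """Hypothetical keyword search over the displayed tags: a simple
    case-insensitive substring match determines the matching tags."""
    return [tag for tag in m_tags if query.lower() in tag.lower()]

hits = find_tags_by_keyword(["funny moment", "fight scene", "theme song"], "Fun")
# hits == ["funny moment"]
```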
Optionally, in the embodiment of the present invention, before the step 301, the video display method provided in the embodiment of the present invention further includes the following steps 401 to 403.
Step 401, the terminal device obtains Q attribute information of the first video according to a preset duration.
Wherein Q is an integer, and Q is more than or equal to M.
In the embodiment of the present invention, the terminal device may divide the first video into Q video segments according to a preset duration, and acquire corresponding attribute information for each of the Q video segments to obtain Q attribute information, where each of the Q attribute information is used to indicate at least one video segment.
Optionally, in this embodiment of the present invention, for each of the Q video segments, one attribute information includes a path of the first video, time information of each of at least one video segment indicated by the attribute information, and a bullet screen content corresponding to the time information of each video segment.
Illustratively, assuming the preset duration is 1 minute, the total duration of the first video is 30 minutes. The terminal device divides the first video into 30 video segments according to a preset duration, and acquires corresponding attribute information for each of the 30 video segments to obtain 30 attribute information.
For example, the exemplary description is given here by taking 3 pieces of attribute information (such as attribute information 1, attribute information 2, and attribute information 3) out of 30 pieces of attribute information, and each piece of attribute information is used to indicate one video clip in the first video. The attribute information 1 is used for indicating a video segment 1, and the attribute information 1 comprises a path of a first video, time information of the video segment 1 and a bullet screen content 1 corresponding to the time information of the video segment 1; the attribute information 2 is used for indicating a video segment 2, and the attribute information 2 comprises a path of the first video, time information of the video segment 2 and a bullet screen content 2 corresponding to the time information of the video segment 2; the attribute information 3 is used to indicate the video segment 3, and the attribute information 3 includes the path of the first video, the time information of the video segment 3, and the bullet screen content 3 corresponding to the time information of the video segment 3.
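The segmentation in step 401 might be sketched as follows; the `AttributeInfo` field names and the `(timestamp, text)` representation of bullet screen characters are illustrative assumptions, not the embodiment's data layout:

```python
from dataclasses import dataclass

@dataclass
class AttributeInfo:
    video_path: str       # path of the first video
    start_s: int          # time information of the indicated video segment
    end_s: int
    bullet_content: list  # bullet screen characters falling in this segment

def build_attribute_info(video_path, total_s, preset_s, bullets):
    """Divide the first video into Q segments of the preset duration and
    attach to each the bullet screen content whose timestamp falls inside it.
    `bullets` is assumed to be a list of (timestamp_s, text) pairs."""
    infos = []
    for start in range(0, total_s, preset_s):
        end = min(start + preset_s, total_s)
        content = [text for t, text in bullets if start <= t < end]
        infos.append(AttributeInfo(video_path, start, end, content))
    return infos

# 30-minute video, 1-minute preset duration -> Q == 30 attribute information entries.
infos = build_attribute_info("/videos/first.mp4", total_s=1800, preset_s=60,
                             bullets=[(5, "hi"), (65, "wow")])
```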
Step 402, for each of the Q pieces of attribute information, the terminal device executes a method shown in S below to obtain Q pieces of tags.
S: the terminal equipment generates a label according to the bullet screen content included in the attribute information.
In the embodiment of the invention, the bullet screen content comprises the keywords, and the terminal equipment can generate the corresponding tags according to the keywords.
Optionally, in the embodiment of the present invention, the tag may be at least one of a keyword, an icon, and a picture. This may be set specifically according to actual use requirements, and is not limited in the embodiment of the present invention.
It can be understood that, in the embodiment of the present invention, in the case that the bullet screen contents included in the attribute information are the same, the terminal device may generate a label according to the same bullet screen contents.
For example, assuming that the bullet screen content 1 included in the attribute information 1 and the bullet screen content 3 included in the attribute information 3 are the same, the terminal device may generate a label from the bullet screen content 1 and the bullet screen content 3.
For example, here, taking a case where the bullet screen content included in each attribute information is different as an example, an exemplary description will be given of generating a tag according to the bullet screen content. Assuming that the above-mentioned one attribute information is attribute information 1, the terminal device may generate a tag 1 from the bullet-screen content 1 included in the attribute information 1.
It can be understood that, in the embodiment of the present invention, the terminal device may generate the tag 2 according to the bullet screen content 2 included in the attribute information 2, generate the tag 3 according to the bullet screen content 3 included in the attribute information 3, and so on, to obtain Q tags.
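Step S and the merging case above might be sketched as follows; taking the most frequent word of a bullet screen content as its keyword is an illustrative assumption:

```python
from collections import Counter

def generate_tags(bullet_contents):
    """One tag per attribute information, generated from its bullet screen
    content; attribute information entries whose dominant keyword is
    identical share one tag (the merging case described above)."""
    tags = {}  # tag text -> indices of the attribute information it covers
    for idx, content in enumerate(bullet_contents):
        if not content:              # no bullet screen content, no tag
            continue
        keyword = Counter(content).most_common(1)[0][0]
        tags.setdefault(keyword, []).append(idx)
    return tags

tags = generate_tags([["A", "A", "B"], ["C"], ["A", "A"]])
# tags == {"A": [0, 2], "C": [1]}
```

The first and third attribute information entries share the tag "A", mirroring how attribute information 1 and attribute information 3 with the same bullet screen content yield a single tag.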
And step 403, the terminal device determines M tags meeting preset conditions from the Q tags.
In the embodiment of the present invention, after generating the Q tags, the terminal device detects the bullet screen content included in the attribute information corresponding to each tag to judge whether the tag meets a preset condition.
Optionally, in the embodiment of the present invention, the preset condition may be that the bullet screen content included in the attribute information includes information such as characters, expressions, and pictures. This may be set specifically according to actual use requirements, and is not limited in the embodiment of the present invention.
For example, let Q be 30. The terminal device detects the bullet screen content included in the attribute information corresponding to each tag, and determines that the bullet screen content 1 included in the attribute information 1 corresponding to the tag 1 includes text information, the bullet screen content 2 included in the attribute information 2 corresponding to the tag 2 includes text information, the bullet screen content 3 included in the attribute information 3 corresponding to the tag 3 includes text information, the bullet screen content 4 included in the attribute information 4 corresponding to the tag 4 includes text information, and the bullet screen content 5 included in the attribute information 5 corresponding to the tag 5 includes text information, and other tags except the 5 tags do not meet preset conditions (that is, the bullet screen content included in the attribute information corresponding to the tag does not include information such as text, expressions, pictures, and the like).
Optionally, in the embodiment of the present invention, after determining the M tags that meet the preset condition, the terminal device may delete the other tags that do not meet the preset condition except the M tags.
It can be understood that, in the embodiment of the present invention, after determining M tags, the terminal device may store the M tags in the terminal device.
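Step 403 might be sketched as a simple filter; the predicate shown (content containing at least one non-blank character string) is only one possible preset condition:

```python
def filter_tags(tag_to_content, meets_preset_condition):
    """Keep only the M tags whose bullet screen content meets the preset
    condition; the rest are dropped, as the embodiment describes."""
    return {tag: content for tag, content in tag_to_content.items()
            if meets_preset_condition(content)}

# Assumed preset condition: the content contains at least one non-blank string.
m_tags = filter_tags(
    {"tag 1": ["great scene"], "tag 2": [""], "tag 3": ["lol"]},
    lambda content: any(text.strip() for text in content),
)
# m_tags keeps "tag 1" and "tag 3" only
```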
Optionally, in this embodiment of the present invention, one attribute information further includes a hot spot frequency, where the hot spot frequency is a maximum frequency of a keyword in the bullet screen content, and the keyword is a keyword in the bullet screen content included in one attribute information. Accordingly, in conjunction with steps 401 to 403, before step 301 (or after step 403), the video display method provided by the embodiment of the present invention further includes step 501 described below.
Step 501, the terminal device displays M tags on a current interface of the terminal device in a preset manner according to M hot spot frequencies.
Optionally, in the embodiment of the present invention, the preset manner may be sequential display, reverse-order display, enlarged display, color display, highlighted display, or the like. This may be set specifically according to actual use requirements, and is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, the terminal device may determine the magnitude relationship among the M hot spot frequencies and display the M tags in the preset manner according to that magnitude relationship.
For example, assume that M is 5. The keywords in the bullet screen content 1 included in the attribute information 1 are "X1", "X2", "X1", "X1" and "X3", respectively; the keyword "X1" occurs the most times among the keywords, and the maximum frequency of the keyword "X1" in the bullet screen content 1 is 3/5; that is, the hotspot frequency 1 included in the attribute information 1 is 3/5.
The keywords in the bullet screen content 2 included in the attribute information 2 are "X4", "X4", "X5", "X6", "X6" and "X7", respectively, and the maximum frequency of the keyword "X4" (or the keyword "X6") in the bullet screen content 2 is 2/6 = 1/3; that is, the hotspot frequency 2 included in the attribute information 2 is 1/3.
The keywords in the bullet screen content 3 included in the attribute information 3 are "X8", "X8", "X9" and "X8", respectively, and the maximum frequency of the keyword "X8" in the bullet screen content 3 is 3/4; that is, the hotspot frequency 3 included in the attribute information 3 is 3/4.
The keywords in the bullet screen content 4 included in the attribute information 4 are "X9", "X10", "X10", "X10" and "X10", respectively, and the maximum frequency of the keyword "X10" in the bullet screen content 4 is 4/5; that is, the hotspot frequency 4 included in the attribute information 4 is 4/5.
The keywords in the bullet screen content 5 included in the attribute information 5 are "X3", "X5", "X11", "X12", "X13" and "X14", respectively; each keyword occurs only once, so the maximum frequency of the keywords in the bullet screen content 5 is 1/6; that is, the hotspot frequency 5 included in the attribute information 5 is 1/6.
The terminal device may display the 5 tags according to the magnitude relationship of the 5 hotspot frequencies, that is, hotspot frequency 4 > hotspot frequency 3 > hotspot frequency 1 > hotspot frequency 2 > hotspot frequency 5; accordingly, tag 4, tag 3, tag 1, tag 2 and tag 5 are displayed in that order.
In the embodiment of the invention, the terminal equipment can display the M labels on the current interface of the terminal equipment in a preset mode according to the M hot spot frequencies; therefore, the user can be prompted with the bullet screen content of the hot spot (with high frequency), so that the user can conveniently select at least one tag from the M tags.
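The hotspot-frequency computation and display ordering from the worked example can be sketched as follows (helper names are hypothetical; this is a sketch, not the patent's implementation):

```python
from collections import Counter
from fractions import Fraction

def hotspot_frequency(keywords):
    """Maximum frequency of a keyword: occurrences of the most frequent
    keyword divided by the total number of keywords in the bullet content."""
    counts = Counter(keywords)
    return Fraction(max(counts.values()), len(keywords))

# Bullet screen contents 1-5 from the example above.
bullet_contents = {
    1: ["X1", "X2", "X1", "X1", "X3"],
    2: ["X4", "X4", "X5", "X6", "X6", "X7"],
    3: ["X8", "X8", "X9", "X8"],
    4: ["X9", "X10", "X10", "X10", "X10"],
    5: ["X3", "X5", "X11", "X12", "X13", "X14"],
}
freqs = {tag: hotspot_frequency(kw) for tag, kw in bullet_contents.items()}
display_order = sorted(freqs, key=freqs.get, reverse=True)  # hottest tag first
```

`display_order` reproduces the ordering tag 4, tag 3, tag 1, tag 2, tag 5 derived in the example.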
Step 302, the terminal device determines N tags from the M tags in response to the first input, and obtains at least one video segment corresponding to each tag of the N tags from the first video to obtain X video segments.
Wherein X is an integer and is not less than N.
In the embodiment of the present invention, the terminal device may obtain at least one video segment indicated by each of the N tags, so as to obtain X video segments.
Illustratively, assuming that N is 3, the 3 tags selected by the user are tag 1, tag 2 and tag 3, respectively, and the at least one video segment corresponding to tag 1 is 2 video segments (e.g., video segment 1 and video segment 6), the at least one video segment corresponding to tag 2 is 1 video segment (e.g., video segment 2), and the at least one video segment corresponding to tag 3 is 3 video segments (e.g., video segment 3, video segment 7 and video segment 8). The terminal device may obtain, through tag 1, tag 2, and tag 3 selected by the user, the 2 video segments indicated by tag 1, the 1 video segment indicated by tag 2, and the 3 video segments indicated by tag 3, so as to obtain X = 6 video segments (i.e., video segment 1, video segment 6, video segment 2, video segment 3, video segment 7, and video segment 8).
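The gathering of the X segments in step 302 can be sketched as follows (illustrative names; a sketch under the assumption that each tag maps to a list of segments):

```python
def collect_segments(selected_tags, tag_to_segments):
    """Concatenate the video segments indicated by each of the N selected tags."""
    segments = []
    for tag in selected_tags:
        segments.extend(tag_to_segments[tag])
    return segments

# The example above: tag 1 -> 2 segments, tag 2 -> 1, tag 3 -> 3, so X = 6.
tag_to_segments = {1: ["clip1", "clip6"], 2: ["clip2"], 3: ["clip3", "clip7", "clip8"]}
x_segments = collect_segments([1, 2, 3], tag_to_segments)
```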
Optionally, in the embodiment of the present invention, as shown in fig. 5 in combination with fig. 3, the step 302 may be specifically implemented by the following step 302'.
Step 302', for each of the N tags, the following steps 302a and 302b are performed to obtain X video segments.
Step 302a, the terminal device obtains attribute information according to one of the labels.
In the embodiment of the invention, one tag corresponds to one attribute information, and one attribute information is used for indicating at least one video clip.
It can be understood that, in the embodiment of the present invention, for each tag, the terminal device may obtain one piece of attribute information according to that tag, so as to obtain N pieces of attribute information; and for each piece of attribute information, at least one video clip can be obtained according to that attribute information, so as to obtain X video clips.
Illustratively, assume that the labels selected by the user are label 1, label 3, and label 4, respectively, and that label 1 indicates one video segment (e.g., video segment 1), label 3 indicates one video segment (e.g., video segment 3), and label 4 indicates one video segment (e.g., video segment 4). The terminal device may obtain attribute information 1 corresponding to the tag 1, attribute information 3 corresponding to the tag 3, and attribute information 4 corresponding to the tag 4 according to the tag 1, the tag 3, and the tag 4 selected by the user, where the attribute information 1 is used to indicate the video segment 1, the attribute information 3 is used to indicate the video segment 3, and the attribute information 4 is used to indicate the video segment 4.
For example, assume that N is 3. As shown in table 1, an example of a correspondence relationship between a tag, attribute information, and a video clip provided by an embodiment of the present invention is given in a table manner.
TABLE 1
Tag   | Attribute information   | Video clip
------|-------------------------|-------------
Tag 1 | Attribute information 1 | Video clip 1
Tag 3 | Attribute information 3 | Video clip 3
Tag 4 | Attribute information 4 | Video clip 4
Referring to table 1, the terminal device obtains attribute information 1 according to the tag 1, where the attribute information 1 is used to indicate a video clip 1; the terminal equipment acquires attribute information 3 according to the label 3, wherein the attribute information 3 is used for indicating the video clip 3; the terminal device obtains attribute information 4 according to the tag 4, and the attribute information 4 is used for indicating the video clip 4.
Optionally, in this embodiment of the present invention, one attribute information includes a path of the first video, time information of each of at least one video segment indicated by the one attribute information, and barrage content corresponding to the time information of each video segment.
Optionally, in this embodiment of the present invention, the time information of one video segment in each video segment may be used to indicate a time period occupied by the one video segment in the first video, where the time period includes a start time and an end time of the one video segment; the bullet screen content corresponding to the time information of the video clip comprises at least one bullet screen character in the time period, and each bullet screen character comprises a keyword.
It can be understood that, in the embodiment of the present invention, the bullet screen content corresponding to the time information of each video clip included in one attribute information is the same. For example, it is assumed that at least one video segment indicated by the attribute information 1 is the video segment 1 and the video segment 6, the time information of the video segment 1 is time period 1, and the time information of the video segment 6 is time period 6; then, the bullet screen content 1 corresponding to the time period 1 is the same as the bullet screen content 6 corresponding to the time period 6.
Illustratively, it is assumed that at least one video clip indicated by each piece of attribute information is one video clip, and N (N = 3) pieces of attribute information acquired by the terminal device are attribute information 1, attribute information 3, and attribute information 4, respectively. As shown in table 2, an example of a corresponding relationship between a path of a first video, time information of a video segment, and bullet screen content corresponding to the time information is given in a table manner.
TABLE 2
Attribute information   | Path of the first video | Time information | Bullet screen content
------------------------|-------------------------|------------------|------------------------
Attribute information 1 | D:\shipin1              | Time period 1    | Bullet screen content 1
Attribute information 3 | D:\shipin1              | Time period 3    | Bullet screen content 3
Attribute information 4 | D:\shipin1              | Time period 4    | Bullet screen content 4
Referring to table 2, the path of the first video included in the attribute information 1 is D:\shipin1, the time information of the video clip 1 is time period 1, and the bullet screen content corresponding to the time period 1 is bullet screen content 1; the path of the first video included in the attribute information 3 is D:\shipin1, the time information of the video clip 3 is time period 3, and the bullet screen content corresponding to the time period 3 is bullet screen content 3; the path of the first video included in the attribute information 4 is D:\shipin1, the time information of the video clip 4 is time period 4, and the bullet screen content corresponding to the time period 4 is bullet screen content 4.
For example, as shown in fig. 6, the total duration of the first video is 30 minutes, i.e., the playing time of the first video begins at 00:00 and ends at 30:00; the time period 1 occupied by the video segment 1 in the first video is the time period from 08:00 to 09:00, the starting time of the video segment 1 is 08:00, and the ending time of the video segment 1 is 09:00; the time period 3 occupied by the video segment 3 in the first video is the time period from 15:00 to 16:00, the starting time of the video segment 3 is 15:00, and the ending time is 16:00; the time period 4 occupied by the video segment 4 in the first video is the time period from 25:00 to 26:00, the starting time of the video segment 4 is 25:00, and the ending time is 26:00.
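The contents of one piece of attribute information (Table 2 plus the time periods of fig. 6) can be modeled roughly like this (a sketch; the class and field names are assumptions, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class TimePeriod:
    start: int  # start time, in seconds from the start of the first video
    end: int    # end time, also in seconds

@dataclass
class AttributeInfo:
    video_path: str       # path of the first video
    periods: list         # time information of each indicated video segment
    bullet_content: list  # barrage characters shared by those segments

# Attribute information 1: period 1 of fig. 6 runs from 08:00 to 09:00.
info1 = AttributeInfo(r"D:\shipin1", [TimePeriod(8 * 60, 9 * 60)],
                      ["X1", "X2", "X1", "X1", "X3"])
segment_length = info1.periods[0].end - info1.periods[0].start  # seconds
```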
Step 302b, the terminal device obtains at least one video clip according to one attribute information.
Illustratively, in conjunction with table 1, the terminal device may obtain video clip 1 according to attribute information 1, video clip 3 according to attribute information 3, and video clip 4 according to attribute information 4.
Optionally, in this embodiment of the present invention, the step 302b may be specifically implemented by the following step 302 b' and step 302b ″.
Step 302 b', the terminal device obtains the first video according to the path of the first video included in one attribute information in each attribute information.
For example, with reference to table 2, description will be given taking the attribute information 1 as the one attribute information of each attribute information. The terminal device may retrieve the first video according to the path D:\shipin1 of the first video included in the attribute information 1.
Step 302b ", the terminal device obtains at least one video clip indicated by at least one time information included in one attribute information from the first video according to at least one time information included in one attribute information.
Wherein one time information corresponds to one video clip.
Optionally, in this embodiment of the present invention, the terminal device intercepts, from the first video, a video segment indicated by each piece of time information in at least one piece of time information according to the at least one piece of time information included in one piece of attribute information.
For example, it is assumed that one of the attribute information in the step 302b ″ is attribute information 1, where the attribute information 1 includes a time period 1 and a time period 6, a video segment corresponding to the time period 1 is a video segment 1, and a video segment corresponding to the time period 6 is a video segment 6. The terminal device may intercept the video segment 1 indicated by the time period 1 from the first video according to the time period 1, and intercept the video segment 6 indicated by the time period 6 from the first video according to the time period 6 to obtain the video segment 1 and the video segment 6.
Exemplarily, it is assumed that at least one time information included in the attribute information 1 is the time period 1. The terminal device may intercept the video segment 1 indicated by the period of 08:00 to 09:00 from the first video according to the time period 1 shown in fig. 6 (i.e., the period of 08:00 to 09:00).
It can be understood that, in the embodiment of the present invention, the terminal device may intercept, from the first video, the video segment 3 indicated in the time period 3 according to the time information (e.g., the time period 3) included in the attribute information 3, and the terminal device may intercept, from the first video, the video segment 4 indicated in the time period 4 according to the time information (e.g., the time period 4) included in the attribute information 4.
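On a real file, the interception of steps 302b' and 302b'' could be delegated to a tool such as ffmpeg (an assumption; the patent names no tool). This sketch only builds the command lines, where `-ss` seeks to the start time and `-t` limits the copied duration:

```python
def cut_commands(video_path, periods):
    """Build one ffmpeg stream-copy command per (start, end) time period,
    with times given in seconds; the output file names are illustrative."""
    cmds = []
    for i, (start, end) in enumerate(periods, 1):
        cmds.append([
            "ffmpeg", "-ss", str(start), "-i", video_path,
            "-t", str(end - start), "-c", "copy", f"segment_{i}.mp4",
        ])
    return cmds

# Time periods 1 and 3 of fig. 6: 08:00-09:00 and 15:00-16:00, in seconds.
cmds = cut_commands(r"D:\shipin1", [(480, 540), (900, 960)])
```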
Step 303, the terminal device displays P video clips of the X video clips on the current interface of the terminal device.
Wherein P is a positive integer and is less than or equal to X.
Optionally, in this embodiment of the present invention, in a case where P = X, the terminal device may display X video segments on a current interface of the terminal device; in the case that P < X, the terminal device may select P video clips from the X video clips and display the P video clips on a current interface of the terminal device.
Exemplarily, it is assumed that X is 3 and P is 2. The terminal device can randomly select the video segment 1 and the video segment 4 from the video segment 1, the video segment 3 and the video segment 4, and display the video segment 1 and the video segment 4 on a current interface of the terminal device.
Optionally, in the embodiment of the present invention, before the step 303, the video display method provided in the embodiment of the present invention may further include the following step 601 and step 602, and the step 303 may be specifically implemented by the following step 303 a.
Step 601, the terminal device obtains a target display template according to the P video clips.
In the embodiment of the present invention, the terminal device may first obtain a corresponding target display template according to information of the P video segments (e.g., the number of videos and/or the size of the videos), and then display the P video segments on the current interface based on the target display template.
For example, the terminal device may select a corresponding target display template from display templates pre-stored in the terminal device according to the number of the P video segments, or the terminal device may obtain the corresponding target display template from the server side by using a mobile network.
Step 602, the terminal device performs composition processing on the P video segments according to the target display template to obtain P video segments after the composition processing.
In the embodiment of the invention, the terminal device can synthesize the P video clips according to the number of the P video clips and the target display template, so that the P video clips can be displayed on one display interface.
Step 303a, the terminal device displays the P video clips after the composition processing on the current interface of the terminal device.
For example, assuming that the P video segments acquired by the terminal device are 3 video segments, as shown in (A) in fig. 7, the terminal device may display the video segment 1, the video segment 3 and the video segment 4 after the composition processing in the target display template 16. The position relationship of the three video segments when displayed in the target display template 16 may be determined (e.g., randomly) by the terminal device, and may be specifically set according to actual use requirements, which is not limited in the embodiment of the present invention. Assuming that the P video segments acquired by the terminal device are 2 video segments, as shown in (B) of fig. 7, the terminal device may display the video segment 1 and the video segment 4 after the composition processing in the target display template 17, where the position relationship of the video segment 1 and the video segment 4 when displayed in the target display template 17 may likewise be determined (e.g., randomly) by the terminal device.
In the embodiment of the invention, the terminal equipment can display the P video clips after the synthesis processing on the current interface of the terminal equipment, so that the P video clips can be displayed on one display interface, and the use of the P video clips by a user is facilitated.
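Steps 601, 602 and 303a can be sketched as choosing a grid template by clip count and assigning each clip a cell (the concrete layouts are illustrative assumptions; the patent leaves the templates open):

```python
def target_template(num_clips):
    """Pick a display template, expressed here as (rows, cols), by the
    number of clips; unknown counts fall back to a single column."""
    layouts = {1: (1, 1), 2: (2, 1), 3: (3, 1), 4: (2, 2)}
    return layouts.get(num_clips, (num_clips, 1))

def compose(clips):
    """Assign each clip a (row, col) cell in the template, row-major."""
    rows, cols = target_template(len(clips))
    return [(clip, divmod(i, cols)) for i, clip in enumerate(clips)]

layout = compose(["clip1", "clip3", "clip4"])  # three clips, as in fig. 7 (A)
```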
Optionally, in this embodiment of the present invention, when one attribute information further includes a hotspot frequency, the step 303 may be specifically implemented by the following steps 303b and 303 c.
Step 303b, the terminal device determines P video segments from the X video segments according to the N hot spot frequencies.
Optionally, in the embodiment of the present invention, the terminal device may select, according to a magnitude relationship of each hotspot frequency in the X hotspot frequencies, P video clips with higher hotspot frequencies from the X video clips.
For example, suppose the X (X = 3) hot spot frequencies are respectively hot spot frequency 1 of 3/5, hot spot frequency 3 of 3/4, and hot spot frequency 4 of 4/5. The terminal device determines the magnitude relationship between the three hot spot frequencies (i.e., hotspot frequency 4 > hotspot frequency 3 > hotspot frequency 1), and selects the video clip 4 corresponding to the hotspot frequency 4 and the video clip 3 corresponding to the hotspot frequency 3 from the video clips 1, 3 and 4.
And step 303c, the terminal device displays the P video clips on the current interface of the terminal device.
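The selection in step 303b amounts to a sort by hotspot frequency followed by truncation, sketched here (hypothetical names; a sketch, not the patent's implementation):

```python
def top_p_clips(clip_freqs, p):
    """Return the P video clips with the highest hotspot frequencies."""
    return sorted(clip_freqs, key=clip_freqs.get, reverse=True)[:p]

# Hotspot frequencies from the example: clip 4 (4/5) > clip 3 (3/4) > clip 1 (3/5).
clip_freqs = {"clip1": 3 / 5, "clip3": 3 / 4, "clip4": 4 / 5}
chosen = top_p_clips(clip_freqs, 2)
```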
The embodiment of the invention provides a video display method, and a terminal device can display P video clips on a current interface of the terminal device according to a first input (a selection input of a user to N tags in M tags). The terminal equipment can directly display the P video clips in the X video clips on the current interface of the terminal equipment according to the N tags selected by the user, and the user does not need to blindly search the video clips in the whole first video, so that the time consumption of the user in the process of searching the video clips is shortened.
Furthermore, in the embodiment of the present invention, because each of the M tags is generated according to the bullet screen content included in the attribute information of the first video, when a user wants to search for a video segment corresponding to some bullet screen content, the user can directly select N tags of the M tags on the terminal device, so that P video segments of the X video segments are displayed on the current interface of the terminal device, and the user is not required to blindly search for the video segments corresponding to the bullet screen content in the entire first video, thereby reducing the time consumption of the user in the process of searching for the video segments.
Optionally, in the embodiment of the present invention, after the step 303, the video display method provided in the embodiment of the present invention may further include the following step 701.
Step 701, for each video segment in the P video segments, the terminal device displays the bullet screen content corresponding to one video segment on one video segment.
In the embodiment of the invention, the terminal equipment can display the barrage content corresponding to one video clip on the video clip in an overlapping manner.
Illustratively, taking the terminal device as a mobile phone as an example, it is assumed that the mobile phone displays the video clip 1 and the video clip 4 by using the display template 17. As shown in fig. 8, the mobile phone 10 displays the barrage content 1 on the video segment 1 in an overlapping manner, where the barrage content 1 includes 5 barrage characters, such as the barrage characters "X1", "X2", "X1", "X1" and "X3"; the mobile phone 10 displays the barrage content 4 on the video segment 4 in an overlapping manner, and the barrage content 4 includes 5 barrage characters, such as the barrage characters "X9", "X10", "X10", "X10" and "X10".
Because the terminal device can display the corresponding barrage content superimposed on each of the P video clips after displaying the P video clips, the user can conveniently view the barrage content corresponding to each video clip, which increases the interest for the user.
Further, the user may set, through the terminal device, a font color, a font size, and the like of the text of the bullet screen content displayed on the current interface of the terminal device, and may also trigger the terminal device to add new bullet screen content.
Optionally, in the embodiment of the present invention, after the step 303, the video display method provided in the embodiment of the present invention further includes the following steps 801 and 802.
Step 801, the terminal device receives a second input of the user.
Wherein the second input is used for triggering the terminal device to add an element to at least one of the P video segments.
Optionally, in the embodiment of the present invention, the user may trigger the terminal device to add background music, add pictures, perform filter processing, and the like to at least one of the P video segments.
For example, the user may click on the "edit" icon on the interface shown in fig. 8, then select a video clip for which an element needs to be added, and make a second input (e.g., the user selects favorite background music) with respect to the selected video clip to trigger the cell phone 10 to add an element to the selected video clip.
Step 802, the terminal device responds to a second input and adds an element to at least one video clip.
Because the terminal device can add elements to at least one video clip, secondary editing of the at least one video clip is realized, which further increases the interest for the user.
Furthermore, after the user adds an element to at least one video clip at the terminal device, the terminal device may be triggered to store the video clip after the element is added; and the user can also trigger the terminal equipment to send the video clip added with the element to other terminal equipment.
In a second embodiment of the present invention, fig. 9 shows a schematic diagram of a possible structure of a terminal device involved in the embodiment of the present invention, and as shown in fig. 9, the terminal device 90 may include: a receiving unit 91, an acquisition unit 92, and a display unit 93.
The receiving unit 91 may be configured to receive a first input of a user when the current interface of the terminal device 90 displays M tags, where the first input is a selection input of the user to N tags of the M tags, M and N are positive integers, and N is less than or equal to M, and each tag of the M tags is used to indicate at least one video segment in the first video. The obtaining unit 92 may be configured to determine N tags from the M tags in response to the first input received by the receiving unit 91, and obtain at least one video segment corresponding to each tag of the N tags from the first video to obtain X video segments, where X is an integer and X ≧ N. The display unit 93 may be configured to display, on the current interface of the terminal device 90, P video segments of the X video segments acquired by the acquisition unit 92, where P is a positive integer and P is less than or equal to X.
In a possible implementation manner, the obtaining unit 92 may specifically be configured to: for each of the N tags, the following method is performed to obtain X video segments: acquiring attribute information according to one label in each label, wherein the attribute information is used for indicating at least one video clip; and acquiring at least one video clip according to the attribute information.
In one possible implementation, the one attribute information may include a path of the first video, time information of each of at least one video segment indicated by the one attribute information, and barrage content corresponding to the time information of each video segment. Correspondingly, the obtaining unit 92 may be further configured to obtain Q attribute information of the first video according to a preset time duration before the receiving unit 91 receives the first input of the user, where Q is an integer and Q is greater than or equal to M. As shown in fig. 10, the terminal device 90 in the embodiment of the present invention may further include: a first processing unit 94 and a determination unit 95. The first processing unit 94 may be configured to execute, for each of the Q pieces of attribute information acquired by the acquiring unit 92, the following step S to obtain Q tags. Step S: generating a tag according to the bullet screen content included in the one piece of attribute information. The determining unit 95 may be configured to determine M tags meeting a preset condition from the Q tags obtained by the first processing unit 94.
In a possible implementation manner, one attribute information may further include a hotspot frequency, where the hotspot frequency is the maximum frequency of a keyword in the bullet screen content, and the keyword is a keyword in the bullet screen content included in one attribute information. Correspondingly, the display unit 93 may be further configured to display M tags on the current interface of the terminal device 90 in a preset manner according to the M hot spot frequencies.
In a possible implementation manner, the obtaining unit 92 may specifically be configured to: acquiring a first video according to a path of the first video included in one attribute information of each attribute information; and acquiring at least one video clip indicated by at least one piece of time information included in one piece of attribute information from the first video according to at least one piece of time information included in one piece of attribute information, wherein one piece of time information corresponds to one video clip.
In a possible implementation manner, the display unit 93 may specifically be configured to: determining P video clips from the X video clips according to the N hot spot frequencies; p video clips are displayed on the current interface of the terminal device 90.
In a possible implementation manner, the display unit 93 may be further configured to, after P video segments of the X video segments are displayed on the current interface of the terminal device 90, display, on one video segment, the barrage content corresponding to one video segment for each of the P video segments.
In a possible implementation manner, the obtaining unit 92 may be further configured to obtain the target display template according to P video segments before the display unit 93 displays P video segments of the X video segments on the current interface of the terminal device 90. Correspondingly, the terminal device 90 in the embodiment of the present invention may further include: a second processing unit. The second processing unit may be configured to perform synthesis processing on the P video segments according to the target display template acquired by the acquiring unit 92, so as to obtain P video segments after the synthesis processing. The display unit 93 may be specifically configured to: and displaying the P video clips obtained by the second processing unit after the synthesis processing on the current interface of the terminal device 90.
In a possible implementation manner, the receiving unit 91 may be further configured to receive a second input of the user after the displaying unit 93 displays P video segments of the X video segments on the current interface of the terminal device 90, where the second input is used to trigger the terminal device 90 to add an element to at least one video segment of the P video segments. The terminal device 90 in the embodiment of the present invention may further include: and adding a unit. Wherein the adding unit may be configured to add an element to the at least one video segment in response to the second input received by the receiving unit 91.
The terminal device 90 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and for avoiding repetition, detailed description is not repeated here.
The embodiment of the invention provides a terminal device, which can display P video clips on a current interface of the terminal device according to a first input (a selection input of a user to N tags in M tags) of the user. The terminal equipment can directly display the P video clips in the X video clips on the current interface of the terminal equipment according to the N tags selected by the user, and the user does not need to blindly search the video clips in the whole first video, so that the time consumption of the user in the process of searching the video clips is shortened.
In a third embodiment of the present invention, fig. 11 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 11, the terminal device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111.
It should be noted that, as will be understood by those skilled in the art, the terminal device structure shown in fig. 11 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than those shown, combine some components, or arrange the components differently. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input of a user when M tags are displayed on a current interface of the terminal device, where the first input is a selection input of the user on N tags of the M tags, M and N are positive integers, and N is not greater than M, and each tag of the M tags is used to indicate at least one video segment in the first video.
A processor 110, configured to determine N tags from the M tags in response to a first input received by the user input unit 107, and obtain at least one video segment corresponding to each tag of the N tags from the first video to obtain X video segments, where X is an integer and X is greater than or equal to N; and displaying P video clips in the X video clips on a current interface of the terminal equipment, wherein P is a positive integer and is less than or equal to X.
The embodiment of the invention provides a terminal device, which can display P video clips on a current interface of the terminal device according to a first input (a selection input of a user to N tags in M tags) of the user. The terminal equipment can directly display the P video clips in the X video clips on the current interface of the terminal equipment according to the N tags selected by the user, and the user does not need to blindly search the video clips in the whole first video, so that the time consumption of the user in the process of searching the video clips is shortened.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, after receiving downlink data from a base station, it passes the downlink data to the processor 110 for processing, and it also sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 102, for example, helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.), and includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used to identify the posture of the terminal device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensors 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device; this is not limited here.
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the terminal device (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device: it connects the various parts of the entire terminal device using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It should be understood that the modem processor may also not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component. Preferably, the power supply 111 is logically connected to the processor 110 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, including the processor 110 shown in fig. 11, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110. When executed by the processor 110, the computer program implements each process of the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive; it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A method for video display, the method comprising:
receiving a first input of a user in a case that M tags are displayed on a current interface of a terminal device, wherein the first input is a selection input of the user on N tags of the M tags, M and N are positive integers, N is less than or equal to M, and each tag of the M tags is used to indicate at least one video clip in a first video;
in response to the first input, determining the N tags from the M tags, and acquiring the at least one video clip corresponding to each tag of the N tags from the first video to obtain X video clips, wherein X is an integer and X is greater than or equal to N; and
displaying P video clips of the X video clips on the current interface of the terminal device, wherein P is a positive integer and P is less than or equal to X;
wherein the acquiring the at least one video clip corresponding to each tag of the N tags from the first video to obtain the X video clips comprises:
for each tag of the N tags, performing the following to obtain the X video clips:
acquiring one piece of attribute information according to the tag, wherein the one piece of attribute information is used to indicate the at least one video clip; and
acquiring the at least one video clip according to the one piece of attribute information;
wherein one piece of attribute information comprises a path of the first video, time information of each video clip of the at least one video clip indicated by the one piece of attribute information, and bullet screen content corresponding to the time information of each video clip.
2. The method according to claim 1, wherein
before the receiving the first input of the user, the method further comprises:
acquiring Q pieces of attribute information of the first video according to a preset duration, wherein Q is an integer and Q is greater than or equal to M;
for each piece of attribute information of the Q pieces of attribute information, performing the method shown in S to obtain Q tags:
S: generating one tag according to the bullet screen content included in the piece of attribute information; and
determining, from the Q tags, the M tags meeting a preset condition;
wherein the acquiring the Q pieces of attribute information of the first video according to the preset duration comprises:
dividing the first video into Q video clips according to the preset duration, and acquiring corresponding attribute information for each of the Q video clips to obtain the Q pieces of attribute information.
3. The method according to claim 1 or 2, wherein one piece of attribute information further comprises a hotspot frequency, the hotspot frequency being the highest frequency of occurrence of a keyword in the bullet screen content included in the one piece of attribute information;
the method further comprising:
displaying the M tags on the current interface of the terminal device in a preset manner according to M hotspot frequencies.
4. The method according to claim 2, wherein the acquiring the at least one video clip according to the one piece of attribute information comprises:
acquiring the first video according to the path of the first video included in the one piece of attribute information; and
acquiring, from the first video, the at least one video clip indicated by at least one piece of time information included in the one piece of attribute information, wherein one piece of time information corresponds to one video clip.
5. The method according to claim 3, wherein the displaying P video clips of the X video clips on the current interface of the terminal device comprises:
determining the P video clips from the X video clips according to N hotspot frequencies; and
displaying the P video clips on the current interface of the terminal device.
6. The method according to claim 1, wherein after the displaying P video clips of the X video clips on the current interface of the terminal device, the method further comprises:
for each video clip of the P video clips, displaying, on the video clip, the bullet screen content corresponding to the video clip.
7. The method according to claim 1, wherein before the displaying P video clips of the X video clips on the current interface of the terminal device, the method further comprises:
acquiring a target display template according to the P video clips; and
synthesizing the P video clips according to the target display template to obtain the P video clips after synthesis processing;
wherein the displaying P video clips of the X video clips on the current interface of the terminal device comprises:
displaying the P video clips after the synthesis processing on the current interface of the terminal device.
8. The method according to claim 1 or 7, wherein after the displaying P video clips of the X video clips on the current interface of the terminal device, the method further comprises:
receiving a second input of the user, wherein the second input is used to trigger the terminal device to add an element to at least one video clip of the P video clips; and
in response to the second input, adding the element to the at least one video clip.
9. A terminal device, characterized in that the terminal device comprises:
a receiving unit, configured to receive a first input of a user in a case that M tags are displayed on a current interface of the terminal device, wherein the first input is a selection input of the user on N tags of the M tags, M and N are positive integers, N is less than or equal to M, and each tag of the M tags is used to indicate at least one video clip in a first video;
an acquiring unit, configured to determine, in response to the first input received by the receiving unit, the N tags from the M tags, and acquire, from the first video, the at least one video clip corresponding to each tag of the N tags to obtain X video clips, wherein X is an integer and X is greater than or equal to N; and
a display unit, configured to display, on the current interface of the terminal device, P video clips of the X video clips acquired by the acquiring unit, wherein P is a positive integer and P is less than or equal to X;
wherein the acquiring unit is specifically configured to:
for each tag of the N tags, perform the following to obtain the X video clips:
acquire one piece of attribute information according to the tag, wherein the one piece of attribute information is used to indicate the at least one video clip; and
acquire the at least one video clip according to the one piece of attribute information;
wherein one piece of attribute information comprises a path of the first video, time information of each video clip of the at least one video clip indicated by the one piece of attribute information, and bullet screen content corresponding to the time information of each video clip.
10. The terminal device according to claim 9, wherein
the acquiring unit is further configured to acquire Q pieces of attribute information of the first video according to a preset duration before the receiving unit receives the first input of the user, wherein Q is an integer and Q is greater than or equal to M;
the terminal device further comprises:
a first processing unit, configured to perform, for each piece of attribute information of the Q pieces of attribute information acquired by the acquiring unit, the method shown in S to obtain Q tags:
S: generating one tag according to the bullet screen content included in the piece of attribute information; and
a determining unit, configured to determine, from the Q tags obtained by the first processing unit, the M tags meeting a preset condition;
wherein the acquiring unit is specifically configured to divide the first video into Q video clips according to the preset duration, and acquire corresponding attribute information for each of the Q video clips to obtain the Q pieces of attribute information.
11. The terminal device according to claim 9 or 10, wherein one piece of attribute information further comprises a hotspot frequency, the hotspot frequency being the highest frequency of occurrence of a keyword in the bullet screen content included in the one piece of attribute information; and
the display unit is further configured to display the M tags on the current interface of the terminal device in a preset manner according to M hotspot frequencies.
12. The terminal device according to claim 11, wherein the display unit is specifically configured to:
determine the P video clips from the X video clips according to N hotspot frequencies; and
display the P video clips on the current interface of the terminal device.
13. The terminal device according to claim 9, wherein the acquiring unit is further configured to acquire a target display template according to the P video clips before the display unit displays the P video clips of the X video clips on the current interface of the terminal device;
the terminal device further comprises:
a second processing unit, configured to synthesize the P video clips according to the target display template acquired by the acquiring unit to obtain the P video clips after synthesis processing; and
the display unit is specifically configured to:
display, on the current interface of the terminal device, the P video clips after the synthesis processing obtained by the second processing unit.
CN201810329976.1A 2018-04-13 2018-04-13 Video display method and terminal equipment Active CN108769822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810329976.1A CN108769822B (en) 2018-04-13 2018-04-13 Video display method and terminal equipment


Publications (2)

Publication Number Publication Date
CN108769822A CN108769822A (en) 2018-11-06
CN108769822B true CN108769822B (en) 2021-01-08

Family

ID=63981807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810329976.1A Active CN108769822B (en) 2018-04-13 2018-04-13 Video display method and terminal equipment

Country Status (1)

Country Link
CN (1) CN108769822B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020133371A1 (en) * 2018-12-29 2020-07-02 深圳市大疆创新科技有限公司 Competition video subtitle processing method and broadcast guiding system
CN110572515B (en) * 2019-08-23 2021-08-06 咪咕音乐有限公司 Video color ring management method, color ring platform, terminal, system and storage medium
CN110933511B (en) * 2019-11-29 2021-12-14 维沃移动通信有限公司 Video sharing method, electronic device and medium
CN114302253B (en) * 2021-11-25 2024-03-12 北京达佳互联信息技术有限公司 Media data processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469508A (en) * 2013-09-13 2015-03-25 中国电信股份有限公司 Method, server and system for performing video positioning based on bullet screen information content
CN104994425A (en) * 2015-06-30 2015-10-21 北京奇艺世纪科技有限公司 Video labeling method and device
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
CN107454465A (en) * 2017-07-31 2017-12-08 北京小米移动软件有限公司 Video playback progress display method and device, electronic equipment
CN107820138A (en) * 2017-11-06 2018-03-20 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant