CN113536036A - Video data display method and device, electronic equipment and storage medium - Google Patents


Publication number
CN113536036A
CN113536036A (application number CN202110018414.7A)
Authority
CN
China
Prior art keywords: video, target, playing, video data, segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110018414.7A
Other languages
Chinese (zh)
Inventor
吴启亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority: CN202110018414.7A
Publication: CN113536036A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/74: Browsing; Visualisation therefor
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867: Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides a video data display method and device, an electronic device and a storage medium, relating to the fields of video, cloud technology and big data. The method comprises: acquiring at least one piece of video data, and displaying the at least one piece of video data through a video display interface. For any piece of video data, the video data includes prompt information for at least one target video frame in a target video that matches a target keyword, the target video being a video matched with the target keyword. The prompt information indicates to the user which video frames in the video data are target video frames, so that the user can quickly locate and browse the target video frames, quickly understand the video content, and quickly determine the desired video. This saves the user's time, improves the user experience, effectively encourages the user to use the video application, and increases the duration and frequency with which the user uses it.

Description

Video data display method and device, electronic equipment and storage medium
Technical Field
The application relates to the fields of videos, cloud technologies and big data, in particular to a video data display method and device, electronic equipment and a storage medium.
Background
With the continuous development of internet technology, video playing platforms are widely developed. Video is widely spread and applied as an information spreading carrier because of the characteristic of being capable of expressing information more intuitively, abundantly and clearly.
Most video playing platforms support a video retrieval function. When a user wants to retrieve videos, the user can input a keyword in the search box; the video playing platform displays the videos corresponding to the keyword, and the user can click any displayed video to view it. Generally, a user can only judge which video he or she wants to watch from the cover and title of each video; to confirm in detail, the video has to be played from the beginning. The user therefore has to spend time screening the displayed videos, which wastes the user's time, degrades the user experience, and fails to encourage the user to use the video application.
Disclosure of Invention
The application provides a video data display method and device, an electronic device and a storage medium, which can encourage a user to use a video application.
In one aspect, a method for displaying video data is provided, the method comprising:
acquiring at least one piece of video data, and displaying the at least one piece of video data through a video display interface;
wherein, for any piece of video data, the video data comprises prompt information of at least one target video frame in a target video that matches a target keyword, and the target video is a video matched with the target keyword.
In one possible implementation, the at least one target video frame is determined by:
determining a target video according to the target keyword;
and for any target video, determining each target video frame in the target video according to the matching degree of the target keyword and the content of each video frame in the target video.
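The patent leaves the computation of the matching degree unspecified. As a purely illustrative sketch (the tag representation, the overlap-based score, and the threshold are all assumptions, not from the patent), selecting target video frames by matching degree could look like:

```python
def match_degree(keyword_tags, frame_tags):
    """Assumed score: fraction of keyword tags found among the frame's tags."""
    if not keyword_tags:
        return 0.0
    return len(set(keyword_tags) & set(frame_tags)) / len(keyword_tags)

def select_target_frames(keyword_tags, frame_tag_lists, threshold=0.5):
    """Indices of frames whose content matches the keyword closely enough."""
    return [i for i, tags in enumerate(frame_tag_lists)
            if match_degree(keyword_tags, tags) >= threshold]

# Frames 1 and 2 mention "star L"; with the default threshold both qualify.
frames = [["dog"], ["star L", "stage"], ["star L", "concert"], ["crowd"]]
print(select_target_frames(["star L", "concert"], frames))  # [1, 2]
```

Any scoring function could be substituted, e.g. one produced by an image or speech recognition model rather than exact tag overlap.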
In one possible implementation, for any target video, the target segment is generated by:
and generating a target segment according to each target video frame in the target video.
In one possible implementation, generating a target segment from each target video frame in a target video includes:
for each target video frame in the target video, determining an associated video frame of the target video frame;
generating target segments according to the target video frames and the associated video frames;
wherein the associated video frames comprise any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
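The segment-generation rule above can be sketched in a few lines. Here frames are modeled as indices and only the "preset number of adjacent frames" variant is shown (the duration-window variant would clip by timestamp instead); the names and defaults are illustrative assumptions:

```python
def associated_frames(target_idx, total_frames, n_adjacent=2):
    """A preset number of frames adjacent to the target frame, clipped to the video."""
    lo = max(0, target_idx - n_adjacent)
    hi = min(total_frames - 1, target_idx + n_adjacent)
    return list(range(lo, hi + 1))

def build_target_segment(target_idxs, total_frames, n_adjacent=2):
    """Target segment = each target frame plus its associated frames, in play order."""
    keep = set()
    for idx in target_idxs:
        keep.add(idx)
        keep.update(associated_frames(idx, total_frames, n_adjacent))
    return sorted(keep)

print(build_target_segment([5], total_frames=10))  # [3, 4, 5, 6, 7]
```

Overlapping windows from nearby target frames merge naturally because the result is a set, which keeps the preview segment contiguous where matches cluster.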
In another aspect, there is provided a video data display apparatus, including:
the video data acquisition module is used for acquiring at least one piece of video data;
the video data display module is used for displaying at least one piece of video data through a video display interface;
wherein, for any piece of video data, the video data comprises prompt information of at least one target video frame in a target video that matches a target keyword, and the target video is a video matched with the target keyword.
In one possible implementation, the prompt message includes at least one of:
a target segment of a target video;
hint information of a position of at least one target video frame in the target video.
In one possible implementation, the target keyword is a search keyword and the video data is a video search result; alternatively, the target keyword is a recommendation keyword and the target video is a recommended video.
In a possible implementation manner, when the prompt information includes a target segment of a target video and the pieces of video data are displayed in sequence in the video display interface, the video data display module is further configured to perform at least one of the following:
playing a target segment of the video data located at the designated display position;
playing the target segment of the video data at the designated display position and, if no playing operation instruction of the user for any video data has been received when the target segment finishes playing, playing the target video corresponding to that target segment;
sequentially playing the target segments of the video data according to the display sequence;
and responding to the preview playing operation aiming at any video data, and playing the target segment corresponding to the preview playing operation.
In a possible implementation manner, when playing the target segment of the video data located at the designated display position, the video data display module is specifically configured to:
and circularly playing the target segment of the video data at the designated display position.
In one possible implementation, when the hint information includes a target segment of the target video, the video data also includes the target video.
In one possible implementation, the video data is formed by splicing a target segment and a target video, and the target segment is located before the target video.
In a possible implementation manner, when the prompt message includes a prompt message of a position of at least one target video frame in the target video, the video data display module is specifically configured to:
and displaying at least one target video through a video display interface and displaying prompt information of the position of the corresponding at least one target video frame in the playing progress information of each target video.
In one possible implementation, when the prompt message includes a target segment of the target video, the video data display module is further configured to:
in response to a video playback operation for any one of the video data, performing at least one of:
playing a spliced video corresponding to the video playing operation;
playing a target video corresponding to the video playing operation;
playing a target segment corresponding to the video playing operation;
playing a target segment and a target video corresponding to the video playing operation;
the spliced video is formed by splicing a target segment and a target video.
In one possible implementation, when the spliced video is played, the video data display module is further configured to:
and displaying prompt information of the position of the target segment in the spliced video.
In a possible implementation manner, when the video data display module responds to a video playing operation for any video data and plays a spliced video corresponding to the video playing operation, the video data display module is specifically configured to:
responding to the video playing operation, and if the playing of the target segment of the spliced video is finished, starting to play the spliced video from the playing starting point of the target video in the spliced video;
responding to the video playing operation, if the target segment of the spliced video is not played completely, playing the spliced video from the playing starting point of the target segment of the spliced video, or playing the spliced video from the starting point of the unplayed part of the target segment of the spliced video.
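The start-point rule above can be expressed as a small decision function. This is an interpretive sketch, not an API from the patent; `resume_partial` chooses between the two alternatives the text allows when the segment was only partially played:

```python
def playback_start(segment_duration, already_played, resume_partial=True):
    """Position (seconds) at which to start the spliced video [segment][original].

    If the preview segment has already finished playing, skip straight to the
    original target video; otherwise either resume from the unplayed part of
    the segment or restart the segment from its beginning.
    """
    if already_played >= segment_duration:
        return segment_duration          # play starting point of the target video
    if resume_partial:
        return already_played            # starting point of the unplayed part
    return 0.0                           # starting point of the target segment

print(playback_start(12.0, 12.0))  # 12.0: segment done, play the original video
print(playback_start(12.0, 5.0))   # 5.0: resume the segment where it stopped
```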
In one possible implementation, when the prompt message includes a target segment of the target video, the target segment further includes an associated video frame of the target video frame;
wherein the associated video frames comprise any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
In one possible implementation, the at least one target video frame is determined by:
determining a target video according to the target keyword;
and for any target video, determining each target video frame in the target video according to the matching degree of the target keyword and the content of each video frame in the target video.
In one possible implementation, for any target video, the target segment is generated by:
and generating a target segment according to each target video frame in the target video.
In one possible implementation, generating a target segment from each target video frame in a target video includes:
for each target video frame in the target video, determining an associated video frame of the target video frame;
generating target segments according to the target video frames and the associated video frames;
wherein the associated video frames comprise any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
In yet another aspect, an electronic device is provided, comprising a memory and a processor, wherein the memory has stored therein a computer program; the processor, when running the computer program, performs the video data display method provided in any of the alternative embodiments of the present application.
In yet another aspect, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the video data display method provided in any of the alternative embodiments of the present application.
The beneficial effects brought by the technical solutions provided in the application are as follows:
the application provides a video data display method, a device, an electronic device and a storage medium, compared with the prior art, the application can display at least one piece of video data through a video display interface, wherein any piece of video data comprises prompt information of at least one target video frame matched with a target keyword in a target video, the target video is a video matched with the target keyword, the prompt information can prompt a user which video frames in the video data are the target video frames, the user can be conveniently and quickly positioned to the target video frames in the video data to browse the target video frames and quickly know the video content, so that the user can quickly determine the desired video, the user time is saved, the user experience is improved, and the user is effectively encouraged to use the video application program, the duration and frequency of the video application program used by the user are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1A is a schematic diagram illustrating an architecture of a video data display system according to an embodiment of the present application;
fig. 1B is a schematic interaction flow diagram of a video data display system according to an embodiment of the present application;
FIG. 2A is a schematic diagram of a user interface provided in an embodiment of the present application;
FIG. 2B is a schematic view of another user interface provided by the embodiments of the present application;
fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video frame tag according to an embodiment of the present application;
fig. 5 is a schematic diagram of tag matching provided in an embodiment of the present application;
FIG. 6A is a schematic view of another user interface provided by an embodiment of the present application;
fig. 6B is a schematic view illustrating a video display provided by an embodiment of the present application;
FIG. 7 is a schematic view of another user interface provided by an embodiment of the present application;
fig. 8 is a schematic flowchart illustrating a video data display method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video data display apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiment of the application provides a video data display method that can display at least one piece of video data through a video display interface, wherein any piece of video data includes prompt information of at least one target video frame in a target video that matches a target keyword, the target video being a video matched with the target keyword. A video application can thus show the user prompt information for the video frames that match the keyword, and these target video frames are likely to correspond to content the user is interested in. The prompt information indicates which video frames in the video data are target video frames, so the user can quickly locate and browse them and quickly understand the video content. The user can therefore quickly determine the desired video, which saves the user's time, improves the user experience, effectively encourages the user to use the video application, and increases the duration and frequency with which the user uses it.
The data involved in the optional embodiments provided by the application can be handled based on cloud technology, and the data processing/data computing involved in implementing the scheme can be realized based on cloud computing.
Cloud technology refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or a local area network to realize the computation, storage, processing and sharing of data. It is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; these can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support: the background services of technical network systems, such as video websites, picture websites and other web portals, require large amounts of computing and storage resources. With the rapid development of the internet industry, each item may carry its own identification mark that must be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industrial data need strong system background support, which can only be provided through cloud computing.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to obtain computing power, storage space and information services as needed. The network that provides the resources is referred to as the "cloud". To the user, resources in the "cloud" appear infinitely expandable, available on demand at any time, and paid for according to use.
As a basic capability provider of cloud computing, a cloud computing resource pool (referred to as an IaaS (Infrastructure as a Service) platform for short) is established, and multiple types of virtual resources are deployed in the resource pool for external clients to use selectively.
According to the division of logical functions, a PaaS (Platform as a Service) layer can be deployed on the IaaS (Infrastructure as a Service) layer, and a SaaS (Software as a Service) layer can be deployed on the PaaS layer; SaaS can also be deployed directly on IaaS. PaaS is a platform on which software runs, such as a database or a web container. SaaS covers various kinds of business software, such as web portals and bulk SMS services. Generally speaking, SaaS and PaaS are upper layers relative to IaaS.
In a narrow sense, cloud computing refers to a delivery and use mode of IT infrastructure: obtaining the required resources through a network in an on-demand, easily extensible manner. In a broad sense, cloud computing refers to a delivery and use mode of services: obtaining the required services through a network in an on-demand, easily extensible manner. Such services may be IT and software, internet related, or other services. Cloud computing is a product of the development and fusion of traditional computing and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization and load balancing.
With the development of diversification of internet, real-time data stream and connecting equipment and the promotion of demands of search service, social network, mobile commerce, open collaboration and the like, cloud computing is rapidly developed. Different from the prior parallel distributed computing, the generation of cloud computing can promote the revolutionary change of the whole internet mode and the enterprise management mode in concept.
Each optional embodiment of the present application can also be implemented based on big data. Big data refers to data sets that cannot be captured, managed and processed by conventional software tools within a certain time range; it is a massive, high-growth-rate and diversified information asset that requires new processing modes to provide stronger decision-making power, insight discovery and process optimization capabilities. With the advent of the cloud era, big data has attracted more and more attention; processing large amounts of data effectively within a tolerable elapsed time requires special techniques, including massively parallel processing databases, data mining, distributed file systems, distributed databases, cloud computing platforms, the Internet and scalable storage systems.
The video data display method provided by the embodiment of the present application is applicable to any video-related application scenario, such as a video recommendation scenario or a video search scenario. For better understanding, the scheme of the present application is first described in detail with reference to a specific optional embodiment in which it is applied to a video search scenario. Specifically, the scheme provided by the embodiment of the present application may be implemented as a video application or as a functional plug-in within a video application. A user may install the application on any terminal device and view videos through the application installed on it: for example, the user may search for videos in the video application, which then presents the video search results. The video application can also display video recommendation results, which may be default recommended videos or videos recommended in a personalized way according to the user's preferences. The video application may be a separate application downloaded and installed on the terminal device, or a video application opened through a browser or the like.
In the embodiment of the present application, a preview video (which may also be referred to as a preview segment) refers to the target segment mentioned later; the target segment (preview segment/preview video) is generated based on target key frames in the target video mentioned later; the original video corresponding to the preview video refers to the target video; and a search keyword is one of the target keywords mentioned later.
As an alternative implementation manner, as shown in fig. 1A, fig. 1A is a schematic diagram of the architecture of a video data display system according to an embodiment of the present application. The user can interact with the terminal device, and the terminal device is communicatively connected with the server. The terminal device has the video application installed, and the server communicatively connected with the terminal device (i.e. the server in fig. 1A) may be a server corresponding to the video application. The video application on the terminal device may obtain and display video data from a server or a server cluster corresponding to the video application, where the server cluster includes at least two servers and any server may be a physical server or the aforementioned cloud server. For convenience of description, in the embodiments of the present application and in the various alternative embodiments referred to below, the server or server cluster corresponding to the video application is simply referred to as the server.
In the embodiment of the present application, as shown in fig. 1B, fig. 1B is a schematic view of an interaction flow of a video data display system provided in the embodiment of the present application. When the user performs information interaction with the terminal equipment, the user can input information into the terminal equipment, and the terminal equipment receives the input information of the user and sends the input information to the server. After receiving the input information, the server can determine search keywords in the input information, tag the search keywords, match the search keyword tags with video frame tags in each video, determine preview videos by using video frames corresponding to the successfully matched video frame tags, and splice the preview videos in front of original videos of the preview videos to obtain video data. The server may transmit video data to the terminal device, and the terminal device may receive and display the video data. The above-described flow will be described in detail below from the perspective of the terminal device and the server, respectively.
In this embodiment of the application, a video application is installed on the terminal device, and a user can input information into the terminal device through the video application. Specifically, as shown in fig. 2A, fig. 2A is a schematic view of a user interface provided in this embodiment of the application. The user interface can be the home page of the video application and includes various controls; the user can trigger any control to switch to the user interface corresponding to that control. The controls include controls corresponding to various video categories, and the user interface shown in fig. 2A is the one corresponding to a carefully selected control. The controls further include a drama control, a movie control, an anarchism control, a kids control and a control 22 for selecting a video category: the user interface corresponding to the drama control includes information on at least one drama video, the one corresponding to the movie control includes information on at least one movie video, the one corresponding to the anarchism control includes information on at least one anarchism video, and the one corresponding to the kids control includes information on at least one kids video. The user can click the control 22 to select one video category from the various categories corresponding to it; these categories include, but are not limited to, animation, love, sports and the like. The controls further include a search control 21, a guess-you-will-chase control and the like. In addition, the display page also shows related information on a video s and on videos t1, t2, t3 and t4 corresponding to the guess-you-will-chase control; the video s can be a video recommended by default on the home page of the video application, and the videos t1, t2, t3 and t4 can be videos recommended in a personalized way according to the user's preferences.
The user can input information in the search box (also referred to as an input information box) where the search control 21 is located, and the user can click the search box to display the user interface shown in fig. 2B.
As shown in fig. 2B, fig. 2B is another schematic view of a user interface provided in the embodiment of the present application, in the user interface, a search control is located in a search box, and a user can click the search box, invoke a virtual keyboard, and input information in the search box by using the virtual keyboard, for example, in fig. 2B, the user inputs information "star L" in the search box by using the virtual keyboard. In the process of inputting information by the user, the user interface may display information related to the input information, such as the star L movie, the star L movie universe, the star L concert universe, the star L concert, the star L character M, the star L birthday evening live broadcasting, the star L star N, and the like shown in fig. 2B. The user may delete the input information within the search box by clicking on the "x" control or the cancel control.
In the embodiment of the application, the user inputs "star L" in the search box through the virtual keyboard and clicks the search control; the terminal device acquires the user's input information and sends it to the server.
In the embodiment of the application, after receiving the input information, the server can determine the search keyword in the input information, tag the search keyword, match the search keyword tag against the video frame tags of each video, determine a preview video from the video frames whose tags matched successfully, and splice the preview video before the original video of the preview video to obtain the video data. Specifically, as shown in fig. 3, fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure. The server can identify the content of each video frame in a video and tag each video frame; the server can also obtain the user's input information and tag the search keyword in that information. The server can then match the search keyword tag against each video frame tag and determine whether any video frame matches successfully. If so, the server makes a preview video from the successfully matched video frames, splices the preview video in front of the original video corresponding to the preview video, and then ends the flow; if not, the flow ends directly. The flow of fig. 3 is described in detail below.
It should be noted that the embodiment of the present application describes a video search scenario, in which the keyword in the input information is a search keyword. The keyword may differ for different scenarios; for example, in a video recommendation scenario there may be no input information at all, and the server directly obtains the user's tags and recommends the videos corresponding to those tags.
In this embodiment of the application, the server may obtain a plurality of videos; the manner of obtaining them is not limited here. For each acquired video, the server may identify the content of each video frame in the video and tag each video frame. The content of a video frame may include at least one of text content, image content, and audio content, and the manner in which the server identifies that content is not limited; for example, the server may identify image content using an image recognition algorithm or an image recognition model, identify audio content using a speech recognition model, and identify text content using a text recognition model.
Specifically, for each video frame, the server may identify the content of the video frame, determine the keywords included in the content, mark the video frame with the tag corresponding to each such keyword, and record the timestamp of the video frame, where the timestamp is the time of the video frame within the video. The tag may also include the timestamp of the video frame.
For example, as shown in fig. 4, fig. 4 is a schematic diagram of video frame tags according to an embodiment of the present application. The preset keywords include "star L", "street view", and "neon light". The server may identify the content of each video frame in the video. Suppose the time corresponding to video frame A of the video is 00:001 and the content of video frame A includes "star L", a street view, and a neon light; the time corresponding to video frame B is 00:002 and its content includes "star L"; the time corresponding to video frame C is 00:040 and its content includes "star L"; and the time corresponding to video frame D is 00:100 and its content includes "star L". When the server identifies the content of video frame A, it can determine that the content includes "star L", a street view, and a neon light, so it applies to video frame A the tags corresponding to "star L", "street view", and "neon light", namely tag-mingxingL(00:001), tag-joining(00:001), and tag-nihongdeng(00:001), and records the timestamp 00:001 of video frame A. When the server identifies the content of video frame B, it can determine that the content includes "star L", so it applies the tag corresponding to "star L", namely tag-mingxingL(00:002), and records the timestamp 00:002 of video frame B. Likewise, for video frame C the server applies tag-mingxingL(00:040) and records the timestamp 00:040, and for video frame D it applies tag-mingxingL(00:100) and records the timestamp 00:100.
In the embodiment of the present application and the following embodiments, 00:001 refers to 0.1 second (i.e., the video frame at a playing time of 0.1 second in the video); accordingly, 00:002 refers to 0.2 second, 00:040 refers to 4 seconds, 00:100 refers to 10 seconds, and so on, while 10:000 refers to 10 minutes.
It should be noted that, for each video, the server may store the tag corresponding to each video frame in the video (i.e., the video frame tags) and the timestamp of each video frame in a table such as the one shown in fig. 4, and then establish the correspondence between the video and the table.
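As an illustrative sketch (the function and variable names here are assumptions for illustration, not the patent's actual implementation), the per-frame tagging and the tag table of fig. 4 can be modeled as follows, using the embodiment's timestamp notation in which the digits after the colon count tenths of a second:

```python
def parse_timestamp(ts: str) -> float:
    """Convert an 'MM:SSS' timestamp to seconds, per the embodiment's
    notation: '00:040' -> 4.0 seconds, '10:000' -> 600.0 seconds."""
    minutes, tenths = ts.split(":")
    return int(minutes) * 60 + int(tenths) / 10

def tag_frames(recognized_frames):
    """Build a tag table: one row per frame, holding its tags and timestamp.

    `recognized_frames` maps a timestamp string to the keywords recognized
    in that frame's content (image, audio, or text).
    """
    table = []
    for ts, keywords in recognized_frames.items():
        table.append({
            "timestamp": ts,
            "seconds": parse_timestamp(ts),
            # One tag per keyword, carrying the frame's timestamp as in fig. 4.
            "tags": [f"tag-{kw}({ts})" for kw in keywords],
        })
    return table

# The example of fig. 4: frames A-D with their recognized keywords.
video_tags = tag_frames({
    "00:001": ["mingxingL", "joining", "nihongdeng"],
    "00:002": ["mingxingL"],
    "00:040": ["mingxingL"],
    "00:100": ["mingxingL"],
})
```

The resulting table, one row per frame, is what the server would associate with the video for later matching.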
In the embodiment of the application, the server can acquire the user's input information and tag the search keyword in the input information. Specifically, the terminal device can receive the information input by the user in the search box of the video application, generate a video acquisition request containing the user's input information, and send the video acquisition request to the server.
For example, if the user inputs the information "star L" in the video application, the terminal device may generate a video acquisition request carrying the input information "star L" and send the video acquisition request to the server.
For another example, if the user inputs the information "star L is shopping" in the video application program, the terminal device may generate a video acquisition request carrying the input information "star L is shopping", and send the video acquisition request to the server.
In the embodiment of the application, after the server receives the video acquisition request sent by the terminal device, it can determine, based on the input information carried in the video acquisition request, the search keyword in the input information (i.e., the keyword input by the user) and tag the search keyword.
For example, after receiving a video acquisition request sent by a terminal device, the server may determine that the input information "star L" carried in the video acquisition request includes the search keyword "star L"; the server then applies the corresponding tag "tag-mingxingL" to the search keyword "star L".
For another example, after receiving the video acquisition request sent by the terminal device, the server may determine that the input information "star L is shopping" carried in the video acquisition request includes the search keywords "star L" and "shopping"; the server then applies the corresponding tag "tag-mingxingL" to the search keyword "star L" and the corresponding tag "tag-guangjie" to the search keyword "shopping".
Further, the server may match the search keyword tag against each video frame tag to determine whether a successfully matched video frame tag, and hence a successfully matched video frame, exists.
In the embodiment of the application, for each video, the search keyword tag can be matched against the video's frame tags to determine whether the video contains any successfully matched video frame.
As shown in fig. 5, fig. 5 is a schematic diagram of tag matching provided in an embodiment of the present application. For the video corresponding to the video frame tags shown in fig. 4, the video includes video frames A-D: the tags of video frame A (timestamp 00:001) are tag-mingxingL(00:001), tag-joining(00:001), and tag-nihongdeng(00:001); the tag of video frame B (timestamp 00:002) is tag-mingxingL(00:002); the tag of video frame C (timestamp 00:040) is tag-mingxingL(00:040); and the tag of video frame D (timestamp 00:100) is tag-mingxingL(00:100). If the search keyword tag is tag-mingxingL, then matching it against the tags in the video determines that video frames A-D all match successfully.
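The matching step above can be sketched as follows (a minimal illustration under assumed names and data layout, not the patent's code); since each frame tag carries a timestamp suffix such as "tag-mingxingL(00:040)", the comparison strips the suffix before comparing against the search keyword tag:

```python
def base_tag(frame_tag: str) -> str:
    """'tag-mingxingL(00:040)' -> 'tag-mingxingL'."""
    return frame_tag.split("(", 1)[0]

def matched_frames(video_tags, keyword_tag):
    """Return the timestamps of frames that carry a tag matching the
    search keyword tag."""
    return [row["timestamp"]
            for row in video_tags
            if any(base_tag(t) == keyword_tag for t in row["tags"])]

# The tag table of fig. 4/fig. 5.
video_tags = [
    {"timestamp": "00:001", "tags": ["tag-mingxingL(00:001)",
                                     "tag-joining(00:001)",
                                     "tag-nihongdeng(00:001)"]},
    {"timestamp": "00:002", "tags": ["tag-mingxingL(00:002)"]},
    {"timestamp": "00:040", "tags": ["tag-mingxingL(00:040)"]},
    {"timestamp": "00:100", "tags": ["tag-mingxingL(00:100)"]},
]

# The search keyword "star L" is tagged "tag-mingxingL"; all four frames match.
hits = matched_frames(video_tags, "tag-mingxingL")
```

With this table, a keyword tag that matches no frame (e.g. "tag-guangjie") simply yields an empty result, corresponding to the "no successful match" branch of fig. 3.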
In the embodiment of the application, if a successfully matched video frame tag exists, a successfully matched video frame exists: the video frame corresponding to the successfully matched video frame tag is the successfully matched video frame, and a preview video can be made from such frames. Specifically, for the successfully matched video frames in a video, a time difference is determined from the times of every two adjacent successfully matched video frames. A time difference greater than a preset time is recorded as a target time difference; the two adjacent successfully matched video frames corresponding to a target time difference are divided into two different video frame segments, while two adjacent successfully matched video frames whose time difference is less than or equal to the preset time are placed in the same video frame segment. In this way, the successfully matched video frames in the video are segmented into video frame segments, each consisting of at least one successfully matched video frame.
For each video frame segment, the time of the first video frame in the segment is taken as a starting time point; the time of the last video frame in the segment is taken as a reference time point; and the time point that lies after the reference time point by exactly the preset time is taken as an ending time point. The video frames within the range from the starting time point to the ending time point are then selected from the video to obtain the video segment corresponding to the video frame segment.
For example, the preset time may be 3 seconds. For the successfully matched video frames A-D shown in fig. 5, the time of video frame A is 00:001, the time of video frame B is 00:002, the time of video frame C is 00:040, and the time of video frame D is 00:100. The time difference between video frame A and video frame B is less than 3 seconds and is therefore not a target time difference, so video frame A and video frame B are in the same video frame segment. The time difference between video frame B and video frame C is greater than 3 seconds and is a target time difference, so video frame B and video frame C are in different video frame segments. Likewise, the time difference between video frame C and video frame D is greater than 3 seconds and is a target time difference, so video frame C and video frame D are in different video frame segments. That is, according to the two target time differences, the video frames A-D are divided into three video frame segments: the first video frame segment comprises video frame A and video frame B, the second comprises video frame C, and the third comprises video frame D.
For the first video frame segment, the time of video frame A is taken as the starting time point (00:001) and the time of video frame B as the reference time point (00:002); the time point 3 seconds after the reference time point, namely 00:032, is the ending time point, and the video frames within 00:001-00:032 are selected from the video to obtain video segment 1 corresponding to the first video frame segment. For the second video frame segment, which contains a single frame, the time of video frame C serves as both the starting time point and the reference time point (00:040); the ending time point is 00:070, and the video frames within 00:040-00:070 are selected from the video to obtain video segment 2. For the third video frame segment, the time of video frame D likewise serves as both the starting time point and the reference time point (00:100); the ending time point is 00:130, and the video frames within 00:100-00:130 are selected from the video to obtain video segment 3.
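The segmentation rule above can be sketched compactly (an illustrative sketch; names are assumptions, and matched-frame times are assumed to be sorted and already converted to seconds):

```python
PRESET = 3.0  # preset time in seconds

def split_segments(times, preset=PRESET):
    """Group sorted matched-frame times (in seconds, non-empty) into video
    frame segments; a gap greater than `preset` (a target time difference)
    starts a new segment."""
    segments = [[times[0]]]
    for prev, cur in zip(times, times[1:]):
        if cur - prev > preset:      # target time difference: new segment
            segments.append([cur])
        else:
            segments[-1].append(cur)
    return segments

def clip_intervals(segments, preset=PRESET):
    """Per segment, the video-segment range: from the first frame's time
    (starting point) to `preset` seconds after the last frame's time."""
    return [(seg[0], seg[-1] + preset) for seg in segments]

# Frames A-D of the example, in seconds: 00:001, 00:002, 00:040, 00:100.
segments = split_segments([0.1, 0.2, 4.0, 10.0])
intervals = clip_intervals(segments)
```

Run on the example, this reproduces the three segments and the ranges 00:001-00:032, 00:040-00:070, and 00:100-00:130 described above.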
In this embodiment of the present application, the ending time point of a video frame segment may also be adjusted; for instance, it may be rounded up to an integral time point greater than the original ending time point. For example, the ending time point of the first video frame segment may be adjusted from 00:032 to 00:040. Correspondingly, the video frames from the starting time point to the adjusted ending time point are selected to obtain the video segment corresponding to the video frame segment; for example, the video frames within 00:001-00:040 are selected from the video to obtain video segment 1 corresponding to the first video frame segment.
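Under the reading that "adjusted to be an integer" means rounding up to a whole second strictly greater than the original ending time point (00:032, i.e. 3.2 seconds, becomes 00:040, i.e. 4 seconds), a minimal sketch of the adjustment:

```python
import math

def adjust_end(end_seconds: float) -> float:
    """Round an ending time up to an integral number of seconds strictly
    greater than it, e.g. 3.2 s -> 4.0 s (00:032 -> 00:040)."""
    return float(math.floor(end_seconds) + 1)

# Video segment 1 of the example grows from 00:001-00:032 to 00:001-00:040.
adjusted_segment_1 = (0.1, adjust_end(3.2))
```

Because the adjustment only moves end points later, adjacent video segments may now overlap, which is the situation handled below.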
In the embodiment of the application, the video segments corresponding to the video frame segments can be spliced to obtain the preview video.
For example, video segments 1-3 can be spliced to obtain a preview video; that is, the preview video is spliced from the portions 00:001-00:040, 00:040-00:070, and 00:100-00:130 of the original video.
In a possible implementation, after the ending time points of the video frame segments are adjusted, the video segments corresponding to two video frame segments may overlap. In this case, when the two video segments are spliced, the video frames of the overlapping portion may be deleted from one of the video segments, and the video segment with the overlapping portion deleted may then be spliced with the other video segment.
For example, if the video segment corresponding to one video frame segment is 00:001-00:040 and the video segment corresponding to an adjacent video frame segment is 00:038-00:070, the two video segments overlap on 00:038-00:040. The overlapping portion can be deleted from one of the two video segments, and the video segment with the overlap deleted can be spliced with the other video segment to obtain 00:001-00:070.
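The overlap handling can be sketched as follows (an assumption about the exact rule, for illustration only): deleting the overlapping portion from one clip and splicing amounts to merging the two overlapping intervals into one.

```python
def merge_overlaps(intervals):
    """Merge adjacent (start, end) intervals (sorted by start, in seconds)
    that overlap; non-overlapping intervals are kept as separate clips."""
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        last_start, last_end = merged[-1]
        if start <= last_end:                    # overlap: splice into one clip
            merged[-1] = (last_start, max(last_end, end))
        else:
            merged.append((start, end))
    return merged

# The example above, in seconds: 00:001-00:040 and 00:038-00:070 overlap on
# 00:038-00:040 and splice into 00:001-00:070; 00:100-00:130 stays separate.
clips = merge_overlaps([(0.1, 4.0), (3.8, 7.0), (10.0, 13.0)])
```

The resulting clips are then concatenated in order to form the preview video.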
In the embodiment of the present application, the duration of the preview video is not limited.
In one possible implementation, the duration of the preview video may be fixed, for example 15 seconds; in another possible implementation, the duration of the preview video is related to the duration of the original video, for example 1% of the duration of the original video.
In the embodiment of the application, the server can either send the preview video directly to the terminal device, or splice the preview video with the original video corresponding to the preview video and send the spliced video to the terminal device. The preview video can be spliced in front of its original video, so that when the user views the video, the preview video is seen first, allowing the user to confirm whether it is the video the user wants.
In the embodiment of the application, the server can splice the preview video before the original video of the preview video to obtain video data and send the video data to the terminal device; the terminal device can receive and display the video data. The number of pieces of video data is not limited.
In the embodiment of the application, the user inputs "star L" in the search box through the virtual keyboard and clicks the search control; the terminal device obtains the user's input information and sends it to the server; the server determines video data according to the input information and sends the video data to the terminal device; and the terminal device generates the user interface shown in fig. 6A and displays the received video data in the user interface, namely the spliced videos obtained by splicing each preview video before its original video.
As shown in fig. 6A, fig. 6A is a schematic view of another user interface provided in the embodiment of the present application. In the user interface, a search control, an "x" control, a cancel control, and four videos indicated by reference numerals 61a, 61b, 61c, and 61d, respectively, are displayed. The search control is located in the search box, input information 'star L' is arranged in the search box, and the four videos are all videos related to the star L.
In the embodiment of the present application, the first video shown in fig. 6A is the video indicated by reference numeral 61a. The area indicated by reference numeral 62 represents identification information showing that the first video is in a playing state, and the area indicated by reference numeral 63 represents the playing progress information of the first video: the thick line portion represents the played portion of the video, the thin line portion represents the unplayed portion, and the circle between the thick and thin line portions represents the position of the current video frame in the video. The current video frame is the video frame corresponding to the area indicated by reference numeral 64a, which represents a character image of star L; that is, the video frame at the current playing position of the first video contains a character image of star L. The user interface further includes related information of the first video, for example, its description information "star L album", its publisher information "entertainment tabloid", its number of views "234 views", and its publication time "2019-08-01". Each of the second through fourth videos in the user interface shown in fig. 6A displays a video cover, a preview control, and video duration information.
In this embodiment, the second video shown in fig. 6A is the video indicated by reference numeral 61b. The second video is in an unplayed state, and the user can click the preview control to play the preview video of the second video. The area indicated by reference numeral 64b represents a character image of star L; that is, the cover of the second video contains a character image of star L, and the cover may be a video frame of the second video that matches "star L". The user interface further includes related information of the second video, for example, its duration "1:28:20", its description information "star L movie", its publisher information "entertainment bulletin", its number of views "204 views", and its publication time "2019-08-01".
In this embodiment, the third video shown in fig. 6A is the video indicated by reference numeral 61c. The third video is in an unplayed state, and the user can click the preview control to play the preview video of the third video. The area indicated by reference numeral 64c represents a character image of star L; that is, the cover of the third video contains a character image of star L, and the cover may be a video frame of the third video that matches "star L". The user interface further includes related information of the third video, for example, its duration "0:38:20", its description information "favorite star L", its publisher information "entertainment tabloid", its number of views "34 views", and its publication time "2019-08-01".
In this embodiment, the fourth video shown in fig. 6A is the video indicated by reference numeral 61d. The fourth video is in an unplayed state, and the user can click the preview control to play the preview video of the fourth video. The area indicated by reference numeral 64d represents a character image of star L; that is, the cover of the fourth video contains a character image of star L, and the cover may be a video frame of the fourth video that matches "star L". The user interface further includes related information of the fourth video, for example, its duration "0:28:20", its description information "star L song", its publisher information "entertainment newspaper", its number of views "1234 views", and its publication time "2019-08-01".
The user may click on the video or description information of the video in the user interface shown in fig. 6A to transfer to the corresponding user interface, for example, the user may transfer to the user interface shown in fig. 7 by clicking on the first video.
In this embodiment, when multiple videos related to the input information are to be shown, the user interface shown in fig. 6A may display all of them, where any one of the videos may be a preview video or a video obtained by splicing a preview video before the original video corresponding to the preview video. When multiple videos are displayed, the first video can be played automatically in a loop while the rest remain in an unplayed state; when the user clicks the preview control of any video, the preview video of the video corresponding to that control is played, i.e., the preview video of the video selected by the user, and the remaining videos in the user interface stay in the unplayed state.
It can be understood that, when videos related to the input information are displayed in the user interface, the preview video may be displayed, or a video obtained by splicing the preview video with the original video corresponding to the preview video may be displayed; other manners of display are also possible. For example, the preview video and its original video may not be spliced but displayed separately, or only the original video may be displayed with each preview sub-video constituting the preview video marked in the original video, the marking manner not being limited here.
For example, the video segments 1-3 may be spliced to obtain a preview video and the preview video displayed in the user interface; alternatively, the preview video may be spliced before its original video and the spliced video displayed in the user interface; or the video segments 1-3 may be marked in the original video, the video segments 1-3 being three preview sub-videos.
In a possible implementation, the original video and its playing progress bar may be displayed with each preview sub-video marked on the playing progress bar; the marking may thicken the portion of the progress bar corresponding to each preview sub-video, or mark the playing start position and playing end position of each preview sub-video on the progress bar.
As shown in fig. 6B, fig. 6B is a schematic display diagram of a video provided in the embodiment of the present application, where the video shown in fig. 6B corresponds to a fourth video in the user interface shown in fig. 6A. The video shown in fig. 6B is the original video of the preview video, which is the original video matching the input information. The area indicated by reference numeral 65 contains a video playing control, and a user can click the video playing control to play the original video; the area indicated by reference numeral 66 includes a playing progress bar corresponding to the original video, the length of the playing progress bar corresponds to the duration of the original video, and each thick line portion of the playing progress bar corresponds to each preview sub-video that can constitute the preview video. That is to say, the preview video can be obtained by splicing the video frames corresponding to the thick line portions of the play progress bar, and further, the obtained preview video can be spliced before the original video of the preview video.
The user can click the video playing control to play and watch the original video. While watching, the user can drag the playing progress bar to quickly locate and view each thick line portion of the progress bar, thereby quickly determining whether the video is one the user is interested in; the user can also simply watch the video normally.
As shown in fig. 7, fig. 7 is a schematic view of another user interface provided in the embodiment of the present application. The user interface displays the detailed information of the first video and the second video shown in fig. 6A. The first video is in a playing state with a duration of 2:20, of which the first 15 seconds are the preview video obtained by the server through the method above, spliced in front of its original video. The position of the preview video is shown on the playing progress information of the first video: the thick line portion of the playing progress information represents the preview video of the first video, 15 seconds in total; the thin line portion represents the original video corresponding to the preview video; and the circular ring at the left end of the thick line portion represents the position of the current video frame in the video. The detailed information of the first video further includes the description information "star L album", the publisher information "entertainment tabloid", the number of views "234 views", and the publication time "2019-08-01", as well as a return control shown by reference numeral 71, a comment control shown by reference numeral 72, a favorite control shown by reference numeral 73, a download control shown by reference numeral 74, a forward control shown by reference numeral 75, and the like. The second video is in an unplayed state, and its detailed information further includes related information of the user who uploaded the video, for example, the identification information "everything a day" and the avatar of that user in fig. 7 (the area indicated by reference numeral 76 represents the avatar), as well as a follow control and the like.
It is understood that the preview video and its original video can be identified on the playing progress information in different manners, including but not limited to the manner shown in figs. 6A, 6B, and 7, where the preview video is represented by a thick line on the playing progress bar and the original video by a thin line. In practical applications, the identification manner may be chosen freely; for example, different colors may be used to identify the preview video and the original video on the playing progress information.
It should be noted that, in the embodiment of the present application, when the user transfers to the corresponding user interface by clicking a video or its description information, if the preview video of that video has already finished playing, the user interface shown in fig. 7 automatically plays the portion after the preview video, that is, the original video corresponding to the preview video; if the preview video has not finished playing, it continues playing seamlessly in the user interface shown in fig. 7, and once it finishes, the portion after the preview video, that is, the original video corresponding to the preview video, is played automatically.
The video data display method has been described above from the perspective of specific embodiments; the video data display method of the embodiment of the present application, which may be executed by a terminal device, is described in detail below from the perspective of method steps. Specifically, as shown in fig. 8, the method includes step S81 and step S82.
At step S81, at least one piece of video data is acquired.

At step S82, the at least one piece of video data is displayed through a video display interface.
For any piece of video data, the video data includes prompt information of at least one target video frame, in a target video, that matches a target keyword, the target video being a video that matches the target keyword.
In this embodiment of the application, the terminal device may obtain video data from the server, and display the obtained video data through a video display interface, where the video display interface may be as shown in fig. 6A.
The server can determine each video matched with the target keyword as the target video. For each target video, at least one target video frame matched with the target keyword in the target video can be determined, and prompt information of the at least one target video frame is displayed.
Wherein the prompt message may include at least one of:
a target segment of a target video; hint information of a position of at least one target video frame in the target video.
In a possible implementation manner, after the server determines at least one target video frame in the target video that matches the target keyword, a target segment of the target video may be determined based on the at least one target video frame, and the target segment may be used as one piece of video data, or one piece of video data may be determined based on the target segment. The server can send each determined video data to the terminal device corresponding to the target keyword.
For example, the target keyword may be star L, the server may determine a target video corresponding to star L, determine at least one target video frame matching star L from the target video, determine a target segment of the target video based on the at least one target video frame, and treat the target segment as one video data.
As an optional implementation manner, if the prompt information includes the target segment of the target video, the video data further includes the target video.
In this embodiment of the application, for any piece of video data, if the video data includes a target segment of a target video, the video data may also include the target video itself, and the target video may or may not be spliced together with the target segment. For example, the target video may be spliced before or after the target segment, with the terminal device displaying the spliced video; alternatively, the target video may be left unspliced, with the terminal device displaying the target video and the target segment separately.
When the terminal device displays the target video and the target segment separately, the target video can be displayed on any side of the target segment. A target video and its corresponding target segment can be regarded as a video pair, and when the terminal device displays at least two video pairs, the video pairs can be displayed in a regular arrangement, so that a user can accurately determine the correspondence between target segments and target videos.
For example, the terminal device may uniformly display a plurality of target segments on the left side of the video display interface and, for each target segment, display the corresponding target video on its right. A correspondence then exists between the two videos on the same horizontal line of the video display interface, which makes it convenient for a user to accurately determine the correspondence between target segments and target videos; by viewing the target segment on the left, the user can quickly learn the video content of the target video on the right.
As an alternative implementation manner, the video data is formed by splicing a target segment and a target video, and the target segment is located before the target video.
That is to say, when the server determines the video data based on the target segment, the target segment can be spliced before the target video corresponding to the target segment, and when the user views the video data, the user can view the target segment preferentially, so that whether the video data is a video in which the user is interested or not can be determined quickly, the utilization rate of time resources of the user can be improved, and the user experience can be improved.
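The segment-first splicing described above can be sketched as follows, a minimal illustration in which frames are modeled as plain lists and the helper name is hypothetical:

```python
def build_spliced_video(target_segment_frames, target_video_frames):
    """Compose one piece of video data by splicing the target segment
    before the target video, so the viewer sees the matched highlight
    first (hypothetical helper; frames modeled as plain lists)."""
    return list(target_segment_frames) + list(target_video_frames)

# The target segment always occupies the head of the spliced result.
spliced = build_spliced_video(["s1", "s2"], ["v1", "v2", "v3"])
```

The design choice is simply ordering: placing the segment at offset zero is what lets the user judge relevance within the first few seconds of playback.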
As an optional implementation manner, if the prompt information includes the target segment of the target video, the target segment further includes associated video frames of the target video frames. Wherein an associated video frame may include any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
In the embodiment of the present application, the target segment may include, in addition to the target video frame, video frames adjacent to the target video frame, where adjacency includes both direct and indirect adjacency.
In an alternative implementation, the video frame adjacent to the target video frame may be a preset number of video frames adjacent to the target video frame in the video, for example, 15 video frames adjacent to the target video frame may be selected.
In another alternative implementation, the video frame adjacent to the target video frame may be at least one of a video frame in the video within a first preset time period after the target video frame or a video frame in a second preset time period before the target video frame, for example, a video frame in the video within 3 seconds after the target video frame may be selected.
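The two selection options above (a preset number of adjacent frames, or frames within a time window around the target frame) can be sketched as follows; the function name, parameters, and frame representation are hypothetical, with the text's examples (15 frames, 3 seconds) as defaults:

```python
def associated_frames(frame_times, target_idx, mode="window",
                      count=15, before_s=0.0, after_s=3.0):
    """Select associated video frames for one target frame.

    frame_times: timestamp in seconds of every frame in the target video.
    mode="count":  a preset number of frames adjacent to the target frame.
    mode="window": frames within `after_s` seconds after and/or
                   `before_s` seconds before the target frame."""
    if mode == "count":
        # Take up to `count` neighbouring frames around the target frame.
        lo = max(0, target_idx - count // 2)
        hi = min(len(frame_times), lo + count + 1)
        return [i for i in range(lo, hi) if i != target_idx]
    # Window mode: compare each frame's timestamp with the target's.
    t = frame_times[target_idx]
    return [i for i, ft in enumerate(frame_times)
            if i != target_idx and t - before_s <= ft <= t + after_s]
```

For a video sampled at 2 frames per second, `associated_frames(times, 2, after_s=1.0)` would return the frames in the second after the target frame.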
In another possible implementation manner, after the server determines at least one target video frame in the target video that matches the target keyword, prompt information of a position of the at least one target video frame in the target video may be generated, which is not limited in this embodiment.
As an optional implementation manner, the server may mark each target video frame in the progress bar of the target video, so that the terminal device may display the progress bar of the marked target video, and the marked position in the progress bar is a position of each target video frame in the target video.
In actual execution, each target video frame and its associated video frame may be marked in the progress bar of the target video, and the foregoing related description about fig. 6B may be referred to in detail.
It is understood that the target segment is composed of the target video frames, or of the target video frames together with their associated video frames; therefore, each preview sub-video composing the preview video (i.e., the target segment) referred to in fig. 6B correspondingly includes target video frames, or target video frames and their associated video frames.
As an alternative implementation manner, the target keywords include search keywords, and the video data is a video search result.
The user can input a search keyword in a search box in the video application program, and the terminal device obtains the search keyword and sends it to the server. The server can determine, based on the search keyword and in the manner of step S81 and step S82, at least one video search result corresponding to the search keyword, and send the video search results to the terminal device. The terminal device receives the at least one video search result sent by the server and displays it through a search result display interface.
As another optional implementation manner, the target keyword is a recommended keyword, and the target video is a recommended video.
In a possible implementation manner, the recommended keywords may be keywords corresponding to user preferences, also called user tags. The server may obtain the user tags, determine at least one recommended video corresponding to the user tags in the manner of step S81 and step S82, and send the recommended videos to the terminal device; the terminal device receives the recommended videos sent by the server and displays them through a recommended video display interface.
In another possible implementation manner, the recommended keyword may be a keyword corresponding to a hot spot event, also called a hotspot tag. The server may obtain the hotspot tag, determine at least one recommended video corresponding to the hotspot tag in the manner of step S81 and step S82, and send the recommended videos to the terminal device; the terminal device receives the recommended videos sent by the server and displays them through a recommended video display interface.
A hot spot event refers to an event of relatively wide public concern; for example, it may be an upcoming holiday, a recently popular celebrity, a movie, a television drama, music, a social event, and the like.
It should be noted that, in the embodiment of the present application, the at least one target video frame comprises all or part of the video frames in the target video that match the keyword.
In a possible implementation manner, if the number of video frames in the target video that match the keyword is small, all of the matched video frames can be selected as target video frames; if the number is large, only part of the matched video frames can be selected as target video frames.
In one possible implementation manner, partial video frames located at at least one of the beginning, the middle or the end of the target video can be selected from the video frames matched with the keyword as target video frames. As an optional implementation manner, partial video frames from the beginning, the middle and the end may all be selected, so that by viewing these target video frames the user can learn the content of each part of the target video and thus grasp the video content as a whole.
In the actual execution process, the video frames matched with the keyword can be used as candidate video frames, and whether all or only part of the candidate video frames become target video frames is determined by evaluating the duration formed by each candidate video frame together with its associated video frames. For the candidate video frames, reference may be made to the related description above, which is not repeated here.
For example, if the duration formed by each candidate video frame and its associated video frames exceeds a preset duration, for example 15 seconds, only part of the candidate video frames may be used as target video frames; if it does not exceed the preset duration, all of the candidate video frames may be used as target video frames.
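The duration-based decision above can be sketched as follows; this is a hypothetical illustration in which, when the total exceeds the budget, an evenly spaced subset is kept (the text does not specify how the partial subset is chosen), with the 15-second threshold from the example:

```python
def choose_target_frames(candidates, clip_len_s, max_total_s=15.0):
    """Decide whether all candidate frames, or only part of them, become
    target frames. `clip_len_s` is the duration each candidate contributes
    together with its associated frames (hypothetical sketch)."""
    total = len(candidates) * clip_len_s
    if total <= max_total_s:
        return list(candidates)                 # keep every candidate
    # Over budget: keep an evenly spaced subset (one possible strategy).
    keep = max(1, int(max_total_s // clip_len_s))
    step = len(candidates) / keep
    return [candidates[int(i * step)] for i in range(keep)]
```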
It can be understood that determining whether all or part of the candidate video frames become target video frames based on the number of candidate video frames, or based on the duration formed by the candidate video frames and their associated video frames, are only two optional implementation manners; in actual execution, the determination may be performed in other manners, which is not limited in the embodiment of the present application.
Compared with the prior art, the video data display method provided by the embodiment of the application can display at least one piece of video data through a video display interface, where any piece of video data includes prompt information of at least one target video frame in a target video that matches a target keyword, the target video being a video matched with the target keyword. The video application program can thus show the user prompt information for the video frames matched with the keyword, prompting the user which video frames in the video data are the target video frames. This makes it convenient for the user to quickly locate and browse the target video frames in the video data, quickly grasp the video content, and quickly determine the desired video. This saves the user's time, improves the user experience, and effectively encourages the user to use the video application program, increasing the duration and frequency of its use.
In another possible implementation manner of the embodiment of the application, if the prompt information includes a target segment of a target video, and the video data are sequentially displayed in the video display interface, the video data display method may further include at least one of:
playing a target segment of the video data located at the designated display position; playing a target segment of the video data located at the designated display position and, if no playing operation instruction of a user for any video data is received when the target segment finishes playing, playing the target video corresponding to the target segment; sequentially playing the target segments of the video data according to the display order; and, in response to a preview playing operation for any video data, playing the target segment corresponding to the preview playing operation.
As a possible implementation manner, the video data of the designated display position may be the first video data displayed in the video display interface. And when the video application program on the terminal equipment receives the video data and displays the video data through the video display interface, automatically playing the target segment of the first video data displayed in the video display interface.
For example, in the video display interface shown in fig. 6A, the video indicated by the reference numeral 61a is the first video data displayed in the video display interface, and the target segment of the video indicated by the reference numeral 61a can be automatically played.
The playing of the target segment of the video data located at the designated display position may specifically include: and circularly playing the target segment of the video data at the designated display position.
That is to say, when the video application program on the terminal device receives each piece of video data and displays each piece of video data through the video display interface, the video application program automatically plays the target segment of the first piece of video data displayed in the video display interface in a circulating manner.
For example, in the video display interface shown in fig. 6A, the video indicated by the reference numeral 61a is the first video data displayed in the video display interface, and the target segment of the video indicated by the reference numeral 61a can be automatically played in a loop.
As another possible implementation manner, the play operation instruction of the user for any video data refers to an operation related to play of any video data by the user, for example, a preview play operation of the user for any video data, a video play operation of the user for any video data, and the like, which all belong to the play operation instruction of the user for any video data.
In the embodiment of the application, when the target segment of the video data located at the designated display position finishes playing, if no playing operation instruction of a user for any video data has been received, the target video corresponding to the target segment is played. After the target video finishes playing, if still no playing operation instruction has been received, the target segment and the target video of the video data at the designated display position can be played in a loop.
As another possible implementation manner, when the video application on the terminal device receives each piece of video data and displays each piece of video data through the video display interface, the video application sequentially plays the target segments of each piece of video data displayed in the video display interface according to the display order.
For example, in the video display interface shown in fig. 6A, a video indicated by reference numeral 61a is first video data displayed in the video display interface, a video indicated by reference numeral 61b is second video data displayed in the video display interface, a video indicated by reference numeral 61c is third video data displayed in the video display interface, and a video indicated by reference numeral 61d is fourth video data displayed in the video display interface. The target segment of the video indicated by the reference numeral 61a may be automatically played, the target segment of the video indicated by the reference numeral 61b may be automatically played after the target segment of the video indicated by the reference numeral 61a is completely played, the target segment of the video indicated by the reference numeral 61c may be automatically played after the target segment of the video indicated by the reference numeral 61b is completely played, and the target segment of the video indicated by the reference numeral 61d may be automatically played after the target segment of the video indicated by the reference numeral 61c is completely played. I.e. the target segments of the video indicated by reference numerals 61a-d are automatically played in sequence in the display order.
If the video data includes the target video, after the target segments of all the video data displayed in the video display interface have been played in display order, the target videos of the video data may then be sequentially played in display order. Alternatively, the video data displayed in the video display interface may be sequentially played in display order as whole items, that is, after the target segment and the target video of one piece of video data are played, the target segment and the target video of the next piece are played.
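The auto-play orderings described above can be sketched as a queue builder; the function, strategy names, and pair representation are hypothetical illustrations of the three behaviors:

```python
def playback_queue(video_pairs, strategy="segments_then_videos"):
    """Build the auto-play queue. `video_pairs` is a display-ordered list
    of (target_segment, target_video) pairs (hypothetical representation).

    "segments_only":        each target segment in display order.
    "segments_then_videos": all segments first, then all target videos.
    "pair_by_pair":         one item's segment then its video, then the next."""
    segments = [seg for seg, _ in video_pairs]
    videos = [vid for _, vid in video_pairs]
    if strategy == "segments_only":
        return segments
    if strategy == "segments_then_videos":
        return segments + videos
    # pair_by_pair: interleave segment and video per display item
    return [clip for pair in video_pairs for clip in pair]

pairs = [("seg1", "vid1"), ("seg2", "vid2")]
queue = playback_queue(pairs)
```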
In another possible implementation manner of the embodiment of the application, if the prompt information includes a target segment of the target video, a preview play operation for any video data is responded, and the target segment corresponding to the preview play operation is played.
In the embodiment of the present application, a manner in which the user triggers the preview playing operation is not limited, for example, the user may trigger the preview playing operation by voice, clicking a preview playing control, and the like.
The terminal device can receive a preview playing operation of a user for any video data displayed on the video display interface and play the target segment corresponding to the preview playing operation. While the target segment corresponding to the preview playing operation is being played, the preview segments of the other video data on the video display interface remain in an unplayed state.
In practical application, after the target segment corresponding to the preview playing operation finishes playing, playing of the target segments of the video data on the video display interface may stop; alternatively, the target segment corresponding to the preview playing operation may be played in a loop.
In another possible implementation manner of the embodiment of the application, if the prompt information includes a target segment of the target video, the video data display method may further include: in response to a video playback operation for any one of the video data, performing at least one of:
playing a spliced video corresponding to the video playing operation; playing a target video corresponding to the video playing operation; playing a target segment corresponding to the video playing operation; and playing the target segment and the target video corresponding to the video playing operation.
The spliced video is formed by splicing a target segment and a target video.
In the embodiment of the application, the manner in which the user triggers the video playing operation is not limited, for example, the user may trigger the video playing operation by voice, clicking a video playing control, clicking a video or a video title, and the like.
As an optional implementation manner, the terminal device may receive a video playing operation of a user for any video data displayed on the video display interface, directly play a target video corresponding to the video playing operation, or switch to a corresponding display interface, and play the target video corresponding to the video playing operation in the display interface. The corresponding display interface includes, but is not limited to, a target video corresponding to the video playing operation.
As another optional implementation manner, the terminal device may receive a video playing operation of a user for any video data displayed on the video display interface, directly play a spliced video corresponding to the video playing operation, or switch to a corresponding display interface, and play the spliced video corresponding to the video playing operation in the display interface. The corresponding display interface includes, but is not limited to, a spliced video corresponding to the video playing operation.
The spliced video is spliced by a target segment and a target video, and the target segment can be positioned before or after the target video. As an optional implementation mode, the target segment is located in front of the target video, and when the user views the spliced video, the user can view the target segment preferentially, so that whether the spliced video is the video which the user is interested in or not can be determined rapidly, the utilization rate of time resources of the user is improved, and the user experience is improved.
As another optional implementation manner, the terminal device may receive a video playing operation of a user for any video data displayed on the video display interface, directly play a target segment corresponding to the video playing operation, or switch to a corresponding display interface, and play a target segment corresponding to the video playing operation in the display interface.
As another optional implementation manner, the terminal device may receive a video playing operation of a user for any video data displayed on the video display interface, directly play a target segment and a target video corresponding to the video playing operation, or switch to a corresponding display interface, and play the target segment and the target video corresponding to the video playing operation in the display interface.
The target segment and the target video are not spliced together and can be two independent videos, and the terminal equipment can play the target segment firstly and then play the target video, or can play the target video firstly and then play the target segment.
In a possible implementation manner, if the spliced video is played, the video data display method may further include: and displaying prompt information of the position of the target segment in the spliced video.
In the embodiment of the application, the terminal device can respond to the video playing operation of the user for any video data, directly play the spliced video corresponding to the video playing operation, or switch to the corresponding display interface, and play the spliced video corresponding to the video playing operation in the display interface. When the terminal device plays the spliced video, the prompt information of the position of the target segment in the spliced video can be displayed, and the prompt information of the position of the target segment is not limited.
In a possible implementation manner, the prompt information of the position of the target segment may identify the position of the target segment in the playing progress information of the spliced video. For example, the first video in fig. 7 may be a spliced video whose target segment is identified by a thick line on its playing progress information; that is, the first 15 seconds of the spliced video shown in fig. 7 form the target segment, and its position in the spliced video is represented by the thick line on the playing progress bar.
It can be understood that using a thick line to identify the position of the target segment in the playing progress information of the spliced video is only one possible implementation manner; other identification manners may also be used. For example, different colors may identify the position of the target segment, or the start position and the end position of the target segment may be identified in the playing progress information, the position of the target segment in the spliced video being determined by these two positions.
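Marking by start and end positions reduces to computing two fractions of the progress-bar width; the following is a minimal sketch with a hypothetical helper name, using the text's example of a 15-second target segment at the head of a spliced video:

```python
def segment_marker(segment_start_s, segment_end_s, total_s):
    """Return where to draw the target-segment mark on the spliced video's
    progress bar, as (start, end) fractions of the bar width
    (hypothetical sketch)."""
    if not 0 <= segment_start_s < segment_end_s <= total_s:
        raise ValueError("segment must lie inside the spliced video")
    return segment_start_s / total_s, segment_end_s / total_s

# A 15 s target segment at the head of a 60 s spliced video
# occupies the first quarter of the progress bar.
start_frac, end_frac = segment_marker(0.0, 15.0, 60.0)
```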
As a possible implementation manner, when the video data is formed by splicing a target segment and a target video, the terminal device can directly play the spliced video or the target video corresponding to a video playing operation, without requesting it from the server. Under other conditions, the terminal device needs to request the video corresponding to the video playing operation from the server; for example, when the video data is only the target segment, the terminal device needs to request the spliced video or the target video corresponding to the video playing operation from the server.
In response to a video playing operation for any video data, playing a spliced video corresponding to the video playing operation may specifically include:
responding to the video playing operation, and if the playing of the target segment of the spliced video is finished, starting to play the spliced video from the playing starting point of the target video in the spliced video; responding to the video playing operation, if the target segment of the spliced video is not played completely, playing the spliced video from the playing starting point of the target segment of the spliced video, or playing the spliced video from the starting point of the unplayed part of the target segment of the spliced video.
In the embodiment of the application, the terminal device can play the target segment of the spliced video. If the target segment has finished playing when a video playing operation of a user for any video data displayed on the video display interface is received, the terminal device can, in response to the video playing operation, start playing the spliced video from the playing start point of the target video within it; that is, if the target segment has finished, the target video is played directly. If the target segment has not finished playing, the terminal device can, in response to the video playing operation, either play the spliced video from the playing start point of the target segment, that is, play the target segment and the target video from the beginning as the complete spliced video, or play the spliced video from the start point of the unplayed part of the target segment, that is, continue the target segment seamlessly and then play the target video.
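The start-offset decision above can be sketched as a small function; the name, the `policy` parameter, and the seconds-based representation are hypothetical, with the target segment occupying the interval [0, segment_len_s) of the spliced video:

```python
def spliced_play_start(segment_len_s, watched_s, policy="seamless"):
    """Pick the start offset (seconds) for playing a spliced video.
    `watched_s` is how much of the target segment was already previewed
    (hypothetical sketch).

    - segment fully previewed: start at the target video;
    - otherwise "restart" replays the whole spliced video from 0,
      while "seamless" continues from the unplayed part of the segment."""
    if watched_s >= segment_len_s:
        return segment_len_s        # skip straight to the target video
    if policy == "restart":
        return 0.0                  # replay segment and video from the top
    return watched_s                # seamless continuation of the segment
```

In a web player, the returned offset would typically be applied to the media element's current playback position before calling play.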
In this embodiment of the application, if the prompt information includes prompt information of a position of at least one target video frame in the target video, in step S82, displaying at least one video data through the video display interface may specifically include:
and displaying at least one target video through the video display interface and displaying prompt information of the position of the corresponding at least one target video frame in the playing progress information of each target video.
In the embodiment of the application, at least one piece of video data can be displayed in the video display interface, and for each piece of video data, the video data comprises prompt information of the position of at least one target video frame in the target video. When the at least one piece of video data is displayed on the video display interface, for each piece of video data, prompt information of the position of at least one target video frame in the video data can be displayed in the playing progress information of the video data.
When the prompt information of the position of at least one target video frame in the video data is displayed in the playing progress information of the video data, the position may be indicated in any manner, such as with different colors or with thicker or thinner portions of the progress bar, which is not limited in the embodiment of the application.
As shown in fig. 6B, when the video display interface displays each piece of video data, the target video and the prompt information for displaying the position of the at least one target video frame in the playing progress information may be displayed, that is, the position of the at least one target video frame in the target video is indicated by a thick line portion in the playing progress bar.
In the embodiment of the present application, the at least one target video frame may be determined by:
determining a target video according to the target keyword; and for any target video, determining each target video frame in the target video according to the matching degree of the target keyword and the content of each video frame in the target video.
In the embodiment of the application, the server may obtain the target keyword and determine the target video according to the target keyword, for example, the server may determine the video matched with "star L" according to the target keyword "star L" as the target video.
For the target video, the server may identify the content of each video frame in the target video and tag each video frame with a corresponding tag according to its content, where the content of a video frame may include at least one of text content, audio content, and image content, as described above with reference to fig. 4. The tagging of the video frames may be performed before the target video is determined according to the target keyword; that is, the video frames may be tagged in advance, the target video determined according to the target keyword, and the tags of its video frames then retrieved. Alternatively, the tagging may be performed after the target videos are determined according to the target keyword; that is, the target videos are determined first, and for each target video, its video frames are then tagged.
In the embodiment of the application, the server may assign a corresponding tag to the target keyword; for example, if the target keyword is star L, the corresponding tag "mingxingL" is assigned to it.
For any target video, the server may determine each target video frame in the target video according to the matching degree between the target keyword and the content of each video frame in the target video. Specifically, the tag of the target keyword may be matched against the tag of each video frame in the target video, and if the tag of the target keyword is successfully matched with the tag of any video frame, that video frame may be determined as a target video frame.
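The tag-matching step above can be sketched as follows, assuming (hypothetically) that each video frame has already been labeled with a set of tags during content recognition. The function name `find_target_frames` and the sample data are illustrative only, not from the patent.

```python
def find_target_frames(keyword_tag, frame_tags):
    """Return indices of frames whose tag set contains the keyword's tag.
    `frame_tags` maps frame index -> set of tags produced by prior
    content recognition (text, audio, image)."""
    return [i for i, tags in sorted(frame_tags.items()) if keyword_tag in tags]

# Hypothetical pre-computed labels; "mingxingL" is the tag for "star L".
frame_tags = {
    0: {"mingxingL", "stage"},
    1: {"stage"},
    2: {"mingxingL"},
}
print(find_target_frames("mingxingL", frame_tags))  # frames matching "star L"
```

A real system would of course compute a graded matching degree rather than exact set membership; exact matching is the simplest case.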
Further, in this embodiment of the present application, for any target video, the target segment is generated in the following manner:
and generating a target segment according to each target video frame in the target video.
The server may generate the target segment according to each target video frame in the target video; specifically, generating the target segment may include:
for each target video frame in the target video, determining an associated video frame of the target video frame; and generating a target segment according to each target video frame and each associated video frame.
Wherein associating the video frame may include any one of:
a preset number of video frames adjacent to the target video frame in the target video; at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
In the embodiment of the application, the server may generate the target segment according to each target video frame and its associated video frames, and may order the frames in the target segment by the positions of the video frames (including the target video frames and the associated video frames) in the target video; the associated video frames are as described in the foregoing and are not described herein again.
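A minimal sketch of selecting the frames of a target segment under the "preset number of adjacent frames" variant described above: each target frame is kept together with `n_adjacent` neighbours on each side, deduplicated, and ordered by position in the target video. `segment_frame_indices` and `n_adjacent` are hypothetical names; real segment generation would also carry the image (and optionally audio) data of the selected frames.

```python
def segment_frame_indices(target_indices, total_frames, n_adjacent=2):
    """Collect each target frame plus `n_adjacent` neighbouring frames on
    each side, deduplicated and ordered by position in the target video."""
    picked = set()
    for idx in target_indices:
        lo = max(0, idx - n_adjacent)
        hi = min(total_frames - 1, idx + n_adjacent)
        picked.update(range(lo, hi + 1))   # target frame and its neighbours
    return sorted(picked)                  # ordered by position in the video

print(segment_frame_indices([3, 10], total_frames=12, n_adjacent=1))
```

The duration-based variant (frames within a preset time window before or after the target frame) is analogous, with timestamps in place of indices.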
It should be noted that the target segment in this embodiment and the foregoing embodiments may be a video without audio information or a video with audio information. That is, when the server generates the target segment from each target video frame, the target segment may be generated based only on the image information of each target video frame, or based on both the image information and the audio information of each target video frame. Accordingly, when the server generates the target segment from each target video frame together with its associated video frames, the target segment may be generated based only on the image information of the target video frames and the associated video frames, or based on both their image information and their audio information.
In addition, the format of the target segment in this embodiment and the foregoing embodiments may be a video format or an image format; for example, the target segment may be in the Graphics Interchange Format (GIF). A video application on the terminal device may have a variable-speed playing function and can play the target segment at a variable speed; for example, the target segment may be played at 1.5-speed, 2-speed, or 4-speed.
The above explains the video data display method from the perspective of the method steps; the following introduces the video data display apparatus from the perspective of virtual modules, specifically as follows: the video data display apparatus 90 may include a video data acquisition module 901 and a video data display module 902, wherein,
a video data obtaining module 901, configured to obtain at least one piece of video data;
a video data display module 902, configured to display at least one piece of video data through a video display interface;
the video data comprises prompt information of at least one target video frame matched with the target keywords in the target video corresponding to any video data, and the target video is the video matched with the target keywords.
In one possible implementation, the target keywords include search keywords, and the video data is a video search result; alternatively,
the target keywords are recommendation keywords, and the target video is a recommendation video.
In one possible implementation, the prompt message includes at least one of:
a target segment of a target video;
hint information of a position of at least one target video frame in the target video.
In a possible implementation manner, when the prompt information includes a target segment of a target video and the pieces of video data are sequentially displayed in the video display interface, the video data display module 902 is further configured to perform at least one of:
playing a target segment of the video data located at the designated display position;
playing a target segment of the video data at the designated display position, and if no playing operation instruction of the user for any video data has been received by the time the target segment finishes playing, playing the target video corresponding to the target segment;
sequentially playing the target segments of the video data according to the display sequence;
and responding to a preview playing operation for any video data, playing the target segment corresponding to the preview playing operation.
In a possible implementation manner, when playing the target segment of the video data located at the designated display position, the video data display module 902 is specifically configured to:
and circularly playing the target segment of the video data at the designated display position.
In one possible implementation, when the hint information includes a target segment of the target video, the video data also includes the target video.
In one possible implementation, the video data is formed by splicing a target segment and a target video, and the target segment is located before the target video.
In a possible implementation manner, when the prompt message includes a prompt message of a position of at least one target video frame in the target video, the video data display module 902 is specifically configured to:
and displaying at least one target video through a video display interface and displaying prompt information of the position of the corresponding at least one target video frame in the playing progress information of each target video.
In one possible implementation, when the prompt message includes a target segment of the target video, the video data display module 902 is further configured to:
in response to a video playback operation for any one of the video data, performing at least one of:
playing a spliced video corresponding to the video playing operation;
playing a target video corresponding to the video playing operation;
playing a target segment corresponding to the video playing operation;
playing a target segment and a target video corresponding to the video playing operation;
the spliced video is formed by splicing a target segment and a target video.
In one possible implementation, when playing the spliced video, the video data display module 902 is further configured to:
and displaying prompt information of the position of the target segment in the spliced video.
In a possible implementation manner, when responding to a video playing operation for any video data and playing a spliced video corresponding to the video playing operation, the video data display module 902 is specifically configured to:
responding to the video playing operation, and if the playing of the target segment of the spliced video is finished, starting to play the spliced video from the playing starting point of the target video in the spliced video;
responding to the video playing operation, if the target segment of the spliced video is not played completely, playing the spliced video from the playing starting point of the target segment of the spliced video, or playing the spliced video from the starting point of the unplayed part of the target segment of the spliced video.
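The start-point selection above can be sketched as follows, assuming the spliced video places the target segment at the beginning (offset 0) followed by the target video. The function name `playback_start` and its parameters are illustrative, not from the patent.

```python
def playback_start(segment_len, played_pos, resume_unplayed=True):
    """Start point (seconds) within a spliced video whose target segment
    occupies [0, segment_len) and whose target video starts at segment_len.
    `played_pos` is how far the target segment had already been played
    when the video playing operation arrived."""
    if played_pos >= segment_len:   # target segment already finished playing
        return segment_len          # start from the target video's start point
    if resume_unplayed:
        return played_pos           # start from the unplayed part of the segment
    return 0.0                      # start from the segment's own start point

print(playback_start(5.0, 6.0))   # segment done: jump to the target video
print(playback_start(5.0, 2.0))   # segment unfinished: resume mid-segment
```

The two branches for an unfinished segment correspond to the two alternatives in the paragraph above: replaying the segment from its start point versus continuing from its unplayed part.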
In one possible implementation, when the prompt message includes a target segment of the target video, the target segment further includes an associated video frame of the target video frame;
wherein associating the video frame comprises any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
In one possible implementation, the at least one target video frame is determined by:
determining a target video according to the target keyword;
and for any target video, determining each target video frame in the target video according to the matching degree of the target keyword and the content of each video frame in the target video.
In one possible implementation, for any target video, the target segment is generated by:
and generating a target segment according to each target video frame in the target video.
In one possible implementation, generating a target segment from each target video frame in a target video includes:
for each target video frame in the target video, determining an associated video frame of the target video frame;
generating target segments according to the target video frames and the associated video frames;
wherein associating the video frame comprises any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video that is within a first preset duration after the target video frame or a video frame within a second preset duration before the target video frame.
The video data display apparatus of this embodiment can execute the video data display method provided in the embodiments of the present application; the implementation principles are similar and are not described herein again.
The video data display means may be a computer program (comprising program code) running on a computer device, for example the video data display means being an application software; the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application.
In some embodiments, the video data display apparatus provided in the embodiments of the present application may be implemented by a combination of hardware and software. By way of example, the apparatus may be a processor in the form of a hardware decoding processor, which is programmed to execute the video data display method provided in the embodiments of the present application; for example, such a processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In other embodiments, the video data display apparatus provided in the embodiments of the present application may be implemented in a software manner, and the video data display apparatus stored in the memory may be software in the form of programs, plug-ins, and the like, and includes a series of modules, including a video data acquisition module 901 and a video data display module 902; the video data acquisition module 901 and the video data display module 902 are used to implement the video data display method provided by the embodiment of the present application.
Compared with the prior art, the video data display apparatus provided in the embodiment of the present application can display at least one piece of video data through a video display interface, where any piece of video data includes prompt information of at least one target video frame matched with the target keyword in the target video, and the target video is a video matched with the target keyword. The video application program can thereby display to the user the prompt information of the video frames matched with the keyword, prompting the user which video frames in the video data are target video frames. This makes it convenient for the user to quickly locate and browse the target video frames in the video data, quickly understand the video content, and quickly determine the desired video, which saves the user's time, improves the user experience, effectively encourages the user to use the video application program, and increases the duration and frequency with which the user uses the video application program.
The video data display device of the present application is described above from the perspective of a virtual module, and the electronic device of the present application is described below from the perspective of a physical device.
An embodiment of the present application provides an electronic device, as shown in fig. 10, an electronic device 4000 shown in fig. 10 includes: a processor 4001 and a memory 4003. Processor 4001 is coupled to memory 4003, such as via bus 4002. Optionally, the electronic device 4000 may further comprise a transceiver 4004. In addition, the transceiver 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
Processor 4001 may be a CPU, general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs computational functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 4002 may include a path that carries information between the aforementioned components. Bus 4002 may be a PCI bus, EISA bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
Memory 4003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, an optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 is used for storing computer programs for executing the present scheme, and is controlled by the processor 4001 for execution. Processor 4001 is configured to execute a computer program stored in memory 4003 to implement what is shown in any of the foregoing method embodiments.
The embodiment of the application provides electronic equipment, which comprises a memory and a processor, wherein the memory is stored with a computer program; the processor, when running the computer program, performs the video data display method provided in any of the alternative embodiments of the present application.
The electronic device of the present application is described above from the perspective of a physical device, and the computer-readable storage medium of the present application is described below from the perspective of a storage medium.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video data display method provided in any optional embodiment of the present application.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations to which the above-described method embodiments relate.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present application, and it should be noted that, for those skilled in the art, several modifications and refinements can be made without departing from the principle of the present application, and these modifications and refinements should also be regarded as falling within the protection scope of the present application.

Claims (15)

1. A method for displaying video data, comprising:
acquiring at least one piece of video data, and displaying the at least one piece of video data through a video display interface;
the video data comprises prompt information of at least one target video frame matched with a target keyword in a target video corresponding to any one of the video data, and the target video is a video matched with the target keyword.
2. The method of claim 1, wherein the prompting message comprises at least one of:
a target segment of the target video;
prompt information of a location of the at least one target video frame in the target video.
3. The method of claim 1, wherein the target keywords comprise search keywords, and the video data is video search results; alternatively,
the target keywords are recommendation keywords, and the target video is a recommendation video.
4. The method according to claim 2, wherein if the prompt message includes a target segment of the target video, each of the video data is displayed in the video display interface in sequence, the method further comprising any one of:
playing a target segment of the video data located at a designated display position;
playing a target segment of the video data at a designated display position, and if a playing operation instruction of a user for any video data is not received when the playing of the target segment is completed, playing a target video corresponding to the target segment;
sequentially playing the target segments of the video data according to the display sequence;
and responding to the preview playing operation aiming at any video data, and playing the target segment corresponding to the preview playing operation.
5. The method of claim 4, wherein the playing the target segment of the video data at the designated display location comprises:
and circularly playing the target segment of the video data at the designated display position.
6. The method of any one of claims 2, 4 or 5, wherein if the hint information includes a target segment of the target video, the video data further includes the target video.
7. The method of claim 6, wherein the video data is spliced from the target segment and the target video, and wherein the target segment precedes the target video.
8. The method of claim 2, wherein displaying the at least one video data via a video display interface if the hint information includes a hint information of a location of the at least one target video frame within the target video comprises:
and displaying at least one target video through a video display interface and displaying prompt information of the position of the corresponding at least one target video frame in the playing progress information of each target video.
9. The method according to any one of claims 2, 4 or 5, wherein if the prompt message includes a target segment of the target video, further comprising:
in response to a video playback operation for any of the video data, performing at least one of:
playing the spliced video corresponding to the video playing operation;
playing a target video corresponding to the video playing operation;
playing a target segment corresponding to the video playing operation;
playing a target segment and a target video corresponding to the video playing operation;
and the spliced video is spliced by the target segment and the target video.
10. The method of claim 9, wherein if the stitched video is played, the method further comprises:
and displaying prompt information of the position of the target segment in the spliced video.
11. The method of claim 9, wherein in response to a video playing operation for any of the video data, playing a spliced video corresponding to the video playing operation comprises:
responding to the video playing operation, and if the playing of the target segment of the spliced video is finished, starting to play the spliced video from the playing starting point of the target video in the spliced video;
responding to the video playing operation, if the target segment of the spliced video is not played completely, playing the spliced video from the playing starting point of the target segment of the spliced video, or playing the spliced video from the starting point of the unplayed part of the target segment of the spliced video.
12. The method according to any one of claims 2, 4 or 5, wherein if the prompt message includes a target segment of the target video, the target segment further includes an associated video frame of the target video frame;
wherein the associated video frame comprises any one of:
a preset number of video frames adjacent to the target video frame in the target video;
at least one of a video frame in the target video within a first preset time period after the target video frame or a video frame in a second preset time period before the target video frame.
13. A video data display apparatus, comprising:
the video data acquisition module is used for acquiring at least one piece of video data;
the video data display module is used for displaying the at least one piece of video data through a video display interface;
the video data comprises prompt information of at least one target video frame matched with a target keyword in a target video corresponding to any one of the video data, and the target video is a video matched with the target keyword.
14. An electronic device, comprising a memory and a processor, wherein the memory has stored therein a computer program; the processor, when executing the computer program, performs the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 12.
CN202110018414.7A 2021-01-07 2021-01-07 Video data display method and device, electronic equipment and storage medium Pending CN113536036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110018414.7A CN113536036A (en) 2021-01-07 2021-01-07 Video data display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110018414.7A CN113536036A (en) 2021-01-07 2021-01-07 Video data display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113536036A true CN113536036A (en) 2021-10-22

Family

ID=78124245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110018414.7A Pending CN113536036A (en) 2021-01-07 2021-01-07 Video data display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113536036A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142913A1 (en) * 2022-01-29 2023-08-03 北京有竹居网络技术有限公司 Video processing method and apparatus, readable medium and electronic device


Similar Documents

Publication Publication Date Title
RU2614137C2 (en) Method and apparatus for obtaining information
CN110719524B (en) Video playing method and device, intelligent playing equipment and storage medium
KR101635876B1 (en) Singular, collective and automated creation of a media guide for online content
US8788584B2 (en) Methods and systems for sharing photos in an online photosession
CN104065979A (en) Method for dynamically displaying information related with video content and system thereof
US10372769B2 (en) Displaying results, in an analytics visualization dashboard, of federated searches across repositories using as inputs attributes of the analytics visualization dashboard
US10013704B2 (en) Integrating sponsored media with user-generated content
WO2011038296A1 (en) Method for presenting user-defined menu of digital content choices, organized as ring of icons surrounding preview pane
CN111447489A (en) Video processing method and device, readable medium and electronic equipment
CN110139162A (en) The sharing method and device of media content, storage medium, electronic device
EP3322192A1 (en) Method for intuitive video content reproduction through data structuring and user interface device therefor
CN110933460B (en) Video splicing method and device and computer storage medium
CN106844705B (en) Method and apparatus for displaying multimedia content
CN104090899B (en) A kind of method and apparatus of feedback display content information
US20170220869A1 (en) Automatic supercut creation and arrangement
CN108737903B (en) Multimedia processing system and multimedia processing method
US20180130499A9 (en) Method for intuitively reproducing video contents through data structuring and the apparatus thereof
JP2024056704A (en) Dynamic integration of customized supplemental media content
CN114679621A (en) Video display method and device and terminal equipment
CN105516348A (en) Method and system for sharing information
CN113536036A (en) Video data display method and device, electronic equipment and storage medium
US20190385192A1 (en) Digital media generation
CN115379136A (en) Special effect prop processing method and device, electronic equipment and storage medium
CN115269886A (en) Media content processing method, device, equipment and storage medium
CN104866563A (en) Album searching method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053996

Country of ref document: HK

SE01 Entry into force of request for substantive examination