CN112667936A - Video processing method, device, terminal, server and storage medium - Google Patents

Video processing method, device, terminal, server and storage medium

Info

Publication number
CN112667936A
CN112667936A (application number CN202011569042.9A)
Authority
CN
China
Prior art keywords
video
continuous
page
group
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011569042.9A
Other languages
Chinese (zh)
Inventor
周甜甜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011569042.9A priority Critical patent/CN112667936A/en
Publication of CN112667936A publication Critical patent/CN112667936A/en
Priority to PCT/CN2021/106862 priority patent/WO2022134555A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/70 - Information retrieval of video data; Database structures therefor; File system structures therefor
                        • G06F 16/73 - Querying
                            • G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
                        • G06F 16/74 - Browsing; Visualisation therefor
                    • G06F 16/90 - Details of database functions independent of the retrieved data types
                        • G06F 16/95 - Retrieval from the web
                            • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
                • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
                            • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                            • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                                • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
                • G06F 9/00 - Arrangements for program control, e.g. control units
                    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
                        • G06F 9/44 - Arrangements for executing specific programs
                            • G06F 9/451 - Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a video processing method, apparatus, terminal, server and storage medium. The method comprises: displaying a first video on a display interface; receiving a first operation of entering a predetermined page; and, in response to the first operation, highlighting on the predetermined page a second video strongly correlated with the first video. The method and device address the problems in the related art of low search efficiency and poor user experience when a user looks for a video to browse, and effectively improve the experience of watching videos.

Description

Video processing method, device, terminal, server and storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular, to a video processing method, apparatus, terminal, server, and storage medium.
Background
At present, there is more and more video content on browsing pages. When a user needs to browse a certain video, the user has to search for it manually, one video at a time, or search by certain keywords; the search efficiency is low and the user experience is poor.
Therefore, in the related art, there are problems of low search efficiency and poor user experience when searching for a video to be browsed.
Disclosure of Invention
The present disclosure provides a video processing method, device, terminal, server and storage medium, so as to at least solve the problems of low searching efficiency and poor user experience when searching for a video to be browsed in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including: displaying a first video on a display interface; receiving a first operation of entering a predetermined page; and highlighting, in response to the first operation, a second video which is strongly related to the first video on the predetermined page.
Optionally, the second video strongly correlated with the first video comprises: one or more videos belonging to the same continuous video group as the first video.
Optionally, the step of highlighting the second video strongly related to the first video on the predetermined page comprises: displaying the second video strongly related to the first video in the form of a dynamic resource on the predetermined page.
Optionally, the step of highlighting the second video strongly related to the first video on the predetermined page comprises: displaying the second video strongly related to the first video in a set-top manner on the predetermined page.
Optionally, the step of highlighting the second video strongly related to the first video on the predetermined page comprises: displaying a continuous video group and first identification information of the continuous video group, wherein the continuous video group comprises the first video and the second video, and the first identification information is a video frame in the continuous video group.
Optionally, the step of displaying the continuous video group comprises: displaying the videos included in the continuous video group and second identification information for identifying the videos, wherein positioning information is displayed in the first video displayed in the continuous video group, and the positioning information is used for identifying the position of the first video in the continuous video group.
Optionally, after the highlighting of the second video strongly related to the first video on the predetermined page, the method further includes: receiving a playing operation on the second video; and playing the second video in response to the playing operation.
Optionally, the method further comprises: displaying a third video on the display interface; receiving a second operation of entering a predetermined page; and, in response to the second operation, displaying a fourth video included in the predetermined page in a default display manner.
Optionally, the default display manner includes: displaying the video according to its distribution time, wherein the third video is a video which does not belong to any continuous video group.
Optionally, the first video, the second video, the third video, and the fourth video each include at least one of: a recorded video and a live video.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, including: controlling a terminal to display a first video on a display interface; receiving a first operation of entering a predetermined page; detecting, in response to the first operation, whether there is a second video strongly correlated with the first video; and controlling the terminal to highlight the second video on the predetermined page if the detection result is yes.
Optionally, the step of detecting whether there is a second video strongly correlated with the first video comprises: detecting whether the first video belongs to a continuous video; and, if the detection result is yes, determining that there is a second video strongly correlated with the first video, wherein the first video and the second video belong to the same continuous video group.
Optionally, the step of controlling the terminal to highlight the second video on the predetermined page includes: determining a dynamic resource for displaying the second video, and controlling the terminal to display the second video on the predetermined page based on the determined dynamic resource; or sorting the contents included in the predetermined page, and displaying the second video on the predetermined page by controlling the terminal to place the second video at the top.
Optionally, before receiving the first operation of entering the predetermined page, the method further includes: receiving an object to be issued; determining a type of the object, wherein the type comprises one of: continuous video and discontinuous video; under the condition that the type of the object is a continuous video, distributing a continuous type identification for the object; and under the condition that the type of the object is the non-continuous video, distributing a non-continuous type identification for the object.
Optionally, after assigning the object with the continuous type class identifier, the method further comprises: and determining the video content to which the object belongs, and distributing the same content identification for the objects belonging to the same video content, wherein different objects belonging to the same video content have different object identifications.
Optionally, the method further comprises: receiving click operation on a video displayed on a preset page; acquiring a video identifier of a video corresponding to the clicking operation; and controlling the terminal to play the clicked video on a display interface according to the video identification.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing method, including: receiving a sliding operation, wherein the sliding operation is used for requesting to display a first video; responding to the sliding operation, and playing the first video on a display interface; receiving a first operation of entering a preset page; highlighting a second video which is strongly related to the first video on the preset page in response to the first operation; receiving click operation on the second video; and responding to the clicking operation, and playing the second video.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including: the first display module is used for displaying a first video on a display interface; the first receiving module is used for receiving a first operation of entering a preset page; and the second display module is used for responding to the first operation and highlighting a second video which is strongly related to the first video on the preset page.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including: the first control module is used for controlling the terminal to display a first video on the display interface; the second receiving module is used for receiving a first operation of entering a preset page; the first detection module is used for responding to the first operation and detecting whether a second video which is strongly related exists in the first video; and the second control module is used for controlling the terminal to highlight the second video on the preset page under the condition that the detection result is yes.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus comprising: the third receiving module is used for receiving a sliding operation, wherein the sliding operation is used for requesting to display the first video; the third display module is used for responding to the sliding operation and playing the first video on a display interface; the fourth receiving module is used for receiving a first operation of entering a preset page; a fourth display module, configured to highlight, in response to the first operation, a second video that is strongly related to the first video on the predetermined page; a fifth receiving module, configured to receive a click operation on the second video; and the playing module is used for responding to the clicking operation and playing the second video.
A seventh aspect of the disclosed embodiments provides a terminal, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video processing method according to any one of the above.
In an eighth aspect of the embodiments of the present disclosure, a server is provided, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video processing method of any of the above.
A ninth aspect of the embodiments of the present disclosure provides a storage medium, wherein instructions, when executed by a processor of a terminal, enable the terminal to perform any one of the video processing methods described above.
According to a tenth aspect of embodiments of the present disclosure, there is provided a computer program product, which, when executed by a processor of a terminal, enables the terminal to perform any one of the video processing methods described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps that a first video is displayed on a display interface, namely, when a browsing user watches the first video, a first operation of entering a preset page triggered by the browsing user is received, and a second video strongly related to the first video is highlighted on the preset page in response to the first operation of the browsing user. The strongly correlated second video is highlighted, so that the browsing user can quickly find the strongly correlated second video, the video watching experience of the user is improved, and the problems of low searching efficiency and poor user experience in searching the to-be-browsed video in the related technology are effectively solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a block diagram illustrating a hardware configuration of a computer terminal for implementing a video processing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a first video processing method according to an example embodiment.
Fig. 3 is a flow chart illustrating a second video processing method according to an exemplary embodiment.
Fig. 4 is a flow diagram illustrating a video processing method three in accordance with an exemplary embodiment.
Fig. 5 is a flow diagram illustrating a video processing method four in accordance with an exemplary embodiment.
Fig. 6 is a flow diagram illustrating a video processing method five in accordance with an exemplary embodiment.
Fig. 7 is a flow diagram illustrating a video processing method six in accordance with an exemplary embodiment.
Fig. 8 is an apparatus block diagram illustrating a first video processing apparatus according to an example embodiment.
Fig. 9 is a device block diagram of a second video processing device shown in accordance with an example embodiment.
Fig. 10 is a device block diagram of a video processing device three shown according to an exemplary embodiment.
Fig. 11 is an apparatus block diagram of a terminal shown in accordance with an example embodiment.
FIG. 12 is a block diagram illustrating the structure of a server in accordance with an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example 1
According to an embodiment of the present disclosure, a method embodiment of a video processing method is presented. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The method provided by embodiment 1 of the present disclosure can be executed in a mobile terminal, a computer terminal or a similar computing device. Fig. 1 is a block diagram illustrating a hardware structure of a computer terminal (or mobile device) for implementing a video processing method according to an exemplary embodiment. As shown in fig. 1, the computer terminal 10 (or mobile device) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission device for communication functions. In addition, the computer terminal 10 may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the disclosed embodiments, the data processing circuit acts as a processor control (e.g., selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the video processing method in the embodiments of the present disclosure, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the video processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a particular specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the above operating environment, the present disclosure provides a video processing method as shown in fig. 2. Fig. 2 is a flowchart illustrating a first video processing method according to an exemplary embodiment, which is used in the computer terminal described above and includes the following steps, as shown in fig. 2.
In step S21, a first video is presented on the display interface.
In step S22, a first operation to enter a predetermined page is received.
In step S23, in response to the first operation, the second video strongly correlated with the first video is highlighted on the predetermined page.
With this processing, the first video is displayed on the display interface; that is, while the browsing user is watching the first video, the first operation of entering the predetermined page triggered by the browsing user is received, and, in response to that operation, a second video strongly correlated with the first video is highlighted on the predetermined page. Highlighting the strongly correlated second video allows the browsing user to find it quickly, without having to look for the video to be watched on the predetermined page by checking a number of published works one by one. This improves the user's video-watching experience and effectively solves the problems in the related art of low search efficiency and poor user experience when searching for a video to be browsed.
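As a purely illustrative sketch (not part of the disclosure), the flow of steps S21 to S23 might be modelled on the terminal side as follows; every class and function name here (Video, VideoRepository, PredeterminedPage, VideoFeedController, and so on) is an assumption introduced only for illustration.

```kotlin
// Illustrative sketch only; all names are assumptions, not part of the disclosure.
data class Video(val id: String, val groupId: String? = null)  // groupId: continuous video group, if any

interface DisplayInterface { fun play(video: Video) }

interface PredeterminedPage {
    fun highlight(videos: List<Video>)  // e.g. floating window or set-top display
    fun showDefault()                   // e.g. order works by publish time
}

interface VideoRepository { fun videosInSameGroup(video: Video): List<Video> }

class VideoFeedController(private val repository: VideoRepository) {

    // Step S21: present the first video on the display interface.
    fun showFirstVideo(video: Video, display: DisplayInterface) = display.play(video)

    // Steps S22/S23: on the first operation (entering the predetermined page),
    // highlight the second video(s) strongly correlated with the first video.
    fun onEnterPredeterminedPage(current: Video, page: PredeterminedPage) {
        val related = repository.videosInSameGroup(current)
        if (related.isNotEmpty()) page.highlight(related) else page.showDefault()
    }
}
```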
In one or more alternative embodiments, the above method may be applied to scenes of browsing videos, for example, swiping through videos in an application program, watching videos on a video web page, and so on. The above-mentioned predetermined page may be a page corresponding to each of these scenes; for example, when the method is applied to a video-swiping scene, the predetermined page may be an anchor page (i.e., the personal homepage of an anchor user); when the method is applied to a scene of viewing a video on a web page, the predetermined page may be a video web page including a plurality of videos (the videos included in the video web page may be of a plurality of types, etc.). In the following alternative embodiments, the video-swiping scene is taken as an example for description.
Swiping through videos has become a major form of entertainment for the public: more and more people no longer watch television at home in their free time, but prefer to browse videos while lying down. However, as mentioned above, in the related art, when the browsing user slides to a video that may be part of a set of continuous streams (for example, an episode of a TV series or a part of a movie clip) and wants to continue watching the other segments, the browsing user has to go to the works shown on the personal homepage of the anchor user. If the anchor user has a very large number of works and their publication times are not consecutive, finding an adjacent segment of the stream is very inconvenient, the user experience is poor, and the user churn rate is high.
When an application is used to swipe through videos or watch live video, a video is presented in one of two forms, a continuous type and a non-continuous type. Continuous type means that the current video is part of a set of continuous videos (possibly an episode of a certain TV show or part of a movie clip); the personal works page of the anchor user may still contain videos strongly correlated with the current video. Non-continuous type means that the current video is independent, and the anchor user's other personal works have nothing to do with the current video.
In one or more alternative embodiments, the second video that is strongly correlated with the first video may be one or more videos that belong to the same continuous video group as the first video. It should be noted that a continuous video group here relates to one piece of content, i.e., different contents are divided into different video groups. For example, a video group may be the clips of a television drama, the combined clips of a movie, the clips of a documentary, the combined clips of a series of exhibition contents, the combined clips of a singer's concert video, and so on.
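One possible way to model this notion of a continuous video group and of strong correlation, again with assumed names rather than anything defined by the disclosure, is sketched below: two videos are treated as strongly correlated exactly when they share the same (non-null) group identifier.

```kotlin
// Standalone sketch; ContinuousVideoGroup and all field names are assumptions.
data class GroupVideo(val videoId: String, val groupId: String?, val indexInGroup: Int?)

data class ContinuousVideoGroup(
    val groupId: String,
    val title: String,            // e.g. the drama or movie the clips come from
    val videos: List<GroupVideo>  // ordered clips 01, 02, 03, ...
)

// Strongly correlated: two distinct videos that belong to the same (non-null) continuous group.
fun stronglyCorrelated(a: GroupVideo, b: GroupVideo): Boolean =
    a.groupId != null && a.groupId == b.groupId && a.videoId != b.videoId
```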
In one or more optional embodiments, the presentation of the first video on the display interface can be triggered in various ways: the browsing user may slide to the first video by swiping the touch screen while browsing short videos in the application program; or a friend within the application shares the first video and the browsing user opens the shared link; or the application recommends the first video and the browsing user opens the recommended link.
In one or more optional embodiments, receiving the first operation of entering the predetermined page may also be implemented in various ways. For example, when the predetermined page is an anchor page or a video detail page in a video-swiping scene, the browsing user may trigger the first operation by clicking the avatar of the anchor user; as another example, the browsing user may trigger the first operation by a predetermined screen-sliding operation on the display interface (e.g., sliding the screen left or right). In any of these ways, receiving the first operation of entering the predetermined page indicates that, while browsing the first video, the browsing user requests to enter the predetermined page to view the second video strongly correlated with the first video.
In one or more alternative embodiments, when a predetermined page highlights a second video that is strongly related to the first video, the manner in which the second video is presented is different because the manner of highlighting is different. The following examples are given.
For example, a second video that is strongly related to the first video may be displayed in the form of a dynamic resource on a predetermined page. Fig. 3 is a flowchart illustrating a second video processing method according to an exemplary embodiment, where, as shown in fig. 3, step S23 includes:
in step S31, a second video that is strongly correlated with the first video is displayed in the form of a dynamic resource on a predetermined page, i.e., the manner of highlighting is embodied by a dynamic resource. The dynamic resource referred to here is a form in which the content is carried. It may be displayed, for example, in a small window, e.g., by playing the second video strongly correlated with the first video in a small window so as to highlight it; as another example, the second video strongly correlated with the first video may be displayed on a page layered above the displayed works layer, and this upper page partly covers the works-layer page, so that the second video is highlighted. Both the small window and the upper page are dynamic resources that highlight the second video.
For another example, when the predetermined page highlights the second video strongly related to the first video, the second video strongly related to the first video may also be displayed in a set-top manner on the predetermined page. Fig. 4 is a flowchart illustrating a third video processing method according to an exemplary embodiment, where, as shown in fig. 4, step S23 includes:
in step S41, a second video that is strongly correlated with the first video is displayed in a set-top manner on a predetermined page, i.e., the manner of highlighting is embodied by placing the video at the top of the predetermined page. When the first video is being played and an operation instruction of entering the predetermined page input by the browsing user is received, the browsing user is considered to want to watch the second video strongly correlated with the first video. Therefore, in response to the operation instruction, the works in the predetermined page are reordered and the strongly correlated second video is placed directly at the top, so that the browsing user sees it at a glance when entering the predetermined page. To further improve the user experience, when there are multiple second videos, the second video adjacent to the first video may be ranked foremost, so that the user sees the video to be watched at first sight and the browsing experience is good.
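The set-top reordering described above could look roughly like the following sketch, under the assumption that each work optionally carries a group identifier and an index within its group; clips of the current group are moved to the front, the clips adjacent to the current one first, and everything else keeps the default publish-time order. The Work type and its field names are hypothetical.

```kotlin
import kotlin.math.abs

// Illustrative only; Work and its fields are assumed names.
data class Work(
    val id: String,
    val publishedAt: Long,
    val groupId: String? = null,
    val indexInGroup: Int? = null
)

fun orderPredeterminedPage(works: List<Work>, current: Work): List<Work> {
    if (current.groupId == null) {
        // Non-continuous current video: default order, most recently published first.
        return works.sortedByDescending { it.publishedAt }
    }
    val (sameGroup, others) = works.partition { it.groupId == current.groupId && it.id != current.id }
    // Set-top: clips of the same group come first, those adjacent to the current clip foremost.
    val topped = sameGroup.sortedBy { abs((it.indexInGroup ?: 0) - (current.indexInGroup ?: 0)) }
    return topped + others.sortedByDescending { it.publishedAt }
}
```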
In one or more optional embodiments, when the predetermined page highlights the second video strongly correlated with the first video, some information related to the second video may also be displayed, for example, to improve the recognizability of the second video, or to let the browsing user know the continuous video group to which the second video belongs. When the second video is displayed, the continuous video group and first identification information of the continuous video group may also be displayed, wherein the continuous video group includes the first video and the second video, and the first identification information is a video frame in the continuous video group, a screenshot from the continuous video group, or the like; for example, it may be a classic clip featuring a representative character of the continuous video group, a classic picture from the continuous video group, or simply a poster of the continuous video group. In other words, when the second video is displayed, it may be displayed as part of its continuous video group, and the continuous video group is identified by the identification information. Displaying in the form of a continuous video group, together with the first identification information of the group, conveys the video content of the second video to a certain extent and thus improves its recognizability. The first identification information may take the form of graphics, characters, pictures, animations, or a variation or combination of these forms. For example, when the second video is a cut from a TV series, the first identification information may be a poster of the TV series; as another example, when the second video is a cut from a singer's concert video, the first identification information may be the singer's portrait together with the song sung in that section.
In one or more alternative embodiments, when the continuous video group is displayed, the videos included in the continuous video group and second identification information for identifying those videos may be displayed. The videos included in the continuous video group may be displayed in various ways: for example, in a video list, with numbers used as the second identification information to identify the videos in the list; or as thumbnails, with the arrangement position of each thumbnail serving as the second identification information. In addition, positioning information is displayed on the first video shown in the continuous video group; the positioning information identifies the position of the first video in the continuous video group. For example, the words "just played" may be displayed on the first video; since this label is set only on the first video, the first video can be distinguished from the other videos in the continuous video group, thereby positioning it. Through this positioning, the browsing user knows where the video just watched sits in the whole continuous video group, and hence how far through the continuous content the user has progressed, so the user can manage the time spent watching. For example, suppose the browsing user is up late at night watching the first video, but the position of the first video shows that it is only the beginning, or only a small early part, of the whole continuous video; continuing to the next part would certainly affect the user's rest, so the browsing user can come back to watch the next part when there is plenty of time, or directly exit the anchor page and go to rest. This gives proper consideration to the user's experience.
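A minimal sketch of such a group display, with the second identification information rendered as a numeric label and the positioning information rendered as a "just played" flag on the current clip, might look as follows (all names are assumed):

```kotlin
// Assumed names; a minimal view model for displaying a continuous video group.
data class ClipEntry(
    val videoId: String,
    val label: String,        // second identification information, e.g. "02"
    val justPlayed: Boolean   // positioning information marking the first video
)

fun buildGroupDisplay(groupVideoIds: List<String>, currentVideoId: String): List<ClipEntry> =
    groupVideoIds.mapIndexed { index, id ->
        ClipEntry(
            videoId = id,
            label = "%02d".format(index + 1),
            justPlayed = (id == currentVideoId)  // "just played" is shown on the current clip only
        )
    }
```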
In one or more optional embodiments, after the second video strongly correlated with the first video is highlighted on the predetermined page, the displayed second video can be operated on; for example, a playing operation on the second video may be received, and the second video is played in response to the playing operation, completing the browsing user's viewing experience.
In one or more optional embodiments, before the first video is displayed on the display interface, or after the second video displayed on the display interface has been played, a third video may also be displayed on the display interface, wherein the third video is a video that does not belong to any continuous video group; a second operation of entering a predetermined page is received; and, in response to the second operation, a fourth video included in the predetermined page is displayed in a default display manner. The default display manner may be, for example, displaying videos according to their distribution time, or another display order. Since the third video does not belong to any continuous video group, the fourth video on the predetermined page can be regarded as a video unrelated to the third video; that is, when the browsing user enters the predetermined page, the user can browse the works displayed there at will.
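For the non-continuous case, the default display manner described above reduces to ordering the works of the predetermined page by distribution time, for example as in this small sketch (PageItem is an assumed name):

```kotlin
// Assumed names; default display for a page entered from a video with no continuous group.
data class PageItem(val id: String, val publishedAt: Long)

fun defaultDisplayOrder(items: List<PageItem>): List<PageItem> =
    items.sortedByDescending { it.publishedAt }  // most recently published first
```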
It should be noted that the forms of the first video, the second video, the third video, and the fourth video are not limited; for example, each of them may be a recorded video or a live video.
Fig. 5 is a flowchart illustrating a fourth video processing method according to an exemplary embodiment, which is used in a server communicating with the computer terminal described above; as shown in fig. 5, the method includes the following steps.
In step S51, the terminal is controlled to present the first video on the display interface.
In step S52, a first operation to enter a predetermined page is received.
In step S53, in response to the first operation, it is detected whether there is a strongly correlated second video in the first video.
In step S54, if the detection result is yes, the terminal is controlled to highlight the second video on the predetermined page.
With this processing, after the terminal is controlled to play the first video, in response to a first operation of entering a predetermined page triggered by the browsing user, it is detected whether there is a second video strongly correlated with the first video, and if so, the terminal is controlled to highlight the second video on the predetermined page. Controlling the terminal to highlight the strongly correlated second video allows the browsing user to find it quickly, without having to look for the video to be watched on the predetermined page by checking a number of published works one by one. This improves the user's video-watching experience and effectively solves the problems in the related art of low search efficiency and poor user experience when looking for related videos while browsing.
In one or more optional embodiments, whether two videos belong to the same continuous video group can be regarded as whether they are strongly correlated. Therefore, when detecting whether there is a second video strongly correlated with the first video, it may be detected whether the first video belongs to a continuous video; if the detection result is yes, it is determined that there is a second video strongly correlated with the first video, wherein the first video and the second video belong to the same continuous video group.
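On the server side, this detection could be sketched as a lookup over published video records, assuming each record carries a continuous-type flag and an optional content identifier; the names below are illustrative only.

```kotlin
// Server-side sketch; VideoRecord and VideoStore are assumed names.
data class VideoRecord(val videoId: String, val continuous: Boolean, val contentId: String? = null)

class VideoStore(private val records: Map<String, VideoRecord>) {
    // A strongly correlated second video exists when the first video is of the continuous type
    // and another record shares its content identifier.
    fun findStronglyCorrelated(firstVideoId: String): List<VideoRecord> {
        val first = records[firstVideoId] ?: return emptyList()
        if (!first.continuous || first.contentId == null) return emptyList()
        return records.values.filter { it.contentId == first.contentId && it.videoId != firstVideoId }
    }
}
```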
In one or more optional embodiments, the terminal is controlled to highlight the second video on the predetermined page, which may be implemented in various ways, for example, the dynamic resource for displaying the second video may be determined first, that is, the server allocates the dynamic resource for the terminal to display the second video, and then the terminal is controlled to display the second video on the predetermined page based on the determined dynamic resource; or the server sorts the content included in the preset page, and then the sorted video is issued to the terminal, so that the second video is displayed on the preset page in a mode of controlling the terminal to set the second video at the top.
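The two highlighting strategies mentioned here, a dynamic resource versus a re-sorted (set-top) page, could be expressed as two kinds of instruction sent to the terminal, for example as in the following assumed-name sketch:

```kotlin
// Assumed names; two ways the server might instruct the terminal to highlight the second video.
sealed interface HighlightInstruction
data class ShowInFloatingWindow(val videoIds: List<String>) : HighlightInstruction   // dynamic resource
data class ShowSetTop(val orderedPageVideoIds: List<String>) : HighlightInstruction  // re-sorted page

fun buildHighlightInstruction(
    useDynamicResource: Boolean,
    relatedIds: List<String>,
    pageIds: List<String>
): HighlightInstruction =
    if (useDynamicResource) ShowInFloatingWindow(relatedIds)
    else ShowSetTop(relatedIds + pageIds.filterNot { it in relatedIds })  // related videos pinned first
```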
In the related art, taking the scene of swiping videos in an application as an example, when works are published through the application, apart from certain specially set works (for example, live-broadcast works), works of other forms are ordered according to a queue model: works published earlier are placed behind works published later, so the most recently published works come first. When works are distributed in this way, other browsing users always see the anchor user's latest works first when entering the anchor user's personal homepage (i.e., the predetermined page). The works are distinguished only by publication time, not by content, so this cannot satisfy the user's need to distinguish content and quickly find the video to be watched.
In one or more optional embodiments, before receiving the first operation of entering the predetermined page, the content published on the predetermined page is classified, for example, the following process may be adopted: receiving an object to be published, wherein the object is the video; determining a type of the object, wherein the type comprises one of: continuous video and discontinuous video; under the condition that the type of the object is a continuous video, distributing a continuous type identification for the object; and in the case that the type of the object is the non-continuous type video, distributing non-continuous type identification to the object. All contents of the preset page are integrally divided through different types of the contents, so that the browsing user can conveniently classify and search, the searching time of the browsing user is saved, and the browsing experience is improved.
In one or more optional embodiments, after assigning the object with the continuous type class identification, it is further possible to: determining the video content to which the object belongs, and distributing the same content identification for the objects belonging to the same video content, wherein different objects belonging to the same video content have different object identifications. Distributing content identification for objects belonging to the same video content to realize division of the content of which video content the object belongs to; different object identifications are distributed to different objects of the same video content, and the division of the same video content on different segments is realized. Through the processing, not only is integral division realized, but also detail division is realized, namely, large division is combined with small division, classification and division of the objects of the anchor page are realized, the interestingness of browsing users is increased, and the stickiness of the users is improved.
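A possible shape for these identifiers, purely as an assumption for illustration, is sketched below: a type identifier distinguishes continuous from non-continuous works, a content identifier is shared by all clips of the same video content, and the object identifier stays unique per work.

```kotlin
// Assumed names; assigning type, content and object identifiers when a work is published.
enum class VideoType { CONTINUOUS, NON_CONTINUOUS }

data class PublishedObject(
    val objectId: String,   // unique per work, e.g. "A-DramaX-02" (hypothetical identifier)
    val type: VideoType,    // continuous / non-continuous type identification
    val contentId: String?  // shared by all clips of the same video content, null otherwise
)

fun tagOnPublish(objectId: String, type: VideoType, contentId: String?): PublishedObject =
    PublishedObject(
        objectId = objectId,
        type = type,
        contentId = if (type == VideoType.CONTINUOUS) contentId else null
    )
```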
In one or more optional embodiments, a click operation on a video displayed on a predetermined page is received; acquiring a video identifier of a video corresponding to the click operation; and controlling the terminal to play the clicked video on the display interface according to the video identification. After the videos displayed in the preset page are divided into the whole and the details, each video corresponds to a unique identifier, so that when the videos displayed on the preset page are clicked, the clicking operation can be responded according to the identifiers of the videos, and the videos corresponding to the identifiers are played.
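The click-to-play handling could then be a simple lookup keyed by the unique video identifier, as in this assumed-name sketch:

```kotlin
// Assumed names; resolving a click on the predetermined page into a playback command.
data class PlayCommand(val videoId: String, val streamUrl: String)

class ClickHandler(private val streamUrls: Map<String, String>) {
    // Look up the clicked video by its unique identifier and tell the terminal to play it.
    fun onVideoClicked(videoId: String): PlayCommand? {
        val url = streamUrls[videoId] ?: return null
        return PlayCommand(videoId, url)
    }
}
```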
Fig. 6 is a flow chart illustrating a video processing method five according to an exemplary embodiment, as shown in fig. 6, including the following steps.
In step S61, a slide operation is received, wherein the slide operation is used to request that the first video be displayed;
in step S62, in response to the sliding operation, playing a first video on the display interface;
in step S63, a first operation to enter a predetermined page is received;
in step S64, in response to the first operation, highlighting a second video that is strongly related to the first video on a predetermined page;
in step S65, a click operation on the second video is received;
in step S66, in response to the click operation, the second video is played.
With the above interaction with the terminal, the first video is played on the display interface in response to the browsing user's sliding operation; in response to a first operation by which the browsing user requests to enter a predetermined page, a second video strongly correlated with the first video is highlighted on the predetermined page; and the second video is played in response to a click operation on it. Highlighting the strongly correlated second video allows the browsing user to find it quickly, without having to look for the video to be watched on the predetermined page by checking a number of published works one by one. This improves the user's video-watching experience and effectively solves the problems in the related art of low search efficiency and poor user experience when looking for related videos while browsing.
An optional implementation is provided in the embodiments of the present disclosure to solve the following problem in the related art: in a scene where an application is used to swipe through videos, when a browsing user wants to view the next episode or the next part of a certain video, the user has to search for it one work at a time on the personal homepage of the anchor user, which costs considerable time and results in a poor user experience. Fig. 7 is a flowchart illustrating a video processing method six according to an exemplary embodiment; as shown in fig. 7, the flow includes the following processes:
1. the works published by the anchor user are differentiated, for example, into a continuous stream A and a non-continuous stream B;
2. within the continuous stream A, different pieces of content (for example different TV series or movies) are differentiated, e.g. A-DramaX and A-DramaY, representing different video groups that are continuous streams;
3. each work published by the anchor user corresponds to a unique identifier, for example A-DramaX-01 and A-DramaX-02, representing parts 01 and 02 of the continuous stream DramaX, and B-Scene, representing a non-continuous work;
4.1 after the browsing user views an episode of the current continuous stream (for example A-DramaX-01) and enters the predetermined page, the remaining videos of the same continuous group are returned in order (for example [A-DramaX-02, A-DramaX-03, ...]);
4.2 after the browsing user views the current non-continuous stream (for example B-Scene), i.e. some independently viewed video, the works are returned in the default order, by release time from most recent to oldest;
5. the response content is returned according to the unique identifier of the video (a parsing sketch for identifiers of this shape follows this list).
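Assuming the illustrative identifier shape used above ("A-<content>-<index>" for continuous works, "B-<name>" for non-continuous ones, which is not a format prescribed by the disclosure), a parser for such identifiers might look like this:

```kotlin
// Assumed identifier shape only ("A-<content>-<index>" / "B-<name>"), mirroring the examples above.
sealed interface WorkId
data class ContinuousWorkId(val contentName: String, val index: Int) : WorkId
data class NonContinuousWorkId(val name: String) : WorkId

fun parseWorkId(raw: String): WorkId? {
    val parts = raw.split("-")
    return when {
        parts.size == 3 && parts[0] == "A" ->
            parts[2].toIntOrNull()?.let { ContinuousWorkId(contentName = parts[1], index = it) }
        parts.size == 2 && parts[0] == "B" -> NonContinuousWorkId(parts[1])
        else -> null
    }
}
```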
For example:
suppose that: the anchor user's personal homepage contains the following works: coconutleaf Tu 01-10, MI Yue 01-10, dancing, singing, playing cards.
Scene 1: the browsing user watches independent videos (singing) at present, so that works are sequentially displayed on the personal homepage of the anchor user according to the release time sequence of the works when the personal homepage of the anchor user is entered; as shown in fig. 4
Scene 2: the browsing user currently watches a continuous video (a-conututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututututut. Therefore, the browsing user can conveniently and quickly find the video to be watched, and the function of facilitating the user to check is realized. Therefore, the problem that a browsing user is difficult to search for strongly-related videos can be effectively solved, and meanwhile, the user conversion rate can be effectively improved.
Through this optional implementation, the published works are divided into continuous streams and non-continuous streams, i.e., classified as a whole, which makes classification and searching easier; a classic segment featuring a representative character in the video is extracted, so that the browsing user can see at a glance what a group contains and quickly reach the target video; and the classic segment featuring the representative character is used to identify the video group to which the current video belongs, which increases the recognizability of the video group, adds interest, and improves user stickiness.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required for the disclosure.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present disclosure.
Example 2
According to an embodiment of the present disclosure, there is also provided an apparatus for implementing the first video processing method, and fig. 8 is an apparatus block diagram of the first video processing apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus includes: a first display module 81, a first receiving module 82 and a second display module 83, which will be described below.
The first display module 81 is configured to display a first video on a display interface; a first receiving module 82, connected to the first display module 81, for receiving a first operation of entering a predetermined page; and a second display module 83, connected to the first receiving module 82, for highlighting a second video that is strongly related to the first video on a predetermined page in response to the first operation.
It should be noted that the first display module 81, the first receiving module 82 and the second display module 83 correspond to steps S21 to S23 in embodiment 1, and the modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present disclosure, there is also provided an apparatus for implementing the above-described video processing method four, and fig. 9 is an apparatus block diagram of a video processing apparatus two shown according to an exemplary embodiment. Referring to fig. 9, the apparatus includes: a first control module 91, a second receiving module 92, a first detecting module 93 and a second control module 94, which will be described below.
The first control module 91 is used for controlling the terminal to display a first video on the display interface; a second receiving module 92, connected to the first control module 91, for receiving a first operation of entering a predetermined page; a first detecting module 93, connected to the second receiving module 92, for responding to the first operation and detecting whether there is a strongly correlated second video in the first video; and a second control module 94, connected to the first detection module 93, for controlling the terminal to highlight the second video on a predetermined page if the detection result is yes.
It should be noted that the first control module 91, the second receiving module 92, the first detecting module 93 and the second control module 94 correspond to steps S51 to S54 in embodiment 1, and the modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present disclosure, there is also provided an apparatus for implementing the video processing method three described above, and fig. 10 is an apparatus block diagram of the video processing apparatus three shown according to an exemplary embodiment. Referring to fig. 10, the apparatus includes: a third receiving module 101, a third display module 102, a fourth receiving module 103, a fourth display module 104, a fifth receiving module 105 and a playing module 106, which will be described below.
A third receiving module 101, configured to receive a sliding operation, where the sliding operation is used to request to display the first video; a third display module 102, connected to the third receiving module 101, for responding to the sliding operation and playing the first video on the display interface; a fourth receiving module 103, connected to the third displaying module 102, for receiving a first operation of entering a predetermined page; a fourth display module 104, connected to the fourth receiving module 103, for displaying the second video strongly related to the first video on the predetermined page in response to the first operation; a fifth receiving module 105, connected to the fourth displaying module 104, for receiving a click operation on the second video; and a playing module 106, connected to the fifth receiving module 105, for playing the second video in response to the click operation.
It should be noted that the third receiving module 101, the third display module 102, the fourth receiving module 103, the fourth display module 104, the fifth receiving module 105 and the playing module 106 correspond to steps S61 to S66 in embodiment 1, and the modules share the implementation examples and application scenarios of the corresponding steps but are not limited to the disclosure in embodiment 1. It should also be noted that the above modules may run, as a part of the apparatus, on the computer terminal 10 provided in embodiment 1.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Example 3
Embodiments of the present disclosure may provide a terminal, which may be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the terminal may also be a terminal device such as a mobile terminal.
Optionally, in this embodiment, the terminal may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, fig. 11 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment. As shown in fig. 11, the terminal may include: one or more processors 111 (only one is shown) and a memory 112 for storing instructions executable by the processor 111, wherein the processor 111 is configured to execute the instructions to implement any of the video processing methods described above.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the video processing method and apparatus in the embodiments of the present disclosure; the processor performs various functional applications and data processing, that is, implements the video processing method, by running the software programs and modules stored in the memory. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and such remote memory may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: displaying a first video on a display interface; receiving a first operation of entering a preset page; in response to the first operation, a second video strongly correlated with the first video is highlighted on a predetermined page.
Optionally, the processor may further execute the program code of the following steps: the second video that is strongly correlated with the first video includes: one or more videos belonging to the same continuous video group as the first video.
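For illustration only, since the disclosure does not tie the continuous video group to any particular data structure, the relationship between a first video and a strongly correlated second video could be modelled along the following lines; every type, field and function name here is a hypothetical choice made for this sketch:

```kotlin
// Illustrative sketch only; the type names and fields are hypothetical and not taken from the disclosure.
data class Video(
    val videoId: String,          // unique object identifier of the published video
    val title: String,
    val publishTime: Long,        // publication time, e.g. epoch milliseconds
    val contentId: String? = null // identifier of the continuous video group; null for stand-alone videos
)

// Two videos are "strongly correlated" when they share the same non-null content identifier,
// i.e. they belong to the same continuous video group (e.g. two episodes of one series).
fun stronglyCorrelated(a: Video, b: Video): Boolean =
    a.contentId != null && a.contentId == b.contentId && a.videoId != b.videoId

// The candidate second videos for a given first video are the other members of its continuous video group.
fun secondVideosFor(first: Video, candidates: List<Video>): List<Video> =
    candidates.filter { stronglyCorrelated(first, it) }
```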
Optionally, the processor may further execute the program code of the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: and displaying a second video which is strongly related to the first video in the form of dynamic resources on a preset page.
Optionally, the processor may further execute the program code of the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: and displaying a second video which is strongly related to the first video in a top-positioned mode on the preset page.
Optionally, the processor may further execute the program code of the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: displaying a continuous video group and first identification information of the continuous video group, wherein the continuous video group comprises a first video and a second video, and the first identification information is a video frame in the continuous video group.
Optionally, the processor may further execute the program code of the following steps: the step of displaying the set of consecutive videos comprises: and displaying videos included in the continuous video group and second identification information for identifying the videos, wherein positioning information is displayed in a first video displayed in the continuous video group, and the positioning information is used for identifying the position of the first video in the continuous video group.
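Again purely as a hedged sketch, since the disclosure describes the displayed information rather than its representation, the group display of this step — a cover frame as the first identification information, per-video labels as the second identification information, and positioning information marking where the first video sits in the group — might be assembled as follows; all names are invented for the example:

```kotlin
// Hypothetical view model for rendering a continuous video group on the predetermined page.
data class EpisodeEntry(
    val videoId: String,
    val label: String,        // second identification information, e.g. "Episode 3 · <title>"
    val isCurrent: Boolean    // positioning information: marks the position of the first video in the group
)

data class GroupCard(
    val coverFrameUrl: String,        // first identification information: a video frame taken from the group
    val episodes: List<EpisodeEntry>
)

fun buildGroupCard(
    groupVideos: List<Pair<String, String>>,  // (videoId, title) pairs in group order
    currentVideoId: String,
    coverFrameUrl: String
): GroupCard = GroupCard(
    coverFrameUrl = coverFrameUrl,
    episodes = groupVideos.mapIndexed { index, (videoId, title) ->
        EpisodeEntry(
            videoId = videoId,
            label = "Episode ${index + 1} · $title",
            isCurrent = videoId == currentVideoId
        )
    }
)
```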
Optionally, the processor may further execute the program code of the following steps: after the second video strongly related to the first video is highlighted on the preset page, the method further includes: receiving a playing operation on the second video; and playing the second video in response to the playing operation.
Optionally, the processor may further execute the program code of the following steps: displaying the third video on the display interface; receiving a second operation of entering a preset page; and responding to the second operation, and displaying the fourth video included in the preset page in a default display mode.
Optionally, the processor may further execute the program code of the following steps: the default display mode includes: displaying the fourth video according to the video publication time, wherein the third video is a video that does not belong to any continuous video group.
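The two display branches described above — pinning the strongly correlated group to the top of the predetermined page, versus the default ordering by publication time — can be summarised in one small sketch; the item type and field names below are illustrative assumptions, not prescribed by the disclosure:

```kotlin
// Hypothetical page item: contentId is the continuous-video-group identifier, or null.
data class PageItem(val videoId: String, val publishTime: Long, val contentId: String?)

fun orderPredeterminedPage(currentVideo: PageItem, pageItems: List<PageItem>): List<PageItem> {
    val groupId = currentVideo.contentId
    return if (groupId != null) {
        // Highlighting case: members of the same continuous video group are set at the top.
        val (related, others) = pageItems.partition { it.contentId == groupId }
        related + others.sortedByDescending { it.publishTime }
    } else {
        // Default display mode: the page is ordered by video publication time only.
        pageItems.sortedByDescending { it.publishTime }
    }
}
```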
Optionally, the processor may further execute the program code of the following steps: the first video, the second video, the third video and the fourth video each include at least one of: a recorded video and a live video.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: receiving a sliding operation, wherein the sliding operation is used for requesting to display a first video; responding to the sliding operation, and playing a first video on a display interface; receiving a first operation of entering a preset page; highlighting a second video which is strongly related to the first video on a preset page in response to the first operation; receiving click operation on a second video; and responding to the click operation, and playing the second video.
An embodiment of the present disclosure may provide a server, and fig. 12 is a block diagram illustrating a structure of a server according to an exemplary embodiment. As shown in fig. 12, the server 120 may include: one or more processing components 121 (only one is shown), a memory 122 for storing instructions executable by the processing component 121, a power supply component 123 for supplying power, a network interface 124 for communicating with an external network, and an input/output (I/O) interface 125 for data transmission with external devices; wherein the processing component 121 is configured to execute the instructions to implement any of the video processing methods described above.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the video processing method and apparatus in the embodiments of the present disclosure; the processing component performs various functional applications and data processing, that is, implements the video processing method, by running the software programs and modules stored in the memory. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processing component, and such remote memory may be connected to the server through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processing component can call the information and the application program stored in the memory through the transmission device to execute the following steps: controlling the terminal to display a first video on a display interface; receiving a first operation of entering a preset page; detecting, in response to the first operation, whether there is a second video strongly correlated with the first video; and controlling the terminal to highlight the second video on a preset page if the detection result is yes.
Optionally, the processing component may further execute program codes of the following steps: the step of detecting whether there is a second video strongly correlated with the first video includes: detecting whether the first video belongs to a continuous video type; and if the detection result is yes, determining that a second video strongly correlated with the first video exists, wherein the first video and the second video belong to the same continuous video group.
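A minimal server-side sketch of this detection step, assuming videos are stored with the type and content identifiers introduced later in this embodiment; the store shape, field names and the `CONTINUOUS` constant are all assumptions of this example rather than part of the disclosure:

```kotlin
// Hypothetical stored metadata for a published video.
data class StoredVideo(val videoId: String, val typeId: String, val contentId: String?)

fun hasStronglyCorrelatedVideo(firstVideoId: String, videoStore: Map<String, StoredVideo>): Boolean {
    val first = videoStore[firstVideoId] ?: return false
    if (first.typeId != "CONTINUOUS") return false   // the first video is not of the continuous type
    val groupId = first.contentId ?: return false    // continuous videos carry a group content identifier
    // A strongly correlated second video exists if some other video shares the same group.
    return videoStore.values.any { it.contentId == groupId && it.videoId != firstVideoId }
}
```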
Optionally, the processing component may further execute program codes of the following steps: the step of controlling the terminal to highlight the second video on the predetermined page includes: determining a dynamic resource for displaying the second video, and controlling the terminal to display the second video on the predetermined page based on the determined dynamic resource; or sorting the content included in the predetermined page, and controlling the terminal to display the second video at the top of the predetermined page.
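The two control options above could be expressed as alternative instructions returned to the terminal; the sealed-interface shape, the names and the example URL below are invented for this sketch and are not part of the claimed method:

```kotlin
// Hypothetical payloads a server might hand to the terminal to realise either highlighting option.
sealed interface HighlightInstruction
data class DynamicResource(val videoId: String, val animationUrl: String) : HighlightInstruction
data class TopSetOrder(val orderedVideoIds: List<String>) : HighlightInstruction

fun buildHighlightInstruction(
    secondVideoId: String,
    pageVideoIds: List<String>,
    useDynamicResource: Boolean
): HighlightInstruction =
    if (useDynamicResource) {
        // Option 1: display the second video in the form of a dynamic (animated) resource.
        DynamicResource(secondVideoId, animationUrl = "https://example.invalid/anim/$secondVideoId")
    } else {
        // Option 2: re-sort the page content so that the second video is set at the top.
        TopSetOrder(listOf(secondVideoId) + pageVideoIds.filterNot { it == secondVideoId })
    }
```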
Optionally, the processing component may further execute program codes of the following steps: before receiving the first operation of entering the predetermined page, the method further includes: receiving an object to be published; determining a type of the object, wherein the type is one of: continuous video and non-continuous video; assigning a continuous-type identifier to the object if the type of the object is continuous video; and assigning a non-continuous-type identifier to the object if the type of the object is non-continuous video.
Optionally, the processing component may further execute program codes of the following steps: after assigning the continuous-type identifier to the object, the method further includes: determining the video content to which the object belongs, and assigning the same content identifier to objects belonging to the same video content, wherein different objects belonging to the same video content have different object identifiers.
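Taken together, the two publishing steps above amount to tagging each uploaded object with a type identifier and, for continuous videos, a shared content identifier plus a unique object identifier. A rough sketch under those assumptions follows; the identifier values and the UUID choice are illustrative only:

```kotlin
import java.util.UUID

// Hypothetical record attached to each published object.
data class PublishedObject(
    val objectId: String,   // unique per object, even within one video content
    val typeId: String,     // "CONTINUOUS" or "NON_CONTINUOUS"
    val contentId: String?  // shared by all objects belonging to the same video content
)

fun classifyAndTag(isContinuous: Boolean, videoContentKey: String?): PublishedObject =
    if (isContinuous) {
        PublishedObject(
            objectId = UUID.randomUUID().toString(),  // different objects get different object identifiers
            typeId = "CONTINUOUS",
            contentId = videoContentKey               // same content identifier for the same video content
        )
    } else {
        PublishedObject(
            objectId = UUID.randomUUID().toString(),
            typeId = "NON_CONTINUOUS",
            contentId = null                          // non-continuous videos join no continuous video group
        )
    }
```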
Optionally, the processing component may further execute program codes of the following steps: receiving a click operation on a video displayed on the preset page; acquiring a video identifier of the video corresponding to the click operation; and controlling the terminal to play the clicked video on the display interface according to the video identifier.
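A correspondingly small sketch of this click-to-play step; `playOnDisplayInterface` stands in for whatever player call the terminal actually exposes, and the position-based lookup is just one assumed way the clicked video's identifier might be obtained:

```kotlin
// Hypothetical click event carrying only the tapped position on the predetermined page.
data class ClickEvent(val positionOnPage: Int)

fun onVideoClicked(
    click: ClickEvent,
    pageVideoIds: List<String>,
    playOnDisplayInterface: (videoId: String) -> Unit
) {
    // Acquire the video identifier of the video corresponding to the click operation.
    val videoId = pageVideoIds.getOrNull(click.positionOnPage) ?: return
    // Control the terminal to play the clicked video on the display interface according to that identifier.
    playOnDisplayInterface(videoId)
}
```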
The processing component can call the information and the application program stored in the memory through the transmission device to execute the following steps: receiving a sliding operation, wherein the sliding operation is used for requesting to display a first video; responding to the sliding operation, and playing a first video on a display interface; receiving a first operation of entering a preset page; highlighting a second video which is strongly related to the first video on a preset page in response to the first operation; receiving click operation on a second video; and responding to the click operation, and playing the second video.
It can be understood by those skilled in the art that the structures shown in fig. 11 and fig. 12 are only schematic, for example, the terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 11 and 12 do not limit the structure of the electronic device. For example, it may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 11, 12, or have a different configuration than shown in fig. 11, 12.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 4
In an exemplary embodiment, there is also provided a storage medium including instructions that, when executed by a processor of a terminal, enable the terminal to perform the video processing method of any one of the above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, for example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Alternatively, in this embodiment, the storage medium may be configured to store program codes executed by the video processing method provided in embodiment 1.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: displaying a first video on a display interface; receiving a first operation of entering a preset page; in response to the first operation, a second video strongly correlated with the first video is highlighted on a predetermined page.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the second video that is strongly correlated with the first video includes: one or more videos belonging to the same continuous video group as the first video.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: and displaying a second video which is strongly related to the first video in the form of dynamic resources on a preset page.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: and displaying a second video which is strongly related to the first video in a top-positioned mode on the preset page.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of highlighting the second video strongly correlated with the first video on the predetermined page includes: displaying a continuous video group and first identification information of the continuous video group, wherein the continuous video group comprises a first video and a second video, and the first identification information is a classic segment representing a person in the continuous video group.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of displaying the set of consecutive videos comprises: and displaying videos included in the continuous video group and second identification information for identifying the videos, wherein positioning information is displayed in a first video displayed in the continuous video group, and the positioning information is used for identifying the position of the first video in the continuous video group.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: after the second video strongly related to the first video is highlighted on the preset page, the method further includes: receiving a playing operation on the second video; and playing the second video in response to the playing operation.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: displaying the third video on the display interface; receiving a second operation of entering a preset page; and responding to the second operation, and displaying the fourth video included in the preset page in a default display mode.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the default display mode includes: displaying the fourth video according to the video publication time, wherein the third video is a video that does not belong to any continuous video group.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the first video, the second video, the third video and the fourth video each include at least one of: a recorded video and a live video.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: controlling the terminal to display a first video on a display interface; receiving a first operation of entering a preset page; detecting, in response to the first operation, whether there is a second video strongly correlated with the first video; and controlling the terminal to highlight the second video on a preset page if the detection result is yes.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of detecting whether there is a second video strongly correlated with the first video includes: detecting whether the first video belongs to a continuous video type; and if the detection result is yes, determining that a second video strongly correlated with the first video exists, wherein the first video and the second video belong to the same continuous video group.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the step of controlling the terminal to highlight the second video on the predetermined page includes: determining a dynamic resource for displaying the second video, and controlling the terminal to display the second video on the predetermined page based on the determined dynamic resource; or sorting the content included in the predetermined page, and controlling the terminal to display the second video at the top of the predetermined page.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: before receiving the first operation of entering the predetermined page, the method further includes: receiving an object to be published; determining a type of the object, wherein the type is one of: continuous video and non-continuous video; assigning a continuous-type identifier to the object if the type of the object is continuous video; and assigning a non-continuous-type identifier to the object if the type of the object is non-continuous video.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: after assigning the continuous-type identifier to the object, the method further includes: determining the video content to which the object belongs, and assigning the same content identifier to objects belonging to the same video content, wherein different objects belonging to the same video content have different object identifiers.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: receiving a click operation on a video displayed on the preset page; acquiring a video identifier of the video corresponding to the click operation; and controlling the terminal to play the clicked video on the display interface according to the video identifier.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: receiving a sliding operation, wherein the sliding operation is used for requesting to display a first video; responding to the sliding operation, and playing a first video on a display interface; receiving a first operation of entering a preset page; highlighting a second video which is strongly related to the first video on a preset page in response to the first operation; receiving click operation on a second video; and responding to the click operation, and playing the second video.
In an exemplary embodiment, a computer program product is also provided, including a computer program that, when executed by a processor of a terminal, enables the terminal to perform any of the video processing methods described above.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present disclosure, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a division by logical function, and another division may be used in an actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
displaying a first video on a display interface;
receiving a first operation of entering a preset page;
and highlighting a second video which is strongly related to the first video on the preset page in response to the first operation.
2. The method of claim 1, wherein the second video that is strongly correlated with the first video comprises: one or more videos belonging to the same continuous video group as the first video.
3. The method according to claim 1, wherein the step of highlighting the second video strongly related to the first video on the preset page comprises:
and displaying a second video which is strongly related to the first video in the form of dynamic resources on the preset page.
4. The method according to claim 1, wherein the step of highlighting the second video strongly related to the first video on the preset page comprises:
and displaying a second video which is strongly related to the first video in a top-set mode on the preset page.
5. The method according to claim 1, wherein the step of highlighting the second video strongly related to the first video on the preset page comprises:
displaying a continuous video group and first identification information of the continuous video group, wherein the continuous video group comprises the first video and the second video, and the first identification information is a video frame in the continuous video group.
6. The method of claim 5, wherein the step of displaying the set of consecutive videos comprises:
and displaying videos included in the continuous video group and second identification information for identifying the videos, wherein positioning information is displayed in the first video displayed in the continuous video group, and the positioning information is used for identifying the position of the first video in the continuous video group.
7. The method of claim 1, wherein after the second video strongly related to the first video is highlighted on the preset page, the method further comprises:
receiving a playing operation of the second video;
and responding to the playing operation, and playing the second video.
8. A video processing method, comprising:
the control terminal displays a first video on a display interface;
receiving a first operation of entering a preset page;
detecting, in response to the first operation, whether there is a second video strongly related to the first video;
and controlling the terminal to highlight the second video on the preset page if the detection result is yes.
9. The method of claim 8, wherein the step of detecting whether there is a second video strongly related to the first video comprises:
detecting whether the first video belongs to a continuous video type;
and if the detection result is yes, determining that a second video strongly related to the first video exists, wherein the first video and the second video belong to the same continuous video group.
10. A terminal, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 1 to 7.
CN202011569042.9A 2020-12-25 2020-12-25 Video processing method, device, terminal, server and storage medium Pending CN112667936A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011569042.9A CN112667936A (en) 2020-12-25 2020-12-25 Video processing method, device, terminal, server and storage medium
PCT/CN2021/106862 WO2022134555A1 (en) 2020-12-25 2021-07-16 Video processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011569042.9A CN112667936A (en) 2020-12-25 2020-12-25 Video processing method, device, terminal, server and storage medium

Publications (1)

Publication Number Publication Date
CN112667936A true CN112667936A (en) 2021-04-16

Family

ID=75409865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011569042.9A Pending CN112667936A (en) 2020-12-25 2020-12-25 Video processing method, device, terminal, server and storage medium

Country Status (2)

Country Link
CN (1) CN112667936A (en)
WO (1) WO2022134555A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134555A1 (en) * 2020-12-25 2022-06-30 北京达佳互联信息技术有限公司 Video processing method and terminal
CN115052196A (en) * 2022-05-23 2022-09-13 北京达佳互联信息技术有限公司 Video processing method and related equipment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115237315B (en) * 2022-07-08 2024-05-07 北京字跳网络技术有限公司 Information display method, information display device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202657A (en) * 2014-08-29 2014-12-10 北京奇虎科技有限公司 Multi-video selective playing method and device for video group with same theme
JP2018519679A (en) * 2016-04-22 2018-07-19 北京小米移動軟件有限公司Beijing Xiaomi Mobile Software Co.,Ltd. Video processing method, apparatus, program, and recording medium
CN109309860A (en) * 2018-10-16 2019-02-05 腾讯科技(深圳)有限公司 Methods of exhibiting and device, storage medium, the electronic device of prompt information
CN110012339A (en) * 2019-04-11 2019-07-12 北京字节跳动网络技术有限公司 Video playing display methods, device, equipment and storage medium
CN111783001A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170264973A1 (en) * 2016-03-14 2017-09-14 Le Holdings (Beijing) Co., Ltd. Video playing method and electronic device
CN110691281B (en) * 2018-07-04 2022-04-01 北京字节跳动网络技术有限公司 Video playing processing method, terminal device, server and storage medium
CN111405318B (en) * 2020-03-24 2022-09-09 聚好看科技股份有限公司 Video display method and device and computer storage medium
CN111770376A (en) * 2020-06-29 2020-10-13 百度在线网络技术(北京)有限公司 Information display method, device, system, electronic equipment and storage medium
CN112667936A (en) * 2020-12-25 2021-04-16 北京达佳互联信息技术有限公司 Video processing method, device, terminal, server and storage medium

Also Published As

Publication number Publication date
WO2022134555A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN104469508B (en) Method, server and the system of video location are carried out based on the barrage information content
CN108989297B (en) Information access method, client, device, terminal, server and storage medium
CN106658199B (en) Video content display method and device
CN112667936A (en) Video processing method, device, terminal, server and storage medium
US9230352B2 (en) Information processing apparatus, information processing method, and computer program product
CN105163178B (en) A kind of video playing location positioning method and device
US20150012840A1 (en) Identification and Sharing of Selections within Streaming Content
US20130144891A1 (en) Server apparatus, information terminal, and program
CN108848401A (en) Video plays broadcasting method and device
CN103501449A (en) Method and device for recommending video source associated with television program
CN104486339A (en) Method and device for displaying recommendation data in social application
CN113965811A (en) Play control method and device, storage medium and electronic device
CN103731691A (en) Method and device for direct-broadcast program video-on-demand of smart television
CN112328816A (en) Media information display method and device, electronic equipment and storage medium
CN105144736A (en) Information processing device and information processing method
CN105872717A (en) Video processing method and system, video player and cloud server
CN113094521A (en) Multimedia resource searching method, device, system, equipment and storage medium
CN112616064B (en) Live broadcasting room information processing method and device, computer storage medium and electronic equipment
US20150142778A1 (en) Presenting Previously Selected Search Results
CN112333463A (en) Program recommendation method, system, device and readable storage medium
US10158983B2 (en) Providing a summary of media content to a communication device
US20150026744A1 (en) Display system, display apparatus, display method, and program
CN111881357A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN114257873B (en) Information pushing method and card display method in network live broadcast scene
CN114257827B (en) Game live broadcast room display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination