CN116975480A - Content preview method, device, computer equipment and storage medium


Info

Publication number
CN116975480A
Authority
CN
China
Prior art keywords
content
preview
target
fragment
card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211440248.0A
Other languages
Chinese (zh)
Inventor
张振伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211440248.0A priority Critical patent/CN116975480A/en
Publication of CN116975480A publication Critical patent/CN116975480A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 - Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to a content preview method, apparatus, computer device, storage medium and computer program product, and relates to artificial intelligence technology. The method includes: displaying a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories; in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card. The method improves the interaction efficiency of content preview.

Description

Content preview method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to a content preview method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of computer technology, the amount of resource content on the Internet keeps growing, and people spend more and more time browsing content such as pictures, web pages, texts and videos. When choosing among such content, a user often has to open each item and read or play it repeatedly in order to learn what it contains and screen it effectively.
However, this preview approach, in which content must be opened and repeatedly viewed or played, makes the preview operation cumbersome and results in low interaction efficiency of content preview.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a content preview method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the interaction efficiency of content preview.
In a first aspect, the present application provides a content preview method. The method comprises the following steps:
displaying a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In one embodiment, a segment acquisition request is generated based on the preview trigger operation in at least one of the following ways: determining the segment classification dimension matched with the target category according to a category mapping relation and generating the segment acquisition request according to that segment classification dimension; or generating the segment acquisition request directly according to the target category of the preview trigger operation.
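As a minimal, illustrative sketch only (not part of the disclosed embodiments), the two alternatives above could look roughly as follows in TypeScript; the type names, the mapping table and its entries are assumptions introduced for illustration.

```typescript
// Hypothetical sketch of generating a segment acquisition request from a
// preview trigger operation; all names and mappings are illustrative.
type PreviewCategory = "slide-up" | "slide-down" | "slide-right";
type SegmentDimension = "user-interest" | "character-info" | "picture-effect";

interface SegmentRequest {
  contentId: string;
  dimension?: SegmentDimension; // alternative 1: dimension resolved on the terminal
  category?: PreviewCategory;   // alternative 2: raw category resolved by the server
}

// Assumed category mapping relation, configured in advance.
const CATEGORY_DIMENSION_MAP: Record<PreviewCategory, SegmentDimension> = {
  "slide-up": "user-interest",
  "slide-down": "character-info",
  "slide-right": "picture-effect",
};

// Alternative 1: determine the matched segment classification dimension first.
function buildRequestFromMapping(contentId: string, category: PreviewCategory): SegmentRequest {
  return { contentId, dimension: CATEGORY_DIMENSION_MAP[category] };
}

// Alternative 2: generate the request directly from the target category.
function buildRequestFromCategory(contentId: string, category: PreviewCategory): SegmentRequest {
  return { contentId, category };
}
```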
In one embodiment, the method further comprises: displaying authorization notification information for a user account; and, in response to a confirmation of authorization triggered by the user account for the authorization notification information, collecting account behavior information of the user account during the activity of the user account.
In a second aspect, the application further provides a content preview device. The device comprises:
a content identifier display module, configured to display a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories;
a preview trigger response module, configured to display, in response to a preview trigger operation of a target category triggered on the target content identifier, at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
a preview display module, configured to preview, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
displaying a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the following steps:
displaying a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
displaying a target content identifier pointing to target content, the target content identifier being configured to respond to preview trigger operations of at least two categories;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
According to the above content preview method, apparatus, computer device, storage medium and computer program product, the displayed target content identifier pointing to the target content can respond to preview trigger operations of at least two categories. For a triggered preview trigger operation of a target category, segment cards pointing to content preview segments of the target content are displayed, the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category, and the pointed-to content preview segment is previewed in the target segment card. Content segments of different segment classification dimensions can therefore be previewed by triggering preview trigger operations of different categories on the target content identifier, which simplifies the interaction of content preview and improves its interaction efficiency.
In a sixth aspect, the present application provides a content preview method. The method comprises the following steps:
displaying a target content identifier pointing to target content;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a seventh aspect, the present application further provides a content previewing apparatus. The device comprises:
a content identifier display module, configured to display a target content identifier pointing to target content;
a preview response module, configured to display, in response to a preview trigger operation of a target category triggered on the target content identifier, at least one segment card associated with the target content, where each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
a preview display module, configured to preview, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In an eighth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
displaying a target content identifier pointing to target content;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a ninth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the following steps:
displaying a target content identifier pointing to target content;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
In a tenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
displaying a target content identifier pointing to target content;
in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content, where each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category; and
previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
According to the above content preview method, apparatus, computer device, storage medium and computer program product, the displayed target content identifier pointing to the target content can respond to a triggered preview trigger operation of a target category by displaying segment cards that point to content preview segments of the target content, where the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category, and the pointed-to content preview segment is previewed in the target segment card. Previewing content segments of the matched segment classification dimension is thus supported by triggering a preview trigger operation on the target content identifier, which simplifies the interaction of content preview and improves its interaction efficiency.
Drawings
FIG. 1 is an application environment diagram of a content preview method in one embodiment;
FIG. 2 is a flow diagram of a content preview method in one embodiment;
FIG. 3 is a diagram of an interface change for a slide-up preview of a television show cover in one embodiment;
FIG. 4 is a diagram of an interface change for a slide-down preview of a television show cover in one embodiment;
FIG. 5 is a schematic diagram of an interface for previewing a cover trigger menu for a television in one embodiment;
FIG. 6 is a flow diagram of generating a segment card in one embodiment;
FIG. 7 is a flowchart of a content preview method according to another embodiment;
FIG. 8 is a diagram of an interface change for a slide-up preview of a television show cover in another embodiment;
FIG. 9 is a diagram of an interface change for a slide-down preview of a television show cover in another embodiment;
FIG. 10 is a diagram of an interface change for a TV show cover right slide preview in one embodiment;
FIG. 11 is a schematic diagram of an interface for previewing a episode of a television show in one embodiment;
FIG. 12 is a schematic diagram of an interface for video platform authorization in one embodiment;
FIG. 13 is a schematic diagram of an interface for graphics platform authorization in one embodiment;
FIG. 14 is a schematic diagram of an interface of a video platform displaying a video cover according to one embodiment;
FIG. 15 is a schematic diagram of an interface for selecting a cover in a video platform in one embodiment;
FIG. 16 is a schematic illustration of an interface for a slide up operation with respect to a selected cover in one embodiment;
FIG. 17 is a schematic diagram of an interface for displaying video preview segments in a timeline sequence, in accordance with one embodiment;
FIG. 18 is a schematic diagram of an interface for displaying video preview segments in order of preference in one embodiment;
FIG. 19 is a schematic diagram of an interface for time stamp triggered ordering in one embodiment;
FIG. 20 is a diagram of an interface for triggering ranking for a preference label in one embodiment;
FIG. 21 is a schematic diagram of an interface for sliding video preview segments up and down in one embodiment;
FIG. 22 is a diagram of an interface for selecting a video preview clip to trigger playback in one embodiment;
FIG. 23 is a diagram of an interface for playing a countdown in one embodiment;
FIG. 24 is a schematic diagram of an interface for playing a video preview segment at an intermediate position in one embodiment;
FIG. 25 is a schematic diagram of an interface for automatically switching playback in one embodiment;
FIG. 26 is a schematic diagram of an interface for a slide down operation with respect to a selected cover in one embodiment;
FIG. 27 is a diagram of an interface for displaying video preview segments in a person relationship ranking in accordance with one embodiment;
FIG. 28 is a schematic diagram of an interface for persona relationship tag trigger ordering in one embodiment;
FIG. 29 is a schematic diagram of an interface for controlling playback of a video preview segment by sliding up and down in one embodiment;
FIG. 30 is a diagram of an interface for selecting a video preview clip to trigger playback according to another embodiment;
FIG. 31 is a diagram of an interface for playing a countdown in another embodiment;
FIG. 32 is a schematic diagram of an interface for playing a video preview segment at an intermediate position in another embodiment;
FIG. 33 is a schematic diagram of an interface for automatically switching playback according to another embodiment;
FIG. 34 is a schematic illustration of an interface for a right-slide operation with respect to a selected cover in one embodiment;
FIG. 35 is a schematic diagram of an interface for displaying video preview segments in accordance with a stressor level in one embodiment;
FIG. 36 is a diagram illustrating an interface for displaying video preview segments according to the degree of screen elegance in one embodiment;
FIG. 37 is a schematic diagram of an interface for stressor tag trigger sequencing in one embodiment;
FIG. 38 is a schematic diagram of an interface for trigger ordering of visual beauty labels in one embodiment;
FIG. 39 is a schematic diagram of an interface for controlling playback of a video preview segment by sliding up and down in another embodiment;
FIG. 40 is a schematic diagram of an interface for selecting a video preview clip to trigger playback according to another embodiment;
FIG. 41 is a schematic diagram of an interface for playing a countdown in yet another embodiment;
FIG. 42 is a schematic diagram of an interface for playing a video preview segment at a middle position in yet another embodiment;
FIG. 43 is a schematic diagram of an interface for automatically switching playback according to another embodiment;
FIG. 44 is a schematic diagram of an interface of a novel platform displaying a novel cover in one embodiment;
FIG. 45 is a schematic illustration of an interface for selecting a cover in a novel platform in one embodiment;
FIG. 46 is a schematic diagram of an interface for a slide up operation for a selected novel cover in one embodiment;
FIG. 47 is an interface diagram showing novel sections sorted by comment volume in one embodiment;
FIG. 48 is a diagram of an interface for triggering ranking for comment volume tags in one embodiment;
FIG. 49 is a schematic diagram of an interface for a slide down operation for a selected novel cover in one embodiment;
FIG. 50 is an interface diagram showing novel sections in accordance with a highlight index ranking, in one embodiment;
FIG. 51 is a schematic diagram of an interface for a highlight label trigger ordering in one embodiment;
FIG. 52 is a diagram of an interface for previewing a hot chapter to an intermediate position in one embodiment;
FIG. 53 is a diagram of an interface for selecting hot chapters for preview display in one embodiment;
FIG. 54 is an interface diagram illustrating the contents of a section in a countdown presentation, in one embodiment;
FIG. 55 is a schematic diagram of an interface for automatically sliding to an intermediate position for preview display in one embodiment;
FIG. 56 is a diagram of an interface for automatically switching presentation sections in one embodiment;
FIG. 57 is an interface diagram of a stay-present chapter in one embodiment;
FIG. 58 is an interface diagram of a scroll presentation section in one embodiment;
FIG. 59 is a schematic diagram of an interface for a switch chapter page presentation in one embodiment;
FIG. 60 is a schematic diagram of an interface for switching presentations after scrolling presentations are completed in one embodiment;
FIG. 61 is a schematic diagram of an interface for selecting pages by a page filter in one embodiment;
FIG. 62 is a schematic diagram of an interface for selecting pages by expanding the pages in one embodiment;
FIG. 63 is a schematic view of an interface for selecting a cover in a caricature platform in one embodiment;
FIG. 64 is a schematic illustration of an interface for a slide-up operation with respect to a selected comic cover in one embodiment;
FIG. 65 is a schematic diagram of an interface for displaying caricature chapters in accordance with a bullet screen volume ranking in one embodiment;
FIG. 66 is a timing diagram of recording user data in one embodiment;
FIG. 67 is a timing diagram for previewing content in one embodiment;
FIG. 68 is a diagram illustrating a correspondence between gestures and presentation content in one embodiment;
FIG. 69 is a timing diagram of a complete preview of single page content in one embodiment;
FIG. 70 is a timing diagram of a single page content scroll preview in one embodiment;
FIG. 71 is a timing diagram of previewing multi-page content in one embodiment;
FIG. 72 is a diagram of a recommendation list process in one embodiment;
FIG. 73 is a schematic diagram of a person relationship identification process in one embodiment;
FIG. 74 is a diagram illustrating a screen extent identification process in one embodiment;
FIG. 75 is a block diagram of a content previewing device in one embodiment;
FIG. 76 is a block diagram of a content previewing device according to another embodiment;
FIG. 77 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The content preview method provided by the embodiments of the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on the cloud or on other servers. A target content identifier pointing to target content is displayed on the terminal 102, and the user may interact with the target content identifier, in particular by triggering preview trigger operations of at least two categories. In response to a preview trigger operation of a target category triggered by the user on the target content identifier, the terminal 102 displays segment cards pointing to content preview segments of the target content, where the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category. The content preview segments pointed to by the segment cards may be requested by the terminal 102 from the server 104. The terminal 102 then previews the pointed-to content preview segment in the target segment card. Alternatively, the content preview method may be implemented by the terminal 102 or the server 104 alone.
When the content preview method provided by the embodiments of the application is applied to the application environment shown in FIG. 1, a target content identifier pointing to target content is displayed on the terminal 102, and the user may interact with the target content identifier, in particular by triggering a preview trigger operation of a target category. In response to the preview trigger operation of the target category triggered by the user on the target content identifier, the terminal 102 displays at least one segment card pointing to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category. The content preview segments pointed to by the segment cards may be requested by the terminal 102 from the server 104. The terminal 102 then previews the pointed-to content preview segment in the target segment card. Alternatively, the content preview method may be implemented by the terminal 102 or the server 104 alone.
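A minimal terminal-side sketch of how the segment cards' content preview segments might be requested from the server 104 is given below; the endpoint path, query parameters and response shape are assumptions, not part of the disclosure.

```typescript
// Hypothetical request from the terminal to the server for the content
// preview segments of a piece of target content; endpoint and fields assumed.
interface PreviewSegment {
  segmentId: string;
  position: number;    // e.g. start time for video, chapter index for text
  dimension: string;   // segment classification dimension it belongs to
  previewUrl: string;  // resource previewed inside the segment card
}

async function fetchPreviewSegments(
  serverBase: string,
  contentId: string,
  category: string,
): Promise<PreviewSegment[]> {
  const url =
    `${serverBase}/preview-segments` +
    `?contentId=${encodeURIComponent(contentId)}` +
    `&category=${encodeURIComponent(category)}`;
  const resp = await fetch(url);
  if (!resp.ok) {
    throw new Error(`segment request failed with status ${resp.status}`);
  }
  return (await resp.json()) as PreviewSegment[];
}
```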
The terminal 102 may be, but is not limited to, a desktop computer, a notebook computer, a smart phone, a tablet computer, an Internet of Things device, or a portable wearable device, where the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, a smart in-vehicle device, or the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
Artificial intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making. Artificial intelligence technology is a comprehensive discipline that spans a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is the science of studying how to make machines "see"; more specifically, it uses cameras and computers in place of human eyes to recognize and measure targets, and further performs graphics processing so that the result is an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way of giving computers intelligence, and it is applied throughout all fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
The key technologies of speech technology are automatic speech recognition (ASR), text-to-speech synthesis (TTS), and voiceprint recognition. Enabling computers to listen, see, speak and feel is the future direction of human-computer interaction, and speech is expected to become one of the most promising modes of human-computer interaction.
Natural language processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science and mathematics; research in this field involves natural language, i.e. the language people use every day, so it is closely related to the study of linguistics. Natural language processing technologies typically include text processing, semantic understanding, machine translation, robotic question answering, and knowledge graph technologies.
With the research and advancement of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart healthcare, and smart customer service. It is believed that as the technology develops, artificial intelligence will be applied in more fields and play an increasingly important role. The solution provided by the embodiments of the application involves artificial intelligence technologies such as computer vision, machine learning, speech technology and natural language processing, which are used to classify content according to different segment classification dimensions and thereby obtain various content preview segments.
In one embodiment, as shown in FIG. 2, a content preview method is provided. The method is executed by a computer device; specifically, it may be executed by a terminal or a server alone, or by the terminal and the server together. In this embodiment of the application, the method is described using its application to the terminal in FIG. 1 as an example, and includes the following steps:
step 202, showing a target content identifier pointing to target content; the target content identification is used to respond to preview trigger operations of at least two categories.
Here, the content may be any type of Internet resource, such as video, text or images. The target content is the content to be previewed, for example video content such as a movie or a television series, text content such as a novel, or image content such as an image collection or a comic. The target content identifier identifies the target content and may be any identification information of the target content, such as its cover, title or link; through the target content identifier, a preview operation on the target content can be triggered. Besides serving as an entry for previewing the target content, the target content identifier can also serve as an entry for accessing the target content: for example, the user can click the target content identifier to access the target content, such as triggering video playback or the reading of text or a comic. The preview trigger operation is a trigger operation for previewing the target content. Preview trigger operations can be divided into different categories; for example, preview trigger operations with different operation modes, or triggered through different operation paths, can be regarded as belonging to different categories. The target content identifier can respond to preview trigger operations of at least two categories, that is, of multiple categories, so the user is supported in triggering preview trigger operations of multiple categories through the target content identifier and previewing the content in multiple ways.
Specifically, the terminal may display the target content identifier of the target content, for example within a content platform, such as displaying a target video identifier of a target video in a video platform. The target content identifier responds to at least two categories of preview trigger operations; that is, the user can trigger different preview trigger operations on the target content identifier, such as sliding up, sliding down, sliding left, sliding right or double-clicking on it. In addition, preview trigger operations may be distinguished by operation options: for example, at least two kinds of options may be provided in an operation menu, and the user can select different options to trigger preview trigger operations of the corresponding categories. In a specific application, the user triggers an interactive operation on the target content identifier, such as a long press or a right click, and the terminal displays an operation menu that includes at least two kinds of options; the user can make a selection in the operation menu to trigger preview trigger operations of different categories.
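For illustration only, a gesture on the target content identifier might be mapped to a preview trigger category along the following lines; the distance threshold and category names are assumptions rather than part of the disclosure.

```typescript
// Hypothetical classification of a gesture on the target content identifier
// into a preview trigger category; the 40 px threshold is an assumption.
type TriggerCategory = "slide-up" | "slide-down" | "slide-left" | "slide-right" | "double-click";

function classifyGesture(dx: number, dy: number, tapCount: number): TriggerCategory | null {
  if (tapCount === 2) return "double-click";
  const threshold = 40; // minimum swipe distance in pixels
  if (Math.abs(dy) >= Math.abs(dx)) {
    if (dy <= -threshold) return "slide-up";   // finger moved up the screen
    if (dy >= threshold) return "slide-down";
  } else {
    if (dx <= -threshold) return "slide-left";
    if (dx >= threshold) return "slide-right";
  }
  return null; // not recognised as a preview trigger operation
}
```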
Step 204, in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content; the target category belongs to the at least two categories, each segment card points to a content preview segment of the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category.
Here, the target category is one of the at least two categories to which the target content identifier can respond. A segment card is associated with the target content and points to a content preview segment in the target content. The number of segment cards can be set according to actual needs, such as one or more. A content preview segment is a segment cut from the target content for previewing; by previewing such segments, targeted previewing of the target content can be achieved. The segment classification dimension is the classification dimension of the content preview segment; it matches the target category, and the content preview segment belongs to that dimension, i.e. the content preview segment is cut from the target content based on the segment classification dimension. For example, if the segment classification dimension is the user interest dimension, the content preview segments pointed to by the segment cards are all segments of interest to the user, cut from the target content according to the user interest dimension.
Specifically, the user may interact with the target content identifier of the target content, for example by triggering a sliding operation in a specific direction on the target content identifier, or by selecting a specific option in an operation menu triggered from the target content identifier. In response to the preview trigger operation of the target category triggered by the user on the target content identifier, the terminal displays at least one segment card associated with the target content. Each displayed segment card points to a content preview segment in the target content, and the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category. In other words, for preview trigger operations of different target categories, the terminal displays content preview segments belonging to different segment classification dimensions. In a specific application, the relationship between each content preview segment of the target content and the segment classification dimensions can be obtained by performing dimension analysis on each content segment of the target content based on the segment classification dimensions. The matching relation between segment classification dimension and target category can be configured in advance according to actual requirements; for example, a slide-up preview trigger operation may be configured to match a first segment classification dimension, a slide-down preview trigger operation a second segment classification dimension, a slide-left preview trigger operation a third segment classification dimension, and so on.
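A sketch of how the terminal might pick the segment cards to display once a category has been triggered, assuming the segments have already been analysed offline against the segment classification dimensions; all names and mapping entries are hypothetical.

```typescript
// Hypothetical selection of segments for a triggered category: resolve the
// pre-configured dimension, then keep pre-analysed segments belonging to it.
interface AnalysedSegment {
  segmentId: string;
  dimensions: string[]; // dimensions assigned by the offline dimension analysis
}

// Assumed pre-configured matching relation between categories and dimensions.
const CONFIGURED_MATCHING: Record<string, string> = {
  "slide-up": "first-dimension",
  "slide-down": "second-dimension",
  "slide-left": "third-dimension",
};

function segmentsForCategory(category: string, analysed: AnalysedSegment[]): AnalysedSegment[] {
  const dimension = CONFIGURED_MATCHING[category];
  if (!dimension) return [];
  return analysed.filter((segment) => segment.dimensions.includes(dimension));
}
```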
Step 206, previewing, in a target segment card among the at least one segment card, the content preview segment pointed to by the target segment card.
The content preview segment is a content segment to be previewed, and it can be previewed within the corresponding segment card. The target segment card may be some or all of the displayed segment cards; it is the segment card in which the pointed-to content preview segment needs to be previewed. The number of target segment cards can be preset according to actual needs and can also be customized and adjusted by the user. Specifically, among the displayed segment cards, the content preview segment pointed to by the target segment card is previewed in the target segment card, so that the user can quickly learn about the target content from the content preview segments previewed in the segment cards, which improves the interaction efficiency of content preview.
In a specific application, as shown in FIG. 3, the target content displayed in the terminal interface is video content, specifically a television series, and the terminal displays the cover of the television series. The user can trigger a preview trigger operation on the cover, specifically a slide-up preview trigger operation, and the terminal displays a plurality of segment cards associated with the television series. Each segment card points to a different content preview segment of the series, each belonging to the 'guess you like' user interest dimension, and the user can click a segment card to preview and play the content preview segment it points to. As shown in FIG. 4, for the same cover as in FIG. 3, the user may trigger a slide-down preview trigger operation, and the terminal displays a plurality of segment cards associated with the series, where each segment card points to a different content preview segment belonging to the 'highlight degree' dimension; the user may click a segment card to preview and play the content preview segment it points to.
In another specific application, as shown in FIG. 5, the cover of a television series is displayed in the upper left corner of the terminal interface and an introduction to the series is displayed on the right. The user can interact with the cover, for example by right-clicking it; the terminal then displays an operation menu of segment classification dimensions, including the options 'guess you like', 'highlight' and 'person relationship'. If the user selects the option 'guess you like' to trigger a preview trigger operation, multiple segment cards are displayed at the bottom of the interface, each pointing to a different content preview segment of the series that belongs to the 'guess you like' user interest dimension, and the user can click a segment card to preview and play the content preview segment it points to.
In the above content preview method, the displayed target content identifier pointing to the target content can respond to preview trigger operations of at least two categories. For a triggered preview trigger operation of a target category, segment cards pointing to content preview segments of the target content are displayed, the segment classification dimension to which the content preview segment pointed to by each segment card belongs matches the target category, and the pointed-to content preview segment is previewed in the target segment card. Previewing content segments of different segment classification dimensions is thus supported by triggering preview trigger operations on the target content identifier, which simplifies the interaction of content preview and improves its interaction efficiency.
In one embodiment, the segment classification dimension includes at least one of a user interest dimension, a character information dimension, a picture effect dimension, a story type dimension, or a segment popularity dimension.
The segment classification dimensions may correspond one-to-one to the categories into which preview trigger operations are divided, that is, each category of preview trigger operation corresponds to one segment classification dimension. The user interest dimension classifies segments according to the user's preferences and may in particular be determined by matching the user's account data against the content segments. The character information dimension classifies segments according to information about the characters appearing in the target content, which may include, but is not limited to, character relations, actor information, role information and dubbing information. The picture effect dimension classifies segments according to the effect of the pictures in the target content, which may include, but is not limited to, beautiful, exciting, shocking, suspenseful or disaster effects. The story type dimension classifies segments according to the type of story plot in the target content; for example, content segments may be classified by the role they play in the plot. The plot is one of the elements that make up narrative works: it is the sequence of life events that expresses the relations among characters and shows how characters are formed and develop, revealing character, personality and the relation between characters and their environment, and its basic pattern is beginning, development, climax and ending. The segment popularity dimension classifies segments according to the popularity of each content segment of the target content; for example, content segments with high popularity can be grouped into popular segments.
Specifically, each category of preview trigger operation triggers previewing of content preview segments belonging to one segment classification dimension. The segment classification dimensions can be set flexibly according to actual needs and include at least one of a user interest dimension, a character information dimension, a picture effect dimension, a story type dimension or a segment popularity dimension. In a specific application, the user can freely configure the mapping between segment classification dimensions and target categories; for example, if the slide-up preview trigger operation originally matches the user interest dimension, the user can remap it to the character information dimension, so that the dimension used for content preview can be set as needed.
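The dimensions listed above, and the user-adjustable mapping described in this embodiment, could be represented roughly as follows; the enum values and default mapping are assumptions made for illustration.

```typescript
// Hypothetical representation of the segment classification dimensions and a
// user-adjustable mapping from trigger category to dimension.
enum SegmentDimension {
  UserInterest = "user-interest",
  CharacterInfo = "character-info",
  PictureEffect = "picture-effect",
  StoryType = "story-type",
  SegmentPopularity = "segment-popularity",
}

// Assumed default mapping; the user may later remap a category.
const categoryToDimension = new Map<string, SegmentDimension>([
  ["slide-up", SegmentDimension.UserInterest],
  ["slide-down", SegmentDimension.PictureEffect],
]);

function remapCategory(category: string, dimension: SegmentDimension): void {
  categoryToDimension.set(category, dimension);
}

// Example: remap the slide-up operation to the character information dimension.
remapCategory("slide-up", SegmentDimension.CharacterInfo);
```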
In this embodiment, each category of preview trigger operation triggers previewing of the content preview segments belonging to one segment classification dimension, where the segment classification dimension includes at least one of a user interest dimension, a character information dimension, a picture effect dimension, a story type dimension or a segment popularity dimension. Previewing content segments of different segment classification dimensions by triggering different categories of preview trigger operations on the target content identifier is thereby supported, which improves the interaction efficiency of content preview.
In one embodiment, displaying at least one segment card associated with the target content in response to a preview trigger operation of a target category triggered on the target content identifier comprises: in response to an activation operation triggered on the target content identifier, displaying the target content identifier in a preview activation manner; and, while the target content identifier is displayed in the preview activation manner, in response to a preview trigger operation of a target category triggered on the target content identifier, displaying at least one segment card associated with the target content.
The activation operation is an operation of selecting the target content identifier for preview processing, and the preview activation manner is a display manner indicating that preview processing has been activated; it may include, but is not limited to, display effects such as a size change, a glow, or a layered shadow. The preview activation manner may further include displaying a prompt message for performing a preview trigger operation on the target content identifier, for example prompting the segment classification dimension corresponding to each category, so that the user can trigger the intended preview operation as required. Displaying the target content identifier in the preview activation manner indicates to the user that preview processing for the target content identifier has been activated, and the target content can then be previewed in the corresponding dimension through a further preview trigger operation.
Specifically, the user may trigger an activation operation on the target content identifier, for example a long-press operation; when the duration of the long press reaches an activation duration threshold, the terminal, in response to the activation operation, displays the target content identifier in the preview activation manner, for example by dynamically highlighting it, to indicate that preview processing directed to the target content has been activated for the target content identifier. The user can then preview the target content through a further preview trigger operation. While the target content identifier is displayed in the preview activation manner, i.e. while preview processing of the target content is activated, the terminal responds to the preview trigger operation of the target category triggered by the user on the target content identifier, for example a slide-up operation, by displaying at least one segment card associated with the target content. In a specific implementation, the number of segment cards displayed by the terminal can be set flexibly according to actual needs, for example a fixed number N, or the first M segment cards after sorting by some condition.
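A minimal sketch of long-press activation, assuming a browser-style pointer event model; the 500 ms threshold and the callback name are assumptions rather than values taken from the disclosure.

```typescript
// Hypothetical long-press activation of preview processing on a content
// identifier element; threshold and callback are assumptions.
const ACTIVATION_DURATION_MS = 500;

function attachPreviewActivation(
  identifierElement: HTMLElement,
  onActivated: () => void, // e.g. switch the identifier to its preview activation display
): void {
  let timer: ReturnType<typeof setTimeout> | undefined;

  identifierElement.addEventListener("pointerdown", () => {
    timer = setTimeout(onActivated, ACTIVATION_DURATION_MS);
  });

  const cancel = () => {
    if (timer !== undefined) clearTimeout(timer);
  };
  identifierElement.addEventListener("pointerup", cancel);
  identifierElement.addEventListener("pointerleave", cancel);
}
```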
In this embodiment, the terminal displays the target content identifier in the preview activation manner to prompt the user that preview processing directed to the target content has been activated for the target content identifier, which helps the user preview the target content accurately through the preview trigger operation and thereby ensures the interaction efficiency of content preview.
In one embodiment, displaying at least one segment card associated with the target content in response to a preview trigger operation of a target category triggered on the target content identifier comprises: in response to the preview trigger operation of the target category triggered on the target content identifier, arranging and displaying at least one segment card associated with the target content according to a classification dimension arrangement condition, where the classification dimension arrangement condition matches the segment classification dimension to which the content preview segment pointed to by the segment card belongs.
The classification dimension arrangement condition matches the segment classification dimension of the content preview segment pointed to by the segment card; that is, the arrangement condition is determined based on the segment classification dimension. In a specific application, a classification quantization parameter of each content preview segment of the target content in the segment classification dimension can be determined, and the segment cards are displayed in an order determined by the classification dimension arrangement condition based on that parameter. For example, if the segment classification dimension is the user interest dimension, the classification quantization parameter may be an interest parameter of each content preview segment in that dimension, the arrangement condition may be to order the segments by the size of the interest parameter, and the segment cards may then be arranged and displayed in descending order of the interest parameter.
Specifically, the user can trigger a preview trigger operation of the target category on the target content identifier; in response, the terminal determines the classification dimension arrangement condition, for example by querying for the arrangement condition matched with the target category. The terminal then arranges and displays at least one segment card associated with the target content according to the classification dimension arrangement condition. In a specific application, the terminal can determine the classification quantization parameter of each content segment of the target content in the segment classification dimension, and arrange and display the corresponding segment cards based on the classification quantization parameter and the classification dimension arrangement condition.
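For example, if the classification quantization parameter is an interest score, the descending arrangement described above could be sketched as follows; the field names are assumptions.

```typescript
// Hypothetical arrangement of segment cards in descending order of a
// classification quantization parameter (e.g. an interest score).
interface ScoredSegmentCard {
  segmentId: string;
  quantization: number; // score of the pointed-to segment in the matched dimension
}

function arrangeByQuantization(cards: ScoredSegmentCard[]): ScoredSegmentCard[] {
  return [...cards].sort((a, b) => b.quantization - a.quantization);
}
```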
In this embodiment, the segment cards displayed by the terminal are arranged and displayed according to the arrangement condition of the classification dimension matched with the classification dimension of the segment, so that the tightness degree of each content preview segment and the classification dimension of the segment can be embodied through the arrangement sequence, which is beneficial to the user to further screen and preview, thereby improving the interaction efficiency of the content preview.
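For illustration only, a minimal sketch of the arrangement step, assuming each fragment card carries a numeric classification quantization parameter (for example an interest score) and the arrangement condition is "largest value first"; the field names are invented.

```typescript
// Arrange clip cards by their classification quantization parameter (assumed field names).

interface ClipCard {
  segmentId: string;
  quantization: number; // e.g. interest parameter in the "user interest" dimension
}

// Arrangement condition assumed here: largest parameter first.
function arrangeByQuantization(cards: ClipCard[]): ClipCard[] {
  return [...cards].sort((a, b) => b.quantization - a.quantization);
}

const cards: ClipCard[] = [
  { segmentId: "s1", quantization: 0.42 },
  { segmentId: "s2", quantization: 0.91 },
  { segmentId: "s3", quantization: 0.67 },
];
console.log(arrangeByQuantization(cards).map(c => c.segmentId)); // ["s2", "s3", "s1"]
```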
In one embodiment, displaying at least one clip card associated with the target content in response to a preview trigger operation on a target category triggered by the target content identification comprises: responding to a preview triggering operation of a target category triggered by the target content identification, and displaying at least one fragment card associated with the target content arranged according to distribution position; the distribution position includes the position, in the target content, of the content preview segment pointed to by the fragment card.
The distribution position comprises the position of the content preview segment in the target content, and particularly for different forms of content, the distribution position can also be in different forms. For example, when the target content is video content, the distribution position may be a time axis where the content preview segment is located in the target content; when the target content is the image-text content, the distribution position can be a section or paragraph of the content preview segment in the target content, and the like.
Specifically, the user can trigger a preview triggering operation of the target category aiming at the target content identifier, the terminal responds to the preview triggering operation to determine the position of the content preview segment in the target content, and at least one segment card associated with the target content is arranged and displayed according to the distribution position. In a specific application, the terminal can sequentially sequence and display at least one fragment card associated with the target content according to the sequence of the distribution positions, so that a user can orderly preview the content preview fragments according to the distribution positions of the content preview fragments, and the content preview fragments can be conveniently understood.
In this embodiment, the clip cards displayed by the terminal are arranged and displayed according to the positions of the content preview clips pointed by the clip cards in the target content, so that the user can conveniently and orderly preview the content preview clips based on the distribution positions of the content preview clips, and the content preview clips can be conveniently understood, thereby being beneficial to improving the interaction efficiency of the content preview.
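A minimal sketch, under the assumption that a video segment's distribution position is its timeline offset and an image-text segment's position is a chapter/paragraph index; the field names and the key function are illustrative only.

```typescript
// Arrange clip cards by the distribution position of the segments they point to.

type Position =
  | { kind: "video"; startSeconds: number }          // timeline position for video content
  | { kind: "text"; chapter: number; paragraph: number }; // chapter/paragraph for image-text content

interface PositionedCard {
  segmentId: string;
  position: Position;
}

function positionKey(p: Position): number {
  // Map either form of distribution position to a single sortable number.
  return p.kind === "video" ? p.startSeconds : p.chapter * 10_000 + p.paragraph;
}

function arrangeByPosition(cards: PositionedCard[]): PositionedCard[] {
  return [...cards].sort((a, b) => positionKey(a.position) - positionKey(b.position));
}
```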
In one embodiment, the target content comprises video content; in response to a preview trigger operation on a target category triggered by a target content identification, displaying at least one clip card associated with the target content, including at least one of: responding to a preview triggering operation of a first category triggered by the target content identification, and displaying at least one first fragment card associated with the target content; the content preview segment pointed by each first segment card belongs to the dimension of interest of the user; responding to a preview triggering operation of a second category triggered by the target content identification, and displaying at least one second fragment card associated with the target content; the content preview segment pointed by each second segment card belongs to the character information dimension; responding to a preview triggering operation of a third category triggered by the target content identification, and displaying at least one third fragment card associated with the target content; the content preview segment pointed by each third segment card belongs to the picture effect dimension.
The target content includes video content, that is, the target content includes various video content such as movies, television shows, and animation that can be played by a video player. The dimension of interest of the user refers to the dimension of classifying segments according to the preference of the user, and can in particular be determined by matching the account data of the user with the content segments. The character information dimension refers to a dimension in which segments are classified according to information of characters included in the target content, and may specifically include, but is not limited to, character information of various aspects such as character relationships, actor information, role information, and dubbing information. The picture effect dimension refers to a dimension of classifying segments according to the effect of the pictures in the target content, and may specifically include, but is not limited to, picture effects such as a beautiful effect, a stimulating effect, a shocking effect, a suspense effect, and a disaster effect.
Specifically, for video content, the user may trigger a preview triggering operation of the first category for the target content identifier, for example, a sliding operation on the target content identifier, and the terminal displays at least one first fragment card associated with the target content in response to the preview triggering operation. The content preview segment pointed to by each first fragment card belongs to the dimension of interest to the user, that is, the content preview segments pointed to by the first fragment cards are the content segments of the target content that are of interest to the user. The user can trigger a preview triggering operation of the second category for the target content identifier, for example, a sliding operation on the target content identifier, and the terminal responds to the preview triggering operation by displaying at least one second fragment card associated with the target content. The content preview segment pointed to by each second fragment card belongs to the character information dimension, that is, the content preview segments pointed to by the second fragment cards are content segments that embody character information in the target content, and may specifically be content segments that embody character relationships. The user can trigger a preview triggering operation of the third category for the target content identifier, for example, a right-sliding operation on the target content identifier, and the terminal responds to the preview triggering operation by displaying at least one third fragment card associated with the target content. The content preview segment pointed to by each third fragment card belongs to the picture effect dimension, that is, the content preview segments pointed to by the third fragment cards are content segments that embody picture effects in the target content, and may specifically be content segments exhibiting a particular type of picture effect.
In this embodiment, for video content, the terminal supports previewing content segments in the video content according to at least one dimension of a dimension of interest of a user, a dimension of character information or a dimension of a picture effect, and previews content segments of different segment classification dimensions by triggering different types of preview triggering operations on video content identifiers, thereby improving interaction efficiency of previewing video content.
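For illustration, one possible category-to-dimension mapping for video content, sketched in TypeScript. The specific gesture assignments follow the examples given in the text but are assumptions; the method only requires that each trigger category match a segment classification dimension.

```typescript
// Assumed mapping from preview trigger category (gesture) to segment classification dimension.

type TriggerCategory = "swipe-up" | "swipe-down" | "swipe-right";
type SegmentDimension = "user-interest" | "character-info" | "picture-effect";

const videoCategoryMap: Record<TriggerCategory, SegmentDimension> = {
  "swipe-up": "user-interest",     // e.g. "guess you like" clips
  "swipe-down": "picture-effect",  // e.g. highlight / visual-effect clips
  "swipe-right": "character-info", // e.g. character-relationship clips
};

function dimensionForCategory(category: TriggerCategory): SegmentDimension {
  return videoCategoryMap[category];
}
```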
In one embodiment, the target content comprises teletext content; in response to a preview trigger operation on a target category triggered by a target content identification, displaying at least one clip card associated with the target content, including at least one of: responding to a fourth category preview triggering operation triggered by the target content identification, and displaying at least one fourth fragment card associated with the target content; the content preview segment pointed by each fourth segment card belongs to the segment hotness dimension; responding to a fifth category preview triggering operation triggered by the target content identification, and displaying at least one fifth fragment card associated with the target content; the content preview segment pointed to by each fifth fragment card belongs to the story type dimension.
The target content comprises image-text content, and can specifically comprise image content, text content and other forms of content. The segment popularity dimension refers to a dimension of classifying segments according to popularity of each content segment in the target content, for example, content segments with high popularity can be divided into popular segments belonging to high popularity. The plot type dimension refers to a dimension for classifying segments according to the type of a story plot in the target content, for example, the segments of the content in the target content may be classified according to the action of the plot.
Specifically, for the image-text content, the user can trigger a fourth category of preview triggering operation for the target content identifier, for example, the user can perform a sliding operation for the target content identifier, and the terminal responds to the preview triggering operation to display at least one fourth fragment card associated with the target content. The content preview segments pointed by each fourth segment card belong to the segment hotness dimension, namely the content preview segments pointed in the fourth segment card are content segments with different hotness in the target content. The user can trigger a preview triggering operation of a fifth category for the target content identifier, for example, the user can perform a sliding operation for the target content identifier, and the terminal responds to the preview triggering operation to display at least one fifth fragment card associated with the target content. The content preview segments pointed by each fifth segment card belong to the story type dimension, i.e. the content preview segments pointed in the fifth segment card are content segments of different types of stories in the target content.
In this embodiment, for the image-text content, the terminal supports previewing the content segments in the image-text content according to at least one dimension of the segment hotness dimension or the story type dimension, and supports previewing the content segments in the different segment classification dimensions by triggering different types of preview triggering operations on the image-text content identifier, thereby improving interaction efficiency of previewing the image-text content.
In one embodiment, the content preview method further comprises: for each segment card, displaying a card label associated with each segment card; in the card label, displaying the fragment information of the content preview fragment pointed by the fragment card associated with the card label; the clip information includes at least one of a distribution position of the content preview clip in the target content, or classification dimension quantization information of the content preview clip relative to a clip classification dimension.
The card label is associated with the fragment card, and specifically can be displayed in an area associated with the fragment card, for example, can be displayed in a corner area position of the fragment card. The card label can be used for displaying fragment information, wherein the fragment information refers to related information of a content preview fragment pointed by a fragment card, and specifically can comprise the distribution position of the content preview fragment in target content and can also comprise classification dimension quantization information of the content preview fragment relative to a fragment classification dimension. The classification dimension quantization information may include classification quantization parameters of the content segments in a segment classification dimension, and different segment classification dimensions may correspond to different classification quantization parameters. For example, for a user dimension of interest, the classification quantization parameter may be an interest parameter of the content preview segment in the target content in the user dimension of interest; for the picture effect dimension, the classification quantization parameter may be a picture effect parameter of the content preview segment in the target content in the picture effect dimension.
Specifically, for each of the displayed segment cards, the terminal displays the card tag associated with each segment card, and specifically may display the card tag in the associated area of each segment card. In the card label, the terminal displays the fragment information of the content preview fragment pointed by the fragment card associated with the card label, for example, the distribution position of the content preview fragment in the target content can be displayed, and particularly, the time axis information, the chapter information and the paragraph information can be displayed; the terminal may also display classification dimension quantization information of the content preview segment relative to the segment classification dimension, and may specifically display an interest parameter, a picture effect parameter, a comment amount, a bullet screen number, and the like.
In this embodiment, the terminal displays, in the card tag associated with each fragment card, fragment information of the content preview fragment, including a distribution position of the content preview fragment or classification dimension quantization information relative to the classification dimension of the fragment, so that the content preview fragment pointed by each fragment card can be effectively identified by the card tag, and further preview screening is performed by using the user according to the card tag, thereby improving interaction efficiency of previewing the target content.
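A minimal sketch of what a card label might carry and how it could be rendered, assuming the label holds either a distribution position or a named quantization value (or both); the data shape and formatting are illustrative.

```typescript
// Card label carrying segment information: distribution position and/or quantization info.

interface CardLabel {
  position?: { startSeconds: number } | { chapter: number };
  quantization?: { name: string; value: number }; // e.g. { name: "interest", value: 0.87 }
}

function renderLabel(label: CardLabel): string {
  const parts: string[] = [];
  if (label.position) {
    parts.push(
      "startSeconds" in label.position
        ? `at ${Math.floor(label.position.startSeconds / 60)}min`
        : `chapter ${label.position.chapter}`,
    );
  }
  if (label.quantization) {
    parts.push(`${label.quantization.name}: ${label.quantization.value}`);
  }
  return parts.join(" / ");
}

console.log(renderLabel({ position: { startSeconds: 750 }, quantization: { name: "interest", value: 0.87 } }));
// "at 12min / interest: 0.87"
```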
In one embodiment, the content preview method further comprises: responding to triggering operation of the card label, and displaying a sequencing operation area; and responding to the sequencing control operation triggered in the sequencing operation area, and sequencing and displaying at least one fragment card according to the sequencing mode designated by the sequencing control operation.
The sorting operation area is an operation area for controlling the sorted display of the fragment cards; a user can custom-adjust the display order of the fragment cards by triggering a sorting control operation in the sorting operation area. Specifically, the user may trigger an interactive operation on the card label, for example, the user may click on the card label, and the terminal displays the sorting operation area in response to the triggering operation on the card label. The user can then trigger a sorting control operation in the sorting operation area to adjust the sorting mode of the fragment cards according to actual needs. The terminal responds to the sorting control operation triggered by the user in the sorting operation area, determines the sorting mode designated by the sorting control operation, such as forward order, reverse order or random order, and sorts and displays at least one fragment card according to the determined sorting mode.
In this embodiment, the terminal responds to the sorting control operation triggered by the user through the card tag, and sorts and displays the fragment cards in a sorting manner determined according to the sorting control operation, so as to support the user to flexibly set the sorting manner of the fragment cards according to actual needs, facilitate the user to preview according to actual needs, and facilitate improving the interaction efficiency of previewing the target content.
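A minimal sketch of the re-ordering step, assuming the sorting operation area offers forward, reverse and random modes and that each card exposes a numeric sort key; the shuffle used for random order is an ordinary Fisher-Yates shuffle and is not prescribed by the text.

```typescript
// Re-order clip cards according to the sorting mode chosen in the sorting operation area.

type SortMode = "forward" | "reverse" | "random";

interface SortableCard { segmentId: string; sortKey: number; }

function applySortMode(cards: SortableCard[], mode: SortMode): SortableCard[] {
  const copy = [...cards];
  switch (mode) {
    case "forward":
      return copy.sort((a, b) => a.sortKey - b.sortKey);
    case "reverse":
      return copy.sort((a, b) => b.sortKey - a.sortKey);
    case "random":
      // Fisher-Yates shuffle for a random display order.
      for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
      }
      return copy;
  }
}
```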
In one embodiment, in a target fragment card in at least one fragment card, previewing the pointed content preview fragment includes: in response to a preview presentation event for a target segment card of the at least one segment card, a preview presentation is made for the pointed-to content preview segment in the target segment card.
The preview display event is an event for triggering a target fragment card in at least one fragment card to perform preview display, and specifically may be generated by user triggering, or may be automatically generated when a condition is met, for example, the preview display event may be automatically generated when a preset time is reached. Specifically, for at least one displayed fragment card, when the terminal detects a preview display event, for example, when the terminal detects a preview display operation triggered by a user or a preview display condition is met, the terminal determines a target fragment card from the at least one fragment card, and in the determined target fragment card, performs preview display on the pointed content preview fragment, thereby realizing preview display processing on the content preview fragment.
In this embodiment, when a preview display event is detected, the terminal responds to the preview display event and performs preview display on the pointed content preview segment in the displayed target segment card, so that the content preview segment can be subjected to preview display according to the preview display requirement, and a user can preview according to the actual requirement.
In one embodiment, in response to a preview presentation event for a target segment card of the at least one segment card, a preview presentation is made for the pointed-to content preview segment in the target segment card, including at least one of: in response to a target fragment card in the at least one fragment card moving to a preview display position, preview displaying the pointed content preview fragment in the target fragment card; responding to a preview display operation triggered on a target fragment card in the at least one fragment card, and preview displaying the pointed content preview fragment in the target fragment card; responding to the preview display condition being met, sequentially moving the fragment cards of the at least one fragment card to the preview display position, and sequentially preview displaying the pointed content preview fragments in the fragment cards at the preview display position; and responding to the preview display condition being met, sequentially preview displaying the pointed content preview fragments in target fragment cards in the at least one fragment card.
The target fragment card is a fragment card triggering the preview display of the pointed content preview fragment. The preview display position is a position preset for performing preview display, and when the clip card moves to the preview display position, the pointed content preview clip is triggered to be subjected to preview display in the clip card. The preview display position can be flexibly set according to actual needs, for example, the preview display position can be the middle position of the interface. The preview display operation may be an interactive operation triggered by the user for the fragment card, for example, may be a click operation of the fragment card by the user. The preview display condition can be set according to actual needs, for example, the preview display condition can be various environmental conditions such as time condition, place condition, etc., and when the preview display condition is satisfied, a preview display event can be generated to trigger the preview display of the pointed content preview segment in the segment card.
Specifically, the user may trigger an interactive operation on the displayed segment cards, for example, the user may drag or slide each segment card. When the target segment card moves to the preview display position, this indicates that the user needs to preview the content preview segment pointed to by the target segment card, so the terminal previews and displays the pointed content preview segment in the target segment card, for example, by playing video content or enlarging image-text content. The user can also trigger a preview display operation on the target fragment card, for example, by directly clicking the target fragment card, which indicates that the user needs to display the content preview fragment pointed to by the target fragment card, and the terminal then previews and displays the pointed content preview fragment in the target fragment card. The terminal can also detect whether the preview display condition is met, and if so, sequentially move each fragment card to the preview display position and sequentially preview display the pointed content preview fragments in the fragment cards at the preview display position. In addition, the terminal can directly preview and display the pointed content preview segments in sequence in each segment card without moving the segment cards.
In this embodiment, when the clip card moves to the preview display position, the user triggers the preview display operation, or the preview display condition is satisfied, the terminal triggers the preview display of the pointed content preview clip in the clip card, so that the preview display can be performed on the content preview clip according to the preview display requirement in various manners, and the user can preview according to the actual requirement.
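For illustration, a sketch of how the different preview presentation events could be funneled into one handler; the event names and the pacing of the condition-met branch are assumptions.

```typescript
// Dispatch the three kinds of preview presentation events to a single preview starter.

type PreviewEvent =
  | { kind: "moved-to-preview-position"; cardId: string }
  | { kind: "tapped"; cardId: string }
  | { kind: "condition-met"; orderedCardIds: string[] };

function handlePreviewEvent(event: PreviewEvent, startPreview: (cardId: string) => void): void {
  switch (event.kind) {
    case "moved-to-preview-position":
    case "tapped":
      startPreview(event.cardId);
      break;
    case "condition-met":
      // Preview each card in turn; a real implementation would pace these, e.g. by timer.
      event.orderedCardIds.forEach(startPreview);
      break;
  }
}
```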
In one embodiment, the target content comprises video content; in a target fragment card in at least one fragment card, performing preview display on the pointed content preview fragment, including: for a target fragment card in a preview display position in at least one fragment card, in the target fragment card, playing the audio and video of the pointed content preview fragment; and for the mute segment cards which are not in the preview display position in the at least one segment card, performing mute play on the pointed content preview segment in the mute segment cards.
The preview display position is a position preset for performing preview display, and when a fragment card moves to the preview display position, the pointed content preview fragment is triggered to be previewed in the fragment card. The target fragment card is a fragment card whose pointed content preview segment needs to be played in the audio and video playing mode, and the mute fragment card is a fragment card whose pointed content preview segment needs to be played in the mute playing mode. Audio and video playing means that the picture of the content preview segment is played and the audio in the content preview segment is also played, whereas mute playing is a playing mode in which only the picture of the content preview segment is played.
Specifically, the target content comprises video content, and the content preview segment pointed by the segment card is a video segment in the video content. When previewing video content, the terminal can determine the position of the fragment card, and for the target fragment card at the preview display position, the terminal plays the pointed content preview fragment in the target fragment card in an audio-video manner, namely plays the pointed content preview fragment by the target fragment card according to a conventional playing manner, namely plays the picture of the content preview fragment and plays the audio of the content preview fragment. And for the mute segment cards which are not in the preview display position, the terminal performs mute play on the pointed content preview segment in the mute segment cards, namely, the terminal also plays the pointed content preview segment of the mute segment cards in the mute segment cards, but only plays the picture of the pointed content preview segment, and shields the audio frequency to perform mute play.
In this embodiment, for video content, the pointed content preview clip is played with audio and video in the target clip card at the preview display position, and is played muted in the mute clip cards not at the preview display position, so that a plurality of content preview clips can be played simultaneously while avoiding audio crosstalk between them, which improves the preview efficiency for video content.
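A minimal sketch of the muting rule, assuming each card wraps a player object with a mute option; the same helper also covers the selected-play case described next, by passing the user-selected card as the focused card.

```typescript
// Only the focused (target / selected) card plays with audio; all others play muted.

interface VideoCardPlayer {
  cardId: string;
  play(options: { muted: boolean }): void;
}

function playAllCards(players: VideoCardPlayer[], focusedCardId: string): void {
  for (const player of players) {
    // Mute every card except the focused one to avoid audio crosstalk.
    player.play({ muted: player.cardId !== focusedCardId });
  }
}
```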
In one embodiment, the content preview method further comprises: responding to a selected playing operation triggered in at least one fragment card, and playing the audio and video of the pointed content preview fragment in the selected fragment card selected by the selected playing operation; and in the fragment cards except the selected fragment card in the at least one fragment card, the pointed content preview fragment is silently played.
The selected playing operation is triggered by the user and is used for selecting, from the at least one clip card, the clip card whose content needs to be played with audio and video. For example, the user may trigger an interaction with the selected clip card that requires audio and video playback, such as clicking on the selected clip card, thereby triggering the selected playing operation.
Specifically, the user determines a selected clip card from at least one clip card, and triggers a selected playing operation for the selected clip card, and the terminal responds to the selected playing operation of the user on the selected clip card to perform audio and video playing on the pointed content preview clip in the selected clip card, namely playing the picture of the content preview clip and playing the audio of the content preview clip. And for the segment cards except the selected segment card in at least one segment card, in other segment cards, the pointed content preview segment is subjected to mute play, namely, only the picture of the pointed content preview segment is played, and the audio is shielded, so that mute play is performed.
In this embodiment, for video content, the pointed content preview clip is played with audio and video in the selected clip card chosen by the user through the selected playing operation, and is played muted in the other clip cards, so that a plurality of content preview clips can be played simultaneously according to the user's needs while avoiding audio crosstalk between them, which improves the preview efficiency for video content.
In one embodiment, the target content comprises teletext content; in a target fragment card in at least one fragment card, performing preview display on the pointed content preview fragment, including: and in target fragment cards in at least one fragment card, sequentially carrying out preview display on the pointed content preview fragments in an enlarged display mode according to the preview interval duration.
The target content comprises image-text content, and specifically may comprise image content, text content and the like. The preview interval duration is duration for performing preview display, and the preview interval duration can be flexibly set according to actual needs, for example, 3 seconds.
Specifically, for image-text content, the terminal may sequentially preview the pointed content preview segments in the target segment cards in an enlarged display manner according to a preset preview interval duration. For example, if 5 fragment cards numbered 0 to 4 are displayed and the preview interval duration is 5 seconds, the terminal can preview the pointed content preview fragment in fragment card 0 in an enlarged display mode so that the user previews the image-text content, keep that display for 5 seconds, and after 5 seconds switch to fragment card 1 for previewing, that is, preview the pointed content preview fragment in fragment card 1 in an enlarged display mode.
In this embodiment, for the image-text content, the terminal sequentially performs preview display on the pointed content preview segments in an enlarged display manner in a preview interval time, so that the user can preview the image-text content orderly without actively performing an operation, and the preview efficiency for the image-text content can be improved.
In one embodiment, the preview display of the pointed content preview segments in an enlarged display mode sequentially according to the preview interval duration comprises: sequentially previewing and displaying the pointed content preview segments in an enlarged display mode according to the first preview interval duration; and for each content preview segment for preview display, sequentially displaying at least one preview picture included in the content preview segment according to the second preview interval duration.
The first preview interval duration is the interval duration when the content preview fragments pointed by different fragment cards are subjected to preview display, and the second preview interval duration is the interval duration when the content preview fragments pointed by the same fragment card are subjected to preview display among different preview pictures. The content preview segment may include at least one preview picture, and when the number of preview pictures is multiple, each preview picture needs to be previewed in turn, where the interval duration between each preview picture is the second preview interval duration.
Specifically, when the terminal sequentially previews the pointed content preview fragments in each fragment card in an enlarged display mode, the interval duration between different fragment cards is the first preview interval duration. That is, the terminal sequentially previews the pointed content preview segments in an enlarged display mode at the first preview interval duration: for example, after the preview display of the pointed content preview segment in the first segment card is finished, when the interval duration reaches the first preview interval duration, the terminal switches to the second segment card and previews the pointed content preview segment in an enlarged display mode. For each content preview segment being previewed, the content preview segment includes at least one preview picture, and the terminal sequentially displays the preview pictures included in the content preview segment at the second preview interval duration. For example, if the content preview segment pointed to by the first segment card includes 3 preview pictures, the terminal first previews the first preview picture in the first segment card in an enlarged display manner; when the interval duration reaches the second preview interval duration, the terminal previews the second preview picture; when the interval duration again reaches the second preview interval duration, the terminal previews the third preview picture; and when the interval duration then reaches the first preview interval duration, the terminal switches to the second segment card and sequentially previews the pointed content preview segment in an enlarged display manner.
In this embodiment, for the content preview segments pointed by different segment cards respectively, the terminal sequentially performs preview display at a first preview interval duration, and for the preview pictures in the content preview segments pointed by the same segment card, the terminal sequentially performs preview display at a second preview interval duration, so that a user can conveniently perform orderly preview on the content of the image, and the preview efficiency of the content of the image can be improved without active operation.
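A minimal sketch of the two-level timing, with assumed interval values and an async loop standing in for the terminal's timers: frames within one card advance at the second preview interval duration, and the preview moves to the next card after the first preview interval duration.

```typescript
// Two-level timed preview for image-text content (interval values assumed).

interface TextClipCard { cardId: string; previewFrames: string[]; }

const FIRST_INTERVAL_MS = 5000;  // between different clip cards
const SECOND_INTERVAL_MS = 2000; // between preview frames of the same card

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function autoPreview(
  cards: TextClipCard[],
  show: (cardId: string, frame: string) => void,
): Promise<void> {
  for (const card of cards) {
    for (let i = 0; i < card.previewFrames.length; i++) {
      show(card.cardId, card.previewFrames[i]); // enlarged display of this preview frame
      if (i < card.previewFrames.length - 1) await sleep(SECOND_INTERVAL_MS);
    }
    await sleep(FIRST_INTERVAL_MS); // then switch to the next clip card
  }
}
```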
In one embodiment, the target content comprises teletext content; the content preview segment includes at least one preview screen; the content preview method further comprises the following steps: for each clip, displaying a preview screen operation entry of the content preview clip pointed to by each clip; and responding to the preview picture selection operation triggered by the preview picture operation entrance, and displaying the preview picture selected by the preview picture selection operation.
The target content comprises image-text content, and specifically may comprise image content, text content and the like. The preview picture operation entrance is used for triggering the selection of the preview picture in the content preview segment so that a user can select the preview picture which needs to be subjected to preview display according to actual needs.
Specifically, for each clip card, the terminal may display a preview screen operation entry for the content preview clip pointed to by that clip card. The specific form of the preview screen operation entry may be flexibly set according to actual needs, for example, a screen expansion control or a screen filter control, and its display position may also be set according to actual needs, for example, at a corner position of the clip card. The user can trigger an interactive operation on the preview screen operation entry, for example, a preview screen selection operation; the terminal responds to the preview screen selection operation, determines the preview screen selected by the preview screen selection operation, and displays the selected preview screen in an enlarged display mode.
In this embodiment, the user may trigger the selection of the preview screen to preview through the preview screen operation entry associated with the clip card, so that the preview screen can be flexibly selected to preview according to the user's needs, and the preview efficiency for the image-text content can be improved.
In one embodiment, displaying at least one clip card associated with the target content in response to a preview trigger operation on a target category triggered by the target content identification comprises: responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying a floating layer area associated with the target content identifier; and displaying at least one clip card associated with the target content in the floating layer area.
The floating layer area is an area floating above the interface, and the area range and display parameters of the floating layer area can be set according to actual needs; for example, the floating layer area may cover the entire interface and have some transparency. Specifically, the user may trigger a preview trigger operation of the target category for the target content identifier, the terminal displays a floating layer area associated with the target content identifier in response to the preview trigger operation, and the terminal displays at least one fragment card associated with the target content in the floating layer area. In a specific implementation, the terminal may display the at least one segment card in a predetermined order in the floating layer area.
In this embodiment, the terminal triggers the display of at least one segment card associated with the target content in the floating layer area of the display, so that the segment card can be prevented from being blocked by other interface elements, and efficient previewing of the target content is facilitated.
In one embodiment, as shown in fig. 6, the content preview method further includes a process of generating a clip card, specifically including:
step 602, generating a fragment acquisition request based on a preview trigger operation.
The fragment acquisition request is used to request the server to acquire the content preview fragments in the target content that belong to the fragment classification dimension. The fragment acquisition request may be generated based on the preview trigger operation, in particular based on an operating parameter of the preview trigger operation, for example based on the category of the preview trigger operation. Specifically, the terminal may generate the fragment acquisition request based on the preview trigger operation: the terminal may acquire an operation parameter of the preview trigger operation and generate the fragment acquisition request based on that operation parameter.
Step 604, sending a fragment acquisition request to a server; the segment acquisition request is used to instruct the server to determine content preview segments that belong to the segment classification dimension based on the segment acquisition request and return the content preview segments.
Specifically, the terminal transmits the generated fragment acquisition request to the server to instruct the server to determine a content preview fragment belonging to the fragment classification dimension from the target content based on the fragment acquisition request, and return the determined content preview fragment to the terminal. In particular implementations, the server may determine a category of preview trigger operations based on the segment acquisition request, determine a matching segment classification dimension based on the category, and acquire content preview segments from the target content that belong to the segment classification dimension.
Step 606, generating respective fragment cards according to the content preview fragments returned by the server.
Specifically, the terminal receives the content preview segments returned by the server, generates respective segment cards based on the obtained content preview segments, each segment card can correspond to one content preview segment, and the segment cards can point to the corresponding content preview segments so as to preview the pointed content preview segments.
In this embodiment, the terminal acquires the content preview segments belonging to the segment classification dimension from the server by generating the segment acquisition request, and generates respective segment cards based on the acquired content preview segments to display the segment cards, so that the content preview segments belonging to the segment classification dimension can be accurately acquired.
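For illustration, a client-side sketch of steps 602 to 606, assuming a JSON endpoint; the URL, payload shape and field names are invented and not part of the method.

```typescript
// Steps 602-606 on the terminal: build the request, call the server, turn segments into cards.

interface PreviewTriggerOperation { contentId: string; category: string; }
interface ContentPreviewSegment { segmentId: string; dimension: string; startSeconds?: number; }
interface ClipCardModel { cardId: string; segment: ContentPreviewSegment; }

async function fetchClipCards(op: PreviewTriggerOperation): Promise<ClipCardModel[]> {
  // Step 602: generate the segment acquisition request from the operation parameters.
  const request = { contentId: op.contentId, triggerCategory: op.category };

  // Step 604: send it to the server, which resolves the matching classification
  // dimension and returns the preview segments that belong to it.
  const response = await fetch("/api/preview-segments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  const segments: ContentPreviewSegment[] = await response.json();

  // Step 606: generate one clip card per returned content preview segment.
  return segments.map((segment, index) => ({ cardId: `card-${index}`, segment }));
}
```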
In one embodiment, the fragment acquisition request is generated based on a preview trigger operation, including at least one of: determining a fragment classification dimension matched with the target category according to the category mapping relation, and generating a fragment acquisition request according to the fragment classification dimension; and generating a fragment acquisition request according to the target category of the preview triggering operation.
The category mapping relation records the matching relation between categories of the preview triggering operation and fragment classification dimensions, and the terminal can determine the matched fragment classification dimension based on the category mapping relation according to the category of the preview triggering operation.
Specifically, the terminal may obtain a preset category mapping relationship, determine a segment classification dimension matched with the target category of the preview triggering operation according to the category mapping relationship, and generate a segment obtaining request based on the segment classification dimension. The segment acquisition request carries a segment classification dimension, and after the terminal sends the segment acquisition request to the server, the server can acquire a content preview segment belonging to the segment classification dimension from the target content based on the segment classification dimension, and return the content preview segment to the terminal. The terminal may also determine a target category of the preview trigger operation and generate the fragment acquisition request directly based on the target category. After the terminal sends the fragment acquisition request to the server, the server can determine the fragment classification dimension matched with the target category based on the category mapping relation and the target category carried by the fragment acquisition request, and the server can acquire the content preview fragment belonging to the fragment classification dimension from the target content according to the determined fragment classification dimension and return the content preview fragment to the terminal.
In this embodiment, the terminal may generate a segment acquisition request according to at least one of the segment classification dimension and the target category, so as to acquire the content preview segment belonging to the segment classification dimension from the server, and accurately acquire the content preview segment belonging to the segment classification dimension.
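A minimal sketch of the two request-building options, assuming the category mapping relation is a simple lookup table on the terminal; whether the dimension or the raw category is sent is chosen by a flag here purely for illustration.

```typescript
// Build the segment acquisition request either from the resolved dimension or the raw category.

const categoryMapping: Record<string, string> = {
  "swipe-up": "user-interest",
  "swipe-down": "picture-effect",
  "swipe-right": "character-info",
};

type SegmentRequest =
  | { contentId: string; dimension: string }        // dimension resolved on the terminal
  | { contentId: string; triggerCategory: string }; // resolution deferred to the server

function buildSegmentRequest(contentId: string, category: string, resolveLocally: boolean): SegmentRequest {
  return resolveLocally
    ? { contentId, dimension: categoryMapping[category] }
    : { contentId, triggerCategory: category };
}
```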
In one embodiment, the segment classification dimension includes a user dimension of interest; the content preview method further comprises: acquiring account attribute information and account behavior information of a user account; and performing interest matching between the account attribute information and account behavior information and each content segment in the target content to obtain at least one content preview segment belonging to the user interest dimension.
The dimension of interest of the user refers to a dimension of classifying the segments according to preference of the user, and specifically can be determined by matching the account data of the user with the content segments. The user account number refers to an account number of a user logging in a content platform, and account number attribute information is information related to the attribute of the account number, can represent attribute characteristics of the user, and specifically can comprise various information such as gender, work, academic, native, age, friend relation and the like of the user. The account behavior information is information generated by a user performing activities in the content platform through a user account, and specifically can include but is not limited to various interaction information aiming at content, such as browsing, commenting, forwarding, deleting and the like.
Specifically, the terminal may obtain the account attribute information and account behavior information of the user account, for example, by querying the account information of the user account based on the account identifier. The terminal can perform interest matching between the account attribute information and account behavior information and each content segment in the target content to determine which content segments in the target content the user is interested in, thereby obtaining at least one content preview segment belonging to the user interest dimension. In a specific application, the terminal can process the account attribute information, the account behavior information and the fragment data of the content fragments through a recall layer, a ranking layer and a re-ranking layer to determine a recommendation list that matches the user's content preferences. Specific model algorithms may include various recommendation algorithms such as UserCF (user-based collaborative filtering), ItemCF (item-based collaborative filtering), matrix factorization (latent vector representation), and the like.
In this embodiment, the terminal performs the interest matching with each content segment in the target content based on the account attribute information and the account behavior information of the user account, so as to obtain at least one content preview segment belonging to the user interest dimension, thereby being capable of accurately determining the content preview segment of the user interest dimension from the target content, so that the user can preview the content in the user interest dimension, and being beneficial to improving the interaction efficiency of the content preview.
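As a rough illustration only, the sketch below replaces the recall/ranking/re-ranking pipeline and the UserCF/ItemCF/matrix-factorization models mentioned above with a toy tag-overlap score; it is not the recommendation algorithm itself, just the shape of "interest matching" between account data and content segments. All field names are assumptions.

```typescript
// Toy interest-matching score between account data and content segments (illustrative only).

interface AccountProfile { attributeTags: string[]; behaviorTags: string[]; }
interface ContentSegmentMeta { segmentId: string; tags: string[]; }

function interestScore(profile: AccountProfile, segment: ContentSegmentMeta): number {
  const userTags = new Set([...profile.attributeTags, ...profile.behaviorTags]);
  const hits = segment.tags.filter(tag => userTags.has(tag)).length;
  return segment.tags.length === 0 ? 0 : hits / segment.tags.length;
}

function selectInterestSegments(
  profile: AccountProfile,
  segments: ContentSegmentMeta[],
  topN: number,
): ContentSegmentMeta[] {
  // Keep the segments most aligned with the account's tags as "user interest" previews.
  return [...segments]
    .sort((a, b) => interestScore(profile, b) - interestScore(profile, a))
    .slice(0, topN);
}
```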
In one embodiment, the content preview method further comprises: displaying authorization notification information for a user account; and responding to confirmation authorization triggered by the user account to the authorization notification information, and acquiring account behavior information of the user account in the activity process of the user account.
The authorization notification information is used for prompting the user to authorize, and particularly, the authorization is collected aiming at the activity data of the user in the content platform so as to analyze and recommend interested contents aiming at the user. The account behavior information is account data acquired by a user in the process of the user account in the content platform.
Specifically, the terminal may display authorization notification information for the user account to prompt the user to perform authorization, and the user may trigger an operation for the authorization notification information, for example, the user triggers confirmation authorization for the authorization notification information, which indicates that the user agrees to authorize, so that the terminal obtains account behavior information of the user account in the activity process of the user account.
In this embodiment, after the authorization of the user is obtained, in the activity process of the user account, the terminal obtains the account behavior information of the user account, so as to analyze and recommend the interesting content for the user, and ensure the data security of the user account.
In one embodiment, the segment classification dimension includes a persona information dimension; the content preview method further comprises the following steps: identifying the character information aiming at the target content to obtain the character information in the target content; slicing the target content based on the character information to obtain at least one content preview segment belonging to the character information dimension.
The character information dimension refers to a dimension of classifying segments according to information about characters included in the target content, and may specifically include, but is not limited to, character information of various aspects such as character relationships, actor information, role information, and dubbing information.
Specifically, the terminal can perform character information identification on the target content; in particular, based on computer vision and natural language processing technologies, it can identify character information from the forms of address characters use for one another in the target content, the introduction information of the target content, and the like, so as to obtain the character information in the target content. The specific types of character information can be preconfigured according to actual needs, for example, character relationships. The terminal then slices the target content based on the character information to obtain at least one content preview segment belonging to the character information dimension. In a specific application, the terminal may determine the content segments that include the character information from the target content, and cut out at least one content preview segment belonging to the character information dimension from the target content.
In this embodiment, the terminal identifies the character information of the target content, and slices the target content through the character information obtained by identification, so as to obtain at least one content preview segment belonging to the character information dimension, thereby being capable of accurately determining the content preview segment of the character information dimension from the target content, so that a user can preview the content in the character information dimension, and being beneficial to improving the interaction efficiency of the content preview.
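A minimal sketch of the slicing step, assuming the character-information recognition (computer vision and NLP) has already produced character mentions with timestamps; the fixed window length is an arbitrary illustrative choice.

```typescript
// Slice target content around recognized character mentions (recognition itself not shown).

interface CharacterMention { characters: string[]; atSeconds: number; }
interface CharacterSegment { characters: string[]; startSeconds: number; endSeconds: number; }

const WINDOW_SECONDS = 30; // assumed slice length around each mention

function sliceByCharacterInfo(mentions: CharacterMention[], contentDuration: number): CharacterSegment[] {
  return mentions.map(m => ({
    characters: m.characters,
    startSeconds: Math.max(0, m.atSeconds - WINDOW_SECONDS / 2),
    endSeconds: Math.min(contentDuration, m.atSeconds + WINDOW_SECONDS / 2),
  }));
}
```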
In one embodiment, the segment classification dimension includes a picture effect dimension; the content preview method further comprises the following steps: slicing the target content to obtain each content segment; respectively carrying out picture feature recognition on each content segment to obtain respective segment picture features of each content segment; and classifying each content segment based on the segment picture characteristics to obtain at least one content preview segment belonging to the picture effect dimension.
The dimension of the picture effect refers to a dimension of classifying segments according to the effect of the picture in the target content, and specifically may include, but not limited to, a picture effect including a beautiful effect, a stimulating effect, a shocking effect, a suspense effect, a disaster effect, and the like. The segment picture features are used to characterize the picture effect level in the content segment, and may specifically include a beautiful effect level, a stimulating effect level, a shocking effect level, a suspense effect level, and the like.
Specifically, the terminal may slice the target content, for example, may slice the target content at a certain interval, to obtain each content segment. The terminal respectively performs picture feature recognition on each content segment, and specifically can perform video structural analysis, target detection, character recognition, action recognition and the like on the content segment based on a computer vision technology to obtain respective segment picture features of each content segment. And the terminal classifies each content segment according to the segment picture characteristics to obtain at least one content preview segment belonging to the picture effect dimension. In a specific application, the terminal may divide content segments belonging to the same picture effect into the same category, thereby obtaining content preview segments of various picture effects.
In this embodiment, the terminal performs image feature recognition on each content segment in the target content, and classifies each content segment according to the obtained segment image features to obtain at least one content preview segment belonging to the image effect dimension, so that the content preview segment of the image effect dimension can be accurately determined from the target content, so that a user can preview the content in the image effect dimension, and the interactive efficiency of the content preview is improved.
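A minimal sketch of the classification step, assuming upstream picture-feature recognition has already produced per-effect scores for each segment; the effect names follow the examples in the text and the threshold is an arbitrary illustrative value.

```typescript
// Group content segments by their strongest picture-effect feature (scores assumed given).

type PictureEffect = "beautiful" | "stimulating" | "shocking" | "suspense";

interface ScoredSegment { segmentId: string; effectScores: Record<PictureEffect, number>; }

const EFFECT_THRESHOLD = 0.6; // assumed minimum confidence to count as that effect

function classifyByEffect(segments: ScoredSegment[]): Map<PictureEffect, ScoredSegment[]> {
  const groups = new Map<PictureEffect, ScoredSegment[]>();
  for (const segment of segments) {
    // Pick the effect with the highest score for this segment.
    const [bestEffect, bestScore] = (Object.entries(segment.effectScores) as [PictureEffect, number][])
      .reduce((best, entry) => (entry[1] > best[1] ? entry : best));
    if (bestScore >= EFFECT_THRESHOLD) {
      const bucket = groups.get(bestEffect) ?? [];
      bucket.push(segment);
      groups.set(bestEffect, bucket);
    }
  }
  return groups;
}
```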
In one embodiment, as shown in fig. 7, a content previewing method is provided. The method is performed by a computer device, specifically by a computer device such as a terminal or a server, or by the terminal and the server together. In this embodiment of the present application, the method is described by taking its application to the terminal in fig. 1 as an example, and includes the following steps:
step 702, a target content identification pointing to target content is presented.
Step 704, in response to a preview trigger operation of a target category triggered by the target content identification, displaying at least one fragment card associated with the target content; each segment card points to a content preview segment of the target content, and the segment classification dimension of the content preview segment pointed to by each segment card is matched with the target class.
And step 706, in the target fragment card in the at least one fragment card, previewing and displaying the pointed content preview fragment.
The target content identifier of the target content can respond to preview triggering operations of at least one category. The number of categories into which the preview triggering operation is divided can be set according to actual needs; for example, it can be divided into one, two or more categories, and preview triggering operations of different categories can trigger previews of content preview fragments for different fragment classification dimensions. There is at least one fragment card associated with the target content. The matching relation between the fragment classification dimension and the target category can be preset according to actual requirements. Specifically, a target content identifier pointing to the target content is displayed in the terminal, and the user can trigger an interaction with the target content identifier, specifically a preview triggering operation of the target category. The terminal responds to the preview triggering operation of the target category triggered by the user on the target content identifier by displaying at least one fragment card pointing to a content preview segment of the target content, where the fragment classification dimension of the content preview segment pointed to by each fragment card is matched with the target category, and the terminal previews the pointed content preview segment in the target fragment card.
In a specific application, as shown in fig. 8, the target content displayed in the interface of various types of terminals is video content, specifically a TV drama, and the terminal displays the TV drama cover. The user can trigger a preview triggering operation for the TV drama cover, specifically an upward-sliding preview triggering operation, and the terminal displays a clip card associated with the TV drama. The clip card points to a content preview clip in the TV drama, the content preview clip belongs to the user interest dimension of "guess you like", and the user can click on the clip card to preview and play the content preview clip pointed to by the clip card. As shown in fig. 9, for the TV drama cover in fig. 8, the user may trigger a downward-sliding preview triggering operation, and the terminal displays a clip card associated with the TV drama; the clip card points to a content preview clip in the TV drama, the content preview clip belongs to the "highlight" dimension, and the user may click on the clip card to preview the content preview clip pointed to by the clip card. As shown in fig. 10, for the TV drama cover in fig. 8 and fig. 9, the user may trigger a rightward-sliding preview triggering operation, and the terminal displays a plurality of clip cards associated with the TV drama; each clip card points to a different content preview clip in the TV drama, each content preview clip belongs to the "character relationship" dimension, and the user may click on a clip card to preview the content preview clip pointed to by that clip card. For preview triggering operations of different categories, the number of clip cards displayed for the matched fragment classification dimension can be flexibly set; for example, the number of clip cards for the first and second fragment classification dimensions may be 1, the number for the third fragment classification dimension may be 5, and the number for the fourth fragment classification dimension may be 3.
In one specific application, as shown in fig. 11, a TV drama cover is displayed in the upper left corner of the terminal interface and an introduction to the drama is displayed on the right side. The user can trigger an interaction with the TV drama cover, for example, the user can click the TV drama title to trigger a preview triggering operation, and a plurality of fragment cards can then be displayed at the bottom of the interface. Each fragment card points to a different content preview fragment in the TV drama, each content preview fragment belongs to the user interest dimension of "guess you like", and the user can click a fragment card to preview and play the content preview fragment pointed to by that fragment card.
In the content preview method, the displayed target content identifier pointing to the target content can respond to the triggered preview triggering operation of the target category, the fragment cards of the content preview fragments pointing to the target content are displayed, the fragment classification dimension of the content preview fragments pointed to by each fragment card is matched with the target category, and the pointed content preview fragments are previewed and displayed in the target fragment cards, so that the preview triggering operation of the target content identifier triggering the target category is supported, the content fragments of the matched fragment classification dimension are previewed, the interaction operation of the content preview is simplified, and the interaction efficiency of the content preview is improved.
The application also provides an application scene, which applies the content preview method. Specifically, the application of the content preview method in the application scene is as follows:
and displaying the content identifiers pointing to the contents in a content list of the content platform, wherein a user can trigger a preview triggering operation of a target category aiming at the target content identifier pointing to the target content, the terminal displays at least one fragment card associated with the target content, and previews and displays the pointed content preview fragments in target fragment cards in the at least one fragment card. In addition, the user can trigger access operation aiming at the target content identifier pointing to the target content, and enter a content introduction page of the target content, at least one fragment card associated with the target content is displayed in the content introduction interface, and the pointed content preview fragment is previewed in the target fragment card in the at least one fragment card.
The application also provides an application scene, which applies the content preview method. Specifically, the application of the content preview method in the application scene is as follows:
Currently, long content that requires a long viewing time, including movies, TV series, text, comics, and the like, is costly for users to screen; before watching or reading, users need various ways to judge whether it is worth their time. In general, a user cannot determine in advance whether the content meets their needs: for video content the user has to enter a player to play and preview it, and for image-text content the user has to preview it in a reader, before the content can be screened. For long videos such as movies and TV series, the usual approach is to fast-forward through key frames or to watch individual video clips on a video detail page; the cost of repeatedly checking and confirming is high, and the user cannot conveniently learn what is attractive about the content while looking at the cover, so the interaction efficiency of content preview is low. For image-text content such as web novels and comics, there is usually no quick preview mode at all; users typically have to read the table of contents or various recommendations and comments to decide whether the content is worth reading, and because the amount of image-text content is extremely large, it is difficult to browse it quickly and confirm whether anything interesting exists, so the interaction efficiency of content preview is likewise low.
Based on this, the embodiment provides a content preview method, which supports a user to expand contents such as movies, television shows, web literature and the like in various dimensions according to user interests, character relationships, wonderful degrees, plot development and the like by pressing the covers for a long time when the user sees the covers of the contents, and the user can quickly browse content fragments through different dimensions, wherein the content fragments can include but are not limited to video fragments, key frame pictures, text paragraphs and the like, so that the user can be supported to quickly and effectively learn information of the long content. According to the content preview method, through the mode that the long content is displayed in various dimensions quickly according to the cover, the content screening speed of the user is improved, the cost of repeatedly entering the detail page for viewing is reduced, the content viewing efficiency of the user is improved, the interaction efficiency of content preview can be effectively improved, and the experience and commercial value of products are improved.
Video generally refers to the set of techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When more than 24 frames are displayed per second, the human eye, by the persistence-of-vision principle, can no longer distinguish individual static pictures, and the succession of pictures appears as a smooth, continuous visual effect; such successive pictures are called video. Long video generally refers to video longer than about half an hour, mainly films and TV series, as distinguished from short videos of, say, 15 seconds. Web literature refers to novel content published on the web, including readable text content published directly online or published elsewhere and then made available online. Long content generally refers to content that requires a long viewing time, including but not limited to long video and web novels; such content is costly for users to screen, and users need various ways to judge whether it is worth watching or reading beforehand. Character relationship refers to the conflicts or alliances between individual characters that are formed during writing and that shape the relations between people as the drama unfolds; it therefore does not only cover relatives or friends, since enmity, alliance, and the like also count as character relationships, and the relationship most likely to generate conflict in a drama is the mutually entangled triangle relationship.
Specifically, in the content preview method provided in this embodiment, analyzing the content a user is interested in requires matching against data such as the user's viewing content and viewing behavior, so the user's data collection authorization must be obtained first. That is, while using a long-content product, the user is asked to authorize operations such as recording, analyzing, and storing data about viewing content and viewing behavior. As shown in fig. 12, for video content, an authorization notification pop-up may be displayed in the video list interface of a video application (APP) so that the user can authorize the recording and analysis of their viewing content. As shown in fig. 13, for image-text content, an authorization notification pop-up may likewise be displayed in the novel list interface of a novel-reading application so that the user can authorize the recording and analysis of their viewing content.
Further, when the user sees a cover of interest, long-pressing the cover area activates the cover, which changes accordingly and is displayed in a preview activation manner; the change may include, but is not limited to, a size change, lighting up, generating a layered shadow, and the like. The long-press duration that activates the cover can be set flexibly according to actual needs, in particular for different devices and user types. As shown in fig. 14, the cover list of video content displays the covers of the various videos the video platform can play; a video cover includes an image part, a title part, and a brief introduction part. The image part may contain a video frame taken from the video, the title part the title of the video content, and the introduction part descriptive text about the video content. When a user wants to learn about a certain video, the user can long-press its cover. The long-pressed cover selected by the user is activated and expanded and is displayed in the cover list in a preview activation manner; as shown in fig. 15, the long-pressed video cover is highlighted and a layered-shadow display effect is generated to indicate to the user which video cover has been selected and triggered.
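A sketch of the configurable long-press activation just described. The threshold values per device or user type are hypothetical; the text only states that the duration is configurable.

```python
import time

# Hypothetical long-press thresholds (seconds) per device/user type.
LONG_PRESS_THRESHOLDS = {"phone": 2.0, "tablet": 2.5, "elderly_mode": 3.0}

class CoverPressTracker:
    def __init__(self, device_type: str = "phone"):
        self.threshold = LONG_PRESS_THRESHOLDS.get(device_type, 2.0)
        self._pressed_at = None

    def press(self):
        # Record when the finger lands on the cover area.
        self._pressed_at = time.monotonic()

    def release(self) -> bool:
        """Return True if the press lasted long enough to activate the cover preview."""
        if self._pressed_at is None:
            return False
        held = time.monotonic() - self._pressed_at
        self._pressed_at = None
        return held >= self.threshold
```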
For a video cover in the activated, expanded state, the user can further slide the cover upward to expand the "guess you like" content clips, that is, the content clips matched to the user's content preferences based on the user's behavior data. These may include, but are not limited to, fixed-length video clips (for example, 5 seconds each), variable-length clips (for example, 5 seconds, 10 seconds, and so on, each clip possibly different), key frames only, or a mixed presentation of video clips and key frames. As shown in fig. 16, for a video cover in the activated expanded state, the user can slide upward to expand the individual video clips of the video the cover points to along the "guess you like" user interest dimension. As shown in fig. 17, in a floating-layer interface with a certain transparency, the video clips are displayed in sequence along a time axis, each belonging to the "guess you like" user interest dimension. Each video clip is displayed as a video card, with the clip's position on the original video's time axis shown in the card's upper left corner; the user can slide the video cards up and down to browse different clips, or play a clip directly inside its card. As shown in fig. 18, the video clips may instead be displayed as video cards ordered by degree of interest within the "guess you like" dimension, with the interest value of each clip shown in the card's upper left corner; for example, the interest value of the currently selected clip is 10. The user can slide up and down to browse different clips, or play a clip directly inside its card.
Further, the content clips can be displayed in order of the time axis or of the guessed user preference, and can be sorted in forward, reverse, random, or other orders. The user can click the "guess you like" tag or the time-axis tag to trigger an adjustment of the order in which the video clips are presented. Descending or ascending order can run from top to bottom, from bottom to top, or from the center point outward to both sides, and so on. As shown in fig. 19, a time tag is displayed in the upper left corner of the video card to indicate the position of the clip on the original video's time axis; for example, the time-axis position of the currently selected clip is 11 minutes 05 seconds. The user can operate on the time tag to choose the display order: for example, clicking the time tag makes the terminal display a list of sorting modes including forward, reverse, and random, and selecting a mode adjusts the arrangement of the video cards. As shown in fig. 20, a "guess you like" degree tag is displayed in the upper left corner of the video card to indicate the user's degree of interest in the clip the card points to; for example, the interest value of the currently selected clip is 10. The user can likewise operate on this tag to choose the display order: clicking it makes the terminal display a sorting-mode list including forward, reverse, and random, and selecting a mode adjusts the arrangement of the video cards.
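A small sketch of the forward/reverse/random reordering triggered from a card tag. The dictionary keys ("timeline", "interest") are illustrative field names, not terms from the source.

```python
import random

def order_clip_cards(cards, key="timeline", mode="forward"):
    """Order clip cards by the tag the user tapped (time-axis position or interest value).

    `mode` is one of "forward", "reverse", "random".
    """
    if mode == "random":
        shuffled = cards[:]
        random.shuffle(shuffled)
        return shuffled
    return sorted(cards, key=lambda c: c[key], reverse=(mode == "reverse"))

# Example: reorder by interest value in reverse (descending) order.
cards = [{"timeline": 665, "interest": 10}, {"timeline": 120, "interest": 7}]
print(order_clip_cards(cards, key="interest", mode="reverse"))
```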
Further, for content preview of video clips, the user may slide or click the clip content to quickly view all video clips or key frames. As shown in fig. 21, for the video cards expanded along the user interest dimension, when a clip slides to the middle of the interface its card can be enlarged, the clip it points to is played automatically with a corresponding progress bar, and the other clips remain paused. A clip can also be triggered to play by clicking: a clip visible to the user can be clicked and played in place, and the playing video can be shown in an activated, enlarged state. As shown in fig. 22, among the video cards expanded along the user interest dimension, the user can click a card whose "guess you like" value is 9, enlarge it, and play the clip it points to inside that card. Alternatively, the clips can be played automatically 3 seconds after the video content is expanded; the playing order can follow the content ordering. Specifically, the video cards can be controlled to slide automatically to the middle position for playback, or to play in a positional order such as top to bottom, bottom to top, or middle outward to both sides. As shown in fig. 23, after the video cards are displayed along the user interest dimension, automatic playback starts after a 3-second countdown. As shown in fig. 24, the video cards slide automatically to the middle position in turn for automatic playback. As shown in fig. 25, the video cards may be played automatically from top to bottom. In addition, after the video content is expanded, all clips can be controlled to play automatically at the same time so that the user can take in the content at once; in that case the clip that slides into the middle area plays with sound while the others play muted by default. The user can click a clip to trigger playback: the clicked clip is activated and enlarged and its sound is turned on, while the other clips are muted.
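A minimal sketch of the centre-focus playback rule just described, assuming hypothetical field names: only the card scrolled to the middle of the interface plays, enlarged and with sound, while the remaining cards stay paused and muted.

```python
from dataclasses import dataclass

@dataclass
class ClipCardState:
    clip_id: str
    playing: bool = False
    muted: bool = True
    enlarged: bool = False

def focus_center_card(cards, center_index):
    """Play only the card at the centre of the interface; pause and mute the rest."""
    for i, card in enumerate(cards):
        at_center = (i == center_index)
        card.playing = at_center
        card.muted = not at_center
        card.enlarged = at_center
    return cards

# Example: the second of three cards has scrolled into the middle area.
cards = [ClipCardState("c1"), ClipCardState("c2"), ClipCardState("c3")]
print(focus_center_card(cards, 1))
```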
For a video cover in the activated expanded state, the user may instead slide the cover downward to expand content clips of different character relationships, that is, clips matched to the character relationship structure based on the content data. These may include, but are not limited to, fixed-length video clips (for example, 5 seconds each), variable-length clips (for example, 5 seconds, 10 seconds, and so on), key frames only, or a mixed presentation of video clips and key frames. As shown in fig. 26, for a video cover in the activated expanded state, the user can slide downward to expand the individual video clips of the video the cover points to along the "character relationship" character information dimension. As shown in fig. 27, in a floating-layer interface with a certain transparency, the video clips are displayed in sequence, each belonging to the "character relationship" character information dimension; relationship types may include classmates, sisters, couples, brothers, enemies, and so on. Each clip is displayed as a video card, with the character relationship of the people in the clip shown in the card's upper left corner, for example as character pictures plus the relationship type. The user can slide the video cards up and down to browse different clips, or play a clip directly inside its card.
Further, the content segments can be displayed in sequence according to affinity and the like, and can be ordered in various forms such as positive order, reverse order, random order and the like. The user clicks on the relationship tag location and the order of presentation can be adjusted. The descending or ascending sequence can be from top to bottom or from bottom to top, or from the center point to the upper side and the lower side, etc. As shown in fig. 28, a person relationship tag is displayed in the upper left corner of the video card, and is used to indicate the relationship type between the people appearing in the video clip pointed to in the video clip, for example, the person relationship of the currently selected video clip is a couple relationship. The user can trigger the operation aiming at the character relation tag to select the content display sequence, for example, the user can click on the character relation tag, the terminal displays a sorting mode list comprising positive sequence, reverse sequence and random, and the user can select a corresponding sorting mode to adjust and control the arrangement sequence of each video card.
Further, for content preview of video clips, the user may slide or click the clip content to quickly view all video clips or key frames. As shown in fig. 29, for the video clips expanded by character relationship, when a clip slides to the middle of the interface it is enlarged and played automatically while the other clips remain paused; for example, the clip at the middle position whose character relationship is "couple" can be played. A clip can also be triggered to play by clicking: a clip visible to the user can be clicked and played in place, and the playing video can be shown in an activated, enlarged state. As shown in fig. 30, among the video cards expanded by character relationship, the user can click a card whose relationship is "sisters", enlarge it, and play the clip it points to inside that card. The clips can also be played automatically a certain time (such as 3 seconds) after the content is expanded; the playing order can follow the content ordering. Specifically, the video cards can be controlled to slide automatically to the middle position for playback, or to play in a positional order such as top to bottom, bottom to top, or middle outward to both sides. As shown in fig. 31, after the video cards are displayed by character relationship, automatic playback starts after a 3-second countdown. As shown in fig. 32, the video cards slide automatically to the middle position in turn for automatic playback; for example, the character relationship of the clip currently playing at the middle position is "couple". As shown in fig. 33, the video cards may be played automatically from top to bottom; for example, the character relationship of the currently playing clip is "sisters". In addition, after the video content is expanded, all clips can be controlled to play automatically at the same time so that the user can take in the content at once; the clip that slides into the middle area plays with sound while the others play muted by default. The user can click a clip to trigger playback: the clicked clip is activated and enlarged and its sound is turned on, while the other clips are muted.
For a video cover in the activated expanded state, the user can instead slide the cover to the right to expand content clips with different picture effects, that is, clips matched by picture content recognition according to attributes including but not limited to the degree of excitement, the degree of visual beauty, the grandeur of scenes, the degree of disaster, and the like. These may include, but are not limited to, fixed-length video clips (for example, 5 seconds each), variable-length clips (for example, 5 seconds, 10 seconds, and so on, each clip possibly different), key frames only, or a mixed presentation of video clips and key frames. As shown in fig. 34, for a video cover in the activated expanded state, the user may slide to the right to expand the individual video clips of the video the cover points to along the "picture effect" dimension, which may specifically include, but is not limited to, picture effects such as the degree of excitement and the degree of visual beauty. As shown in fig. 35, in a floating-layer interface with a certain transparency, the video clips are displayed in sequence, each belonging to the "picture effect" dimension, specifically the excitement picture effect. Each clip is displayed as a video card, with a quantized value of the degree shown in the card's upper left corner; for example, the quantized excitement value of the corresponding clip is 10. The user can slide the video cards up and down to browse different clips, or play a clip directly inside its card. As shown in fig. 36, in a floating-layer interface with a certain transparency, the video clips are displayed in sequence, each belonging to the "picture effect" dimension, specifically the visual-beauty picture effect. Each clip is displayed as a video card, with the quantized visual-beauty value shown in the card's upper left corner; for example, the quantized visual-beauty value of the currently selected clip is 10. The user can slide the video cards up and down to browse different clips, or play a clip directly inside its card.
Further, the content clips can be displayed in order of a picture effect such as the degree of excitement, and can be sorted in forward, reverse, random, or other orders. The user can click the picture-effect tag to adjust the display order. Descending or ascending order can run from top to bottom, from bottom to top, or from the center point outward to both sides, and so on. As shown in fig. 37, a picture-effect tag is displayed in the upper left corner of the video card to indicate the picture effect of the clip the card points to; for example, the picture effect of the currently selected clip is excitement and its quantized excitement value is 10. The user can operate on the picture-effect tag to choose the display order: clicking it makes the terminal display a sorting-mode list including forward, reverse, and random, and selecting a mode adjusts the arrangement of the video cards. As shown in fig. 38, a picture-effect tag in the upper left corner of the video card indicates the picture effect of the clip the card points to; for example, the picture effect of the currently selected clip is visual beauty and its quantized visual-beauty value is 10. The user can likewise click the tag, and the terminal displays a sorting-mode list including forward, reverse, and random from which a mode can be selected to adjust the arrangement of the video cards.
Further, for content preview of video clips, the user may slide or click the clip content to quickly view all video clips or key frames. As shown in fig. 39, for the video cards expanded along the excitement picture-effect dimension, when a clip slides to the middle of the interface its card can be enlarged and the clip it points to is played automatically while the other clips remain paused; for example, the clip at the middle position with an excitement value of 10 can be played. A clip can also be triggered to play by clicking: a clip visible to the user can be clicked and played in place, and the playing video can be shown in an activated, enlarged state. As shown in fig. 40, among the video cards expanded along the excitement picture-effect dimension, the user can click a card whose excitement value is 9, enlarge it, and play the clip it points to inside that card. The clips can also be played automatically a certain time (such as 3 seconds) after the content is expanded; the playing order can follow the content ordering. Specifically, the video cards can be controlled to slide automatically to the middle position for playback, or to play in a positional order such as top to bottom, bottom to top, or middle outward to both sides. As shown in fig. 41, after the video cards are displayed along the excitement picture-effect dimension, automatic playback starts after a 3-second countdown. As shown in fig. 42, the video cards slide automatically to the middle position in turn for automatic playback; for example, the excitement value of the clip currently playing at the middle position is 10. As shown in fig. 43, the video cards may be played automatically from top to bottom; for example, the excitement value of the currently playing clip is 9. In addition, after the video content is expanded, all clips can be controlled to play automatically at the same time so that the user can take in the content at once; the clip that slides into the middle area plays with sound while the others play muted by default. The user can click a clip to trigger playback: the clicked clip is activated and enlarged and its sound is turned on, while the other clips are muted.
For the cover of image-text content, such as novel content or comic content, the user can long-press the cover to activate it, which then supports quick preview of the image-text content along different dimensions. As shown in fig. 44, the cover list of text content displays the covers of the various novels the text content platform offers for reading; a text content cover includes an image part, a title part, reading information, and a scoring part. The image part may contain an image associated with the text content, the title part its title, the reading information its browsing statistics, and the scoring part its rating. When a user wants to learn about a certain piece of text content, the user can long-press its cover. The long-pressed text content cover selected by the user is activated and expanded and is displayed in the cover list in a preview activation manner; as shown in fig. 45, the long-pressed text cover is enlarged and highlighted and a layered-shadow display effect is generated to indicate to the user which text content cover has been selected and triggered.
For a text content cover in the activated expanded state, the user can slide upward to view the hot chapters of the text content; the chapters can specifically be displayed sorted by comment quantity. As shown in fig. 46, for a text content cover in the activated expanded state, the user may slide upward to expand the chapters of the novel the cover points to along the "hot chapter" popularity dimension; hot chapters may specifically be determined from the comment quantity of each chapter. As shown in fig. 47, in a floating-layer interface with a certain transparency, the text chapters are displayed in sequence, each belonging to the "hot chapter" popularity dimension. Each text chapter is displayed as a chapter card, with the chapter's comment quantity shown in the card's upper left corner; for example, the comment quantity of the currently selected chapter is 100. In addition, the user can adjust the display order, which can be sorted in forward, reverse, random, or other forms: the user may click the hot-chapter tag or the highlight-chapter tag to trigger an adjustment of the order in which the chapters are presented, and descending or ascending order can run from top to bottom, from bottom to top, or from the center point outward to both sides, and so on. Specifically, the user can slide the chapter cards up and down to browse different chapters, or read a chapter directly inside its card. As shown in fig. 48, a comment-quantity tag is displayed in the upper left corner of the chapter card to indicate the comment quantity of the chapter the card points to, for example 100 for the currently selected chapter. The user can operate on the comment-quantity tag to choose the display order: clicking it makes the terminal display a sorting-mode list including forward, reverse, and random, and selecting a mode adjusts the arrangement of the chapters.
Further, for a text content cover in the activated expanded state, the user can also slide downward to view the highlight chapters of the text content; the chapters can specifically be displayed based on a highlight index. As shown in fig. 49, for a text content cover in the activated expanded state, the user may slide downward to expand the chapter segments of the novel the cover points to along the "highlight" dimension, specifically determined from each chapter segment's highlight index. As shown in fig. 50, in a floating-layer interface with a certain transparency, the text chapters are displayed in sequence, each belonging to the "highlight" dimension. Each chapter is displayed as a chapter card, with the chapter's highlight index shown in the card's upper left corner; for example, the highlight index of the currently selected chapter is 10. The user can slide the chapter cards up and down to browse different chapters, or read a chapter directly inside its card. As shown in fig. 51, a highlight-index tag is displayed in the upper left corner of the chapter card to indicate the highlight index of the chapter the card points to, for example 10 for the currently selected chapter. The user can operate on the highlight-index tag to choose the display order: clicking it makes the terminal display a sorting-mode list including forward, reverse, and random, and selecting a mode adjusts the arrangement of the chapters.
Further, for content preview of text chapters, the user may slide or click the chapter content to quickly view all page content. As shown in fig. 52, for the chapter cards expanded along the "hot chapter" popularity dimension, when a chapter slides to the middle of the interface its card can be enlarged to preview the chapter it points to; for example, the chapter at the middle position with a comment quantity of 100 can be enlarged and previewed. An enlarged preview can also be triggered by clicking: the user can click a visible chapter and preview it enlarged in place. As shown in fig. 53, among the chapter cards expanded along the "hot chapter" popularity dimension, the user can click a chapter with a comment quantity of 90, which is enlarged to preview the content the card points to. Pages can also be played automatically a certain time (such as 3 seconds) after the text content is expanded; the playing order can follow the content ordering. Specifically, the chapter cards can be controlled to slide automatically to the middle position, enlarge, and stay for a certain time, which can be set according to the user's reading speed, for example 3 seconds; the cards can also be enlarged and held in a positional order such as top to bottom, bottom to top, or middle outward to both sides. As shown in fig. 54, after the chapters are displayed along the "hot chapter" popularity dimension, automatic playback starts after a 3-second countdown. As shown in fig. 55, the chapter cards slide automatically to the middle position in turn for enlarged automatic playback, and after a 3-second stay the next page of text is played; for example, the comment quantity of the chapter currently enlarged at the middle position is 100. As shown in fig. 56, the chapter cards may be played automatically from top to bottom, each page being enlarged and held for 3 seconds before switching to the next page; for example, the comment quantity of the currently playing chapter card is 90.
Further, for content preview of text chapters, when the preview page can completely display one page of text content, the text can be previewed during the preview process either by clicking to enlarge it or by automatic playback; with automatic playback, the next preview page is played automatically after n seconds, where n is set from the user's per-page reading speed. When the preview page cannot completely display one page of text content, considering that the preview page is not full-screen and the page cannot be shown properly, the text content can be scrolled from top to bottom after the preview page is selected; the scrolling speed is calculated from the user's habits and typical reading speed, and once the content of the current page has been fully displayed, playback automatically moves on to the next preview page. As shown in fig. 57, when a viewing page completely presents one page of text, each page is automatically played, enlarged and held for n seconds, and then the next viewing page is played automatically. As shown in fig. 58, when one page of text cannot be completely displayed, the text in the page can be scroll-played and the browsing pages played in sequence; specifically, the text is scrolled until it has been fully shown, and playback then switches to the next preview page.
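A sketch of how the dwell time and scrolling speed could be derived from an estimated reading speed. The character-per-second rate and minimum dwell are assumptions for illustration; the text only says these values are based on the user's reading habits.

```python
def page_dwell_seconds(char_count: int, chars_per_second: float = 15.0) -> float:
    """Dwell time for a fully visible preview page, from the user's reading speed."""
    return max(3.0, char_count / chars_per_second)

def scroll_speed_px_per_s(page_height_px: int, char_count: int,
                          chars_per_second: float = 15.0) -> float:
    """When a page cannot be shown in full, scroll it top to bottom at a speed that
    finishes exactly when the estimated reading time elapses."""
    return page_height_px / page_dwell_seconds(char_count, chars_per_second)

# Example: a 2400-pixel-tall page with 600 characters scrolls at 60 px/s for 40 s.
print(scroll_speed_px_per_s(2400, 600))
```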
For content preview of text chapters, a preview page can also contain multi-page content, with each viewing page composed of several pages; for example, each preview page may represent a chapter, such as an attractive chapter suited to quick preview. During the preview process, whether the user clicks to enlarge or automatic playback enlarges the page, the content of several chapters can be played inside the preview page, using page turning, up-and-down scrolling, and similar modes, including but not limited to automatic playback or manual sliding, where the automatic playback speed can be set based on the user's typical reading speed. Manual page turning can also be supported: the user can click a page selector to turn pages manually, either choosing from a selector list or expanding all secondary pages and clicking or sliding to choose one. As shown in fig. 59, when one page of text can be completely displayed, pages 1 and 2 of chapter 5 can be played in sequence up to page n, realizing playback of multi-chapter content. As shown in fig. 60, when one page of text cannot be completely displayed, page 1 of chapter 5 can be scroll-played from top to bottom, after which pages 1 and 2 of chapter 5 are played in sequence up to page n; that is, the current page is scroll-played first and the next page is played once it finishes. As shown in fig. 61, the user may click to expand a page selector and manually choose which page to enlarge. As shown in fig. 62, the user can click an expanded page and manually select, among the expanded pages, the page to switch to for enlarged viewing.
For image content, such as comic content, the user can long-press the cover of the image content to activate it, after which the comic content can be previewed quickly along different dimensions; the display mode of image content can be the same as that of text content. As shown in fig. 63, the cover list of image content displays the covers of the various comics the comic content platform offers for reading; a comic content cover includes an image part and a title part, where the image part may contain an image associated with the comic content and the title part its title. When a user wants to learn about a certain comic, the user can long-press its cover. The long-pressed comic content cover selected by the user is activated and expanded and is displayed in the cover list in a preview activation manner; as shown in fig. 64, the long-pressed comic cover is highlighted and a layered-shadow display effect is generated to indicate to the user which comic content cover has been selected and triggered. As shown in fig. 65, for a comic content cover in the activated expanded state, the user may slide upward to expand the chapter segments of the comic the cover points to along the "highlight" dimension, where highlight content may specifically be determined from the bullet-screen quantity of each chapter segment. As also shown in fig. 65, in a floating-layer interface with a certain transparency, the comic chapters are displayed in sequence, each belonging to the "highlight" dimension. Each comic chapter is displayed as a chapter card, with the chapter's bullet-screen quantity shown in the card's upper left corner; for example, the bullet-screen quantity of the currently selected comic chapter is 100.
Further, as shown in fig. 66, the user authorizes the recording, analysis, storage, and similar processing of information such as viewing content and viewing behavior. Specifically, a client installed on the terminal displays authorization notification information; the user performs an authorization operation on it, for example by clicking to allow their behavior to be recorded and analyzed. The client then records the user's behavior data and uploads it to the server, and the server records and analyzes the behavior data.
In the content preview provided in this embodiment, as shown in fig. 67, a user browsing long-video content selects a video cover of interest and long-presses it; a reasonable long-press duration, for example 2 or 3 seconds, is set based on the device and user group. The client displays the cover in a selected state; the user then slides the cover with a gesture such as up or down (the gesture is not limited), and the client prompts the user that sliding allows quick browsing of related content, expanded along dimensions including but not limited to "guess you like", character relationship, and picture effect. When the user finishes the sliding operation, the client sends a data request to the server; the server receives the request and indexes content along the corresponding dimension. For example, for the user interest dimension, the user's preference data is matched against the content clips of the long video and the order of the user's preferred clips is determined; for the character relationship dimension, character relationship recognition is matched against the content clips and the clips are ordered by a certain rule; for the picture dimension, different degrees of excitement, beauty, and the like are judged from picture recognition and the clips are sorted in a certain order. The server returns the indexed data to the client, and the client presents the video clip list in a certain order. The user can select a different presentation order, such as forward, reverse, or random, and the client refreshes the clip display order. The user slides to select a favorite clip to play and view, or automatic playback starts after a certain time and the client plays the relevant content. When the user clicks a blank area, the client's video preview floating layer disappears and the content preview is exited.
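The client-to-server request in this flow could be sketched as follows. The endpoint URL, field names, and response shape are assumptions for illustration only, not part of the disclosed design.

```python
import json
from urllib import request

def fetch_preview_clips(content_id: str, dimension: str, order: str = "forward"):
    """Ask the server to index clips for one classification dimension and return them."""
    payload = json.dumps({"content_id": content_id,
                          "dimension": dimension,   # e.g. "guess_you_like"
                          "order": order}).encode("utf-8")
    req = request.Request("https://example.com/api/preview_clips",  # hypothetical endpoint
                          data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # server answers with an ordered clip list
        return json.loads(resp.read())["clips"]
```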
Further, regarding the relationship between the expansion of different content dimensions and the gestures that trigger them: different products can, based on their content characteristics and user habits, assign different gesture directions after the cover is long-pressed, so that different content clips are displayed as floating layers, and the user can also customize the gestures on the product settings page to quickly preview content in floating-layer form. As shown in fig. 68, default settings may exist, or the user may define custom settings, specifically the correspondence between operation gestures and the content displayed. Operation gestures may include sliding up, down, left, and right, and the displayed content may include "guess you like", character relationship, picture effect, and the like.
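A brief sketch of default gesture settings merged with per-user overrides saved from the settings page; the names and defaults are illustrative, not taken from the source.

```python
DEFAULT_GESTURES = {
    "swipe_up": "guess_you_like",
    "swipe_down": "character_relation",
    "swipe_right": "picture_effect",
}

def effective_gestures(user_overrides=None) -> dict:
    """Merge user-defined gesture settings over the product defaults."""
    merged = dict(DEFAULT_GESTURES)
    if user_overrides:
        merged.update(user_overrides)
    return merged

# Example: a user who prefers highlights on swipe-down.
print(effective_gestures({"swipe_down": "highlight"}))
```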
Further, previewing image-text content, which includes web literature, comics, and the like, differs slightly in technique from previewing video content; the following mainly describes the per-page image-text preview process and the way multiple pages are switched. If the preview page is single-page content and can completely display one page of text, then, as shown in fig. 69, when the user clicks the preview page during the preview process, the client displays the text information of the corresponding page and enlarges the preview page; if the user selects automatic preview, the client automatically plays the next preview page after n seconds, set from the user's per-page reading speed. If the preview page cannot completely display one page of text, then, as shown in fig. 70, when the user clicks the preview page the client displays the text information of the corresponding page and enlarges it, automatically scroll-plays the image-text information in the page, and stops once the information in the current preview page has been played; if the user selects automatic preview, the client automatically activates the next preview page after finishing the current one and plays its image-text information. If the preview page is multi-page content, then, as shown in fig. 71, when the user clicks the preview page the client displays the image-text information of the corresponding page (a chapter album or the like) and enlarges it, automatically scroll-plays each page of content in the chapter, and stops after the last page during automatic playback; the user can also slide manually, select a tab manually, and so on. If the user selects automatic preview, the client automatically activates the next preview page after finishing the image-text information (chapter album or the like) in the current one and plays the image-text information it contains.
Further, identifying "guess you like" content can be implemented with a personalized recommendation system. A recommendation system is essentially a technical means of finding information of interest for a user from massive information when the user's need is not explicit. Using machine learning, it combines user information (region, age, gender, and so on), item information (price, place of production, and so on), and the user's past behavior toward items (purchases, clicks, plays, and so on) to build a user interest model and provide accurate personalized recommendations. User data is aggregated, namely the user's attribute information (age, gender, education, and so on), behavior data (browsing, commenting, forwarding, deleting, and so on), and user relations (family, friends, and so on), together with content data, namely the tags, content descriptions, and so on of the different clips of a given long video. The user data and content data are then processed through a recall layer, a ranking layer, and re-ranking to determine a recommendation list that matches the user's preferred content. Model algorithms that may be employed include UserCF (user-based collaborative filtering), ItemCF (item-based collaborative filtering), matrix factorization (latent-vector representation), and the like. A recommendation list is formed on this basis, the video clips in the list are displayed on the floating layer in a certain order, and the user can select different display and playback orders. As shown in fig. 72, the data set for personalized recommendation includes user data, video data, and context data; processing in the recommendation system passes through a recall layer, a ranking layer, and a re-ranking layer in turn, so that the video clip set is processed against the data set to obtain the recommendation list.
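A schematic recall, ranking, and re-ranking pipeline for the "guess you like" dimension. The scoring here is a deliberate placeholder (tag overlap and a per-tag diversity cap); a real system would plug in UserCF, ItemCF, or matrix-factorization models as described above.

```python
def recall(user_profile, clip_pool, k=100):
    # Cheap filter: keep clips whose tags overlap the user's interest tags.
    return [c for c in clip_pool
            if set(c["tags"]) & set(user_profile["interest_tags"])][:k]

def rank(user_profile, candidates):
    # Placeholder relevance score: number of shared tags (stand-in for a CF model).
    return sorted(candidates,
                  key=lambda c: len(set(c["tags"]) & set(user_profile["interest_tags"])),
                  reverse=True)

def rerank(ranked, max_per_tag=2):
    # Simple diversity re-ranking: cap the number of clips per leading tag.
    seen, out = {}, []
    for c in ranked:
        tag = c["tags"][0] if c["tags"] else ""
        if seen.get(tag, 0) < max_per_tag:
            out.append(c)
            seen[tag] = seen.get(tag, 0) + 1
    return out

user = {"interest_tags": ["action", "romance"]}
pool = [{"id": 1, "tags": ["action"]}, {"id": 2, "tags": ["comedy"]},
        {"id": 3, "tags": ["romance", "action"]}]
print(rerank(rank(user, recall(user, pool))))  # recommendation list: clip 3 then clip 1
```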
Further, regarding the processing of character relationship recognition: the relationships between people in a video are derived from how the characters address each other in the video, from the introduction of the video content, and so on. By recognizing character relationships, the video content can be decomposed into multiple video clips that are stored and ordered, for example by the closeness of the relationship or by its degree of antagonism. When the user chooses to preview video content by character relationship, the corresponding clip content is fetched from the server according to the relationship; the user can select different content orderings and preview the content quickly in manual or automatic mode. As shown in fig. 73, the data sources for character relationship recognition include character introductions, how characters address each other in the video, manual labeling, and the like; the long-video content is sliced and stored on the server, and specific character relationships can include couples, enemies, master and apprentice, close friends, and other types; the presentation of the preview clips supports multiple clip orderings, multiple preview modes, and so on.
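A hypothetical illustration of pre-sliced clips stored under the recognised relationship type and retrieved when the user previews along the relationship dimension; identifiers and the "closeness" score are invented for the example.

```python
RELATION_INDEX = {
    "couple": [{"clip_id": "ep03_12m", "closeness": 9}],
    "enemies": [{"clip_id": "ep07_41m", "closeness": 2}],
    "mentor": [{"clip_id": "ep01_05m", "closeness": 6}],
}

def clips_by_relation(relation: str, descending: bool = True):
    """Fetch the stored clips for one relationship type, ordered by closeness."""
    clips = list(RELATION_INDEX.get(relation, []))
    clips.sort(key=lambda c: c["closeness"], reverse=descending)
    return clips

print(clips_by_relation("couple"))
```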
Further, regarding the processing of picture-level recognition: in general, the content of a long video can be recognized in advance from its pictures, actions, and so on, and disassembled into video clip collections under different description dimensions, for example N collections of beautiful pictures, N collections of exciting scenes, N collections of scary shots, and so on. Specifically, techniques such as video structural analysis (dividing video into frames, superframes, shots, scenes, stories, and so on), object detection (recognizing objects such as cars), person recognition, and action recognition may be employed. Specific recognition methods include CNN (Convolutional Neural Networks) extended-network recognition, two-stream CNN recognition, LSTM (Long Short-Term Memory) recognition, 3-dimensional convolution kernel (3D CNN) methods, and the like. The resulting content is stored on the server or the client, so that when the user wants to quickly preview, say, pictures of different degrees of beauty, the corresponding video clips can be quickly pulled from the server or the client and shown on the floating layer, and the user can also switch between different orders while previewing. As shown in fig. 74, the original data is the long video; the content preview clips are obtained through data processing, video recognition, and a slicing and synthesis system, and are presented with multiple clip orderings, multiple preview modes, and so on. The data processing includes video structural analysis, object detection, person recognition, and action recognition, implemented with CNN extended-network, two-stream CNN, LSTM, or 3D CNN recognition methods, so as to generate the ordered collection of beautiful-picture clips, the ordered collection of scary shots, the ordered collection of exciting scenes, and so on.
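A toy sketch of grouping recognised segments into "beautiful picture", "exciting scene", and "scary shot" collections. The per-segment scores are assumed to come from the CNN/LSTM/3D-CNN recognition outlined above; the field names and threshold are illustrative.

```python
def build_effect_collections(segments, threshold=7):
    """Group segments into picture-effect collections based on recognition scores."""
    collections = {"beautiful": [], "exciting": [], "scary": []}
    for seg in segments:  # seg: {"id", "beauty", "excitement", "scare"} scores 0-10
        if seg["beauty"] >= threshold:
            collections["beautiful"].append(seg["id"])
        if seg["excitement"] >= threshold:
            collections["exciting"].append(seg["id"])
        if seg["scare"] >= threshold:
            collections["scary"].append(seg["id"])
    return collections

segments = [{"id": "s1", "beauty": 9, "excitement": 3, "scare": 1},
            {"id": "s2", "beauty": 4, "excitement": 10, "scare": 8}]
print(build_effect_collections(segments))
```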
When a user sees the cover of long content, including but not limited to a movie, TV series, or web novel, the user cannot be sure the content meets their needs, and neither entering a player (for video content) or a reader (for text content) to fast-forward, nor picking individual clips to play, allows quick and effective previewing. With the content preview method provided in this embodiment, when a user sees a cover, long-pressing it expands content such as movies, TV series, and web literature along multiple dimensions such as user interest, character relationship, degree of excitement, and plot development, and the user can quickly browse content fragments, including but not limited to video clips, key-frame pictures, and text paragraphs, through the different dimensions, thereby quickly and effectively learning about the long content. By rapidly presenting the multiple dimensions of long content through a long press on the cover, the method increases the speed at which users screen content, reduces the cost of repeatedly entering detail pages to check, improves the efficiency of content viewing, and improves the experience and commercial value of the product.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may include several sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and whose execution order need not be sequential; they may be executed in turn or alternately with at least part of the other steps, or with sub-steps or stages of the other steps.
Based on the same inventive concept, the embodiment of the application also provides a content preview device for realizing the content preview method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the content preview device provided below may refer to the limitation of the content preview method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 75, there is provided a content preview apparatus 7500 including: a content identification presentation module 7502, a preview trigger response module 7504, and a preview presentation module 7506, wherein:
a content identifier display module 7502 for displaying a target content identifier pointing to the target content; the target content identification is used for responding to preview triggering operations of at least two categories;
a preview trigger response module 7504, configured to display at least one clip associated with the target content in response to a preview trigger operation of the target category triggered by the target content identifier; the target category belongs to at least two categories, each fragment card points to a content preview fragment of the target content, and the fragment classification dimension of the content preview fragment pointed by each fragment card is matched with the target category;
and the preview display module 7506 is configured to preview the pointed content preview segment in the target segment card in the at least one segment card.
In one embodiment, the snippet classification dimension includes at least one of a user interest dimension, a persona information dimension, a picture effect dimension, a story type dimension, or a snippet popularity dimension.
In one embodiment, the preview trigger response module 7504 is further configured to display the target content identifier in a preview activation manner in response to an activation operation triggered on the target content identifier; and in the process of displaying the target content identification in a preview activation mode, responding to a preview triggering operation of a target category triggered by the target content identification, and displaying at least one fragment card associated with the target content.
In one embodiment, the preview trigger response module 7504 is further configured to, in response to a preview triggering operation of the target category triggered by the target content identifier, arrange and display the at least one fragment card associated with the target content according to a classification dimension arrangement condition, where the classification dimension arrangement condition matches the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs.
In one embodiment, the preview trigger response module 7504 is further configured to, in response to a preview triggering operation of the target category triggered by the target content identifier, display the at least one fragment card associated with the target content arranged by distribution position, where the distribution position is the position, within the target content, of the content preview fragment pointed to by each fragment card.
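A small sketch, under assumed types and field names, of the two arrangement strategies just described: arranging fragment cards by classification dimension and arranging them by distribution position.

```typescript
// Assumed card metadata; the field names are illustrative.
interface CardInfo {
  fragmentId: string;
  dimension: string;    // fragment classification dimension of the pointed-to fragment
  startSeconds: number; // distribution position: where the fragment sits in the target content
}

// Arrangement by classification dimension: cards of the same dimension are grouped together.
function arrangeByDimension(cards: CardInfo[], dimensionOrder: string[]): CardInfo[] {
  return [...cards].sort(
    (a, b) => dimensionOrder.indexOf(a.dimension) - dimensionOrder.indexOf(b.dimension),
  );
}

// Arrangement by distribution position: cards follow the order of their fragments in the content.
function arrangeByPosition(cards: CardInfo[]): CardInfo[] {
  return [...cards].sort((a, b) => a.startSeconds - b.startSeconds);
}

const demoCards: CardInfo[] = [
  { fragmentId: "f3", dimension: "pictureEffect", startSeconds: 1800 },
  { fragmentId: "f1", dimension: "userInterest", startSeconds: 60 },
  { fragmentId: "f2", dimension: "userInterest", startSeconds: 600 },
];
console.log(arrangeByDimension(demoCards, ["userInterest", "pictureEffect"]).map(c => c.fragmentId)); // ["f1","f2","f3"]
console.log(arrangeByPosition(demoCards).map(c => c.fragmentId));                                     // ["f1","f2","f3"]
```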
In one embodiment, the target content comprises video content, and the preview trigger response module 7504 is further configured to perform at least one of the following: displaying, in response to a preview triggering operation of a first category triggered by the target content identifier, at least one first fragment card associated with the target content, where the content preview fragment pointed to by each first fragment card belongs to the user interest dimension; displaying, in response to a preview triggering operation of a second category triggered by the target content identifier, at least one second fragment card associated with the target content, where the content preview fragment pointed to by each second fragment card belongs to the character information dimension; and displaying, in response to a preview triggering operation of a third category triggered by the target content identifier, at least one third fragment card associated with the target content, where the content preview fragment pointed to by each third fragment card belongs to the picture effect dimension.
In one embodiment, the target content comprises image-text content, and the preview trigger response module 7504 is further configured to perform at least one of the following: displaying, in response to a preview triggering operation of a fourth category triggered by the target content identifier, at least one fourth fragment card associated with the target content, where the content preview fragment pointed to by each fourth fragment card belongs to the fragment popularity dimension; and displaying, in response to a preview triggering operation of a fifth category triggered by the target content identifier, at least one fifth fragment card associated with the target content, where the content preview fragment pointed to by each fifth fragment card belongs to the plot type dimension.
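One way to read the two embodiments above is as a mapping from trigger categories to fragment classification dimensions per content type; the sketch below makes that mapping explicit. The category labels and dimension names are placeholders, not values defined by the disclosure.

```typescript
// Assumed category-to-dimension mapping for video and image-text content.
type ContentType = "video" | "imageText";

const CATEGORY_DIMENSION_MAP: Record<ContentType, Record<string, string>> = {
  video: {
    category1: "userInterest",
    category2: "characterInfo",
    category3: "pictureEffect",
  },
  imageText: {
    category4: "popularity",
    category5: "plotType",
  },
};

// Resolves which fragment classification dimension the triggered category maps to.
function dimensionForCategory(contentType: ContentType, category: string): string | undefined {
  return CATEGORY_DIMENSION_MAP[contentType][category];
}

console.log(dimensionForCategory("video", "category2"));     // "characterInfo"
console.log(dimensionForCategory("imageText", "category5")); // "plotType"
```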
In one embodiment, the device further comprises a card label display module, configured to display, for each fragment card, a card label associated with that fragment card, and to display, in the card label, fragment information of the content preview fragment pointed to by the associated fragment card, where the fragment information includes at least one of the distribution position of the content preview fragment in the target content, or classification dimension quantization information of the content preview fragment relative to its fragment classification dimension.
In one embodiment, the device further comprises a label response module, configured to display a sorting operation area in response to a triggering operation on the card label, and to sort and display the at least one fragment card, in response to a sorting control operation triggered in the sorting operation area, according to the sorting manner specified by the sorting control operation.
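A sketch, with assumed field names, of a card label carrying the fragment information described above, together with sorting by a manner the user picks in the sorting operation area.

```typescript
// Assumed card-label data and sorting manners; all names are illustrative.
interface CardLabel {
  fragmentId: string;
  startSeconds: number;   // distribution position of the fragment in the target content
  dimensionScore: number; // classification-dimension quantization info, e.g. a popularity score
}

type SortingManner = "byPosition" | "byScoreDescending";

function sortCards(labels: CardLabel[], manner: SortingManner): CardLabel[] {
  const sorted = [...labels];
  if (manner === "byPosition") {
    sorted.sort((a, b) => a.startSeconds - b.startSeconds);
  } else {
    sorted.sort((a, b) => b.dimensionScore - a.dimensionScore);
  }
  return sorted;
}

const labels: CardLabel[] = [
  { fragmentId: "f1", startSeconds: 300, dimensionScore: 0.91 },
  { fragmentId: "f2", startSeconds: 60, dimensionScore: 0.55 },
];
console.log(sortCards(labels, "byScoreDescending").map(l => l.fragmentId)); // ["f1", "f2"]
console.log(sortCards(labels, "byPosition").map(l => l.fragmentId));        // ["f2", "f1"]
```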
In one embodiment, the preview display module 7506 is further configured to display, in response to a preview display event for a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment in the target fragment card.
In one embodiment, the preview display module 7506 is further configured to perform at least one of the following: displaying, in response to a target fragment card of the at least one fragment card moving to a preview display position, a preview of the pointed-to content preview fragment in the target fragment card; displaying, in response to a preview display operation triggered on a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment in the target fragment card; in response to a preview display condition being met, sequentially moving the pointed-to content preview fragments to the fragment cards at the preview display position among the at least one fragment card and sequentially displaying previews of them; and, in response to a preview display condition being met, sequentially displaying previews of the pointed-to content preview fragments in target fragment cards of the at least one fragment card.
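The alternatives above can be read as different events that start a preview; the sketch below dispatches on such events. The event names and the dispatcher are assumptions for illustration only.

```typescript
// Assumed preview display events; none of these names come from the disclosure.
type PreviewEvent =
  | { kind: "movedToPreviewPosition"; cardId: string }
  | { kind: "previewOperation"; cardId: string }        // e.g. an explicit tap on a card
  | { kind: "conditionMet"; orderedCardIds: string[] }; // e.g. the cards have been displayed for a set time

function startPreview(cardId: string): void {
  console.log(`preview the fragment pointed to by card ${cardId}`);
}

function handlePreviewEvent(event: PreviewEvent): void {
  switch (event.kind) {
    case "movedToPreviewPosition":
    case "previewOperation":
      startPreview(event.cardId);
      break;
    case "conditionMet":
      // Sequentially preview, card by card, the fragments the cards point to.
      event.orderedCardIds.forEach(cardId => startPreview(cardId));
      break;
  }
}

handlePreviewEvent({ kind: "conditionMet", orderedCardIds: ["card-1", "card-2"] });
```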
In one embodiment, the target content comprises video content, and the preview display module 7506 is further configured to play, for a target fragment card at the preview display position among the at least one fragment card, the audio and video of the pointed-to content preview fragment in the target fragment card, and to play, for each muted fragment card not at the preview display position among the at least one fragment card, the pointed-to content preview fragment in that muted fragment card with the sound muted.
In one embodiment, the device further comprises a selected-play module, configured to play, in response to a selected-play operation triggered on the at least one fragment card, the audio and video of the pointed-to content preview fragment in the selected fragment card chosen by the selected-play operation, and to play the pointed-to content preview fragments muted in the fragment cards other than the selected fragment card.
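A minimal sketch of the audio-focus rule in the two embodiments above: one focused card (the one at the preview display position, or the one the user selects) plays with sound while every other card keeps playing muted. The types are assumptions.

```typescript
// Assumed playback state per fragment card.
interface PlayableCard {
  id: string;
  muted: boolean;
}

// The focused card plays audio and video; all other cards keep playing, but muted.
function applyAudioFocus(cards: PlayableCard[], focusedCardId: string): PlayableCard[] {
  return cards.map(card => ({ ...card, muted: card.id !== focusedCardId }));
}

const playing: PlayableCard[] = [
  { id: "card-1", muted: false },
  { id: "card-2", muted: false },
];
console.log(applyAudioFocus(playing, "card-2"));
// [ { id: "card-1", muted: true }, { id: "card-2", muted: false } ]
```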
In one embodiment, the target content comprises image-text content, and the preview display module 7506 is further configured to, in a target fragment card of the at least one fragment card, sequentially display previews of the pointed-to content preview fragments in an enlarged display manner according to a preview interval duration.
In one embodiment, the preview display module 7506 is further configured to sequentially display previews of the pointed-to content preview fragments in the enlarged display manner according to a first preview interval duration, and, for each content preview fragment being preview-displayed, to sequentially display the at least one preview picture included in that content preview fragment according to a second preview interval duration.
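The two-level timing just described can be sketched as follows: fragments advance on the first interval, and the pictures inside the current fragment advance on the second interval. Everything here, including the millisecond values, is an assumption for illustration.

```typescript
// Assumed image-text fragment with its preview pictures.
interface ImageTextFragment {
  id: string;
  pictures: string[]; // identifiers or URLs of the preview pictures in this fragment
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function playImageTextPreview(
  fragments: ImageTextFragment[],
  firstIntervalMs: number,  // interval between enlarged fragments
  secondIntervalMs: number, // interval between pictures within one fragment
): Promise<void> {
  for (const fragment of fragments) {
    console.log(`enlarge fragment ${fragment.id}`);
    for (const picture of fragment.pictures) {
      console.log(`  show picture ${picture}`);
      await sleep(secondIntervalMs);
    }
    await sleep(firstIntervalMs);
  }
}

playImageTextPreview(
  [{ id: "para-1", pictures: ["p1", "p2"] }, { id: "para-2", pictures: ["p3"] }],
  1500,
  500,
);
```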
In one embodiment, the target content comprises image-text content, and the content preview fragment includes at least one preview picture; the device further comprises a preview picture operation module, configured to display, for each fragment card, a preview picture operation entry for the content preview fragment pointed to by that fragment card, and to display, in response to a preview picture selection operation triggered through the preview picture operation entry, the preview picture selected by the preview picture selection operation.
In one embodiment, the preview trigger response module 7504 is further configured to display, in response to a preview triggering operation of the target category triggered by the target content identifier, a floating layer area associated with the target content identifier, and to display the at least one fragment card associated with the target content in the floating layer area.
In one embodiment, the device further comprises a fragment card generation module, configured to generate a fragment acquisition request based on the preview triggering operation and send the fragment acquisition request to a server, the fragment acquisition request being used for instructing the server to determine, based on the fragment acquisition request, content preview fragments belonging to the fragment classification dimension and to return the content preview fragments, and to generate the respective fragment cards according to the content preview fragments returned by the server.
In one embodiment, the fragment card generation module is further configured to perform at least one of the following: determining, according to a category mapping relationship, the fragment classification dimension matching the target category and generating the fragment acquisition request according to that fragment classification dimension; or generating the fragment acquisition request according to the target category of the preview triggering operation.
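A hedged sketch of the client side of this exchange follows. The request shape, the endpoint URL, and the use of fetch are assumptions; the disclosure only specifies that a fragment acquisition request is built from the preview triggering operation (optionally after mapping the category to a dimension) and sent to the server.

```typescript
// Assumed request shape; field names are illustrative.
interface FragmentAcquisitionRequest {
  contentId: string;
  dimension?: string;      // present when the client resolves the category itself
  targetCategory?: string; // present when the server is expected to resolve it
}

// Assumed category mapping relationship.
const CATEGORY_TO_DIMENSION: Record<string, string> = {
  category1: "userInterest",
  category2: "characterInfo",
};

function buildRequest(contentId: string, targetCategory: string, resolveOnClient: boolean): FragmentAcquisitionRequest {
  if (resolveOnClient) {
    return { contentId, dimension: CATEGORY_TO_DIMENSION[targetCategory] };
  }
  return { contentId, targetCategory };
}

// Hypothetical call to a preview-fragment endpoint; the URL is a placeholder.
async function requestFragments(request: FragmentAcquisitionRequest): Promise<unknown> {
  const response = await fetch("https://example.com/api/preview-fragments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  return response.json(); // the returned fragments would then be turned into fragment cards
}

requestFragments(buildRequest("movie-42", "category1", true)).catch(() => {
  // A network failure is expected when running this sketch against the placeholder URL.
});
```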
In one embodiment, the fragment classification dimension includes the user interest dimension, and the device further comprises an interest matching module, configured to acquire account attribute information and account behavior information of a user account, and to perform interest matching between the account attribute information and the account behavior information and each content segment in the target content, to obtain at least one content preview fragment belonging to the user interest dimension.
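A deliberately simple sketch of interest matching: each content segment carries tags, the account's attribute and behavior information is reduced to tags, and segments whose overlap exceeds a threshold become preview fragments of the user interest dimension. The scoring rule is an assumption; the disclosure does not specify one.

```typescript
// Assumed tag-based representation of content segments and the user account.
interface ContentSegment {
  id: string;
  tags: string[]; // e.g. genres, actors, topics attached to this segment
}

interface AccountProfile {
  attributeTags: string[]; // e.g. declared preferences from account attribute information
  behaviorTags: string[];  // e.g. tags of recently watched or read content
}

function interestScore(segment: ContentSegment, profile: AccountProfile): number {
  const interestTags = new Set([...profile.attributeTags, ...profile.behaviorTags]);
  const hits = segment.tags.filter(tag => interestTags.has(tag)).length;
  return segment.tags.length === 0 ? 0 : hits / segment.tags.length;
}

function selectInterestFragments(segments: ContentSegment[], profile: AccountProfile, threshold = 0.5): ContentSegment[] {
  return segments.filter(segment => interestScore(segment, profile) >= threshold);
}

const profile: AccountProfile = { attributeTags: ["sci-fi"], behaviorTags: ["space", "thriller"] };
const segments: ContentSegment[] = [
  { id: "s1", tags: ["sci-fi", "space"] },
  { id: "s2", tags: ["romance", "comedy"] },
];
console.log(selectInterestFragments(segments, profile).map(s => s.id)); // ["s1"]
```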
In one embodiment, the device further comprises an authorization module, configured to display authorization notification information for the user account, and, in response to the user account confirming authorization of the authorization notification information, to acquire the account behavior information of the user account during the activity of the user account.
In one embodiment, the fragment classification dimension includes the character information dimension, and the device further comprises a character information recognition module, configured to perform character information recognition on the target content to obtain character information in the target content, and to slice the target content based on the character information to obtain at least one content preview fragment belonging to the character information dimension.
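A sketch of one plausible slicing rule: contiguous shots recognized as featuring the same character are merged into one preview fragment for that character. The shot annotations are assumed to come from an upstream recognition step; the merging rule itself is an assumption.

```typescript
// Assumed per-shot character annotation produced by character information recognition.
interface Shot {
  startSeconds: number;
  endSeconds: number;
  character: string; // main character recognized in this shot
}

interface CharacterFragment {
  character: string;
  startSeconds: number;
  endSeconds: number;
}

function sliceByCharacter(shots: Shot[]): CharacterFragment[] {
  const fragments: CharacterFragment[] = [];
  for (const shot of shots) {
    const last = fragments[fragments.length - 1];
    if (last && last.character === shot.character && last.endSeconds === shot.startSeconds) {
      last.endSeconds = shot.endSeconds; // extend the current character's fragment
    } else {
      fragments.push({ character: shot.character, startSeconds: shot.startSeconds, endSeconds: shot.endSeconds });
    }
  }
  return fragments;
}

const shots: Shot[] = [
  { startSeconds: 0, endSeconds: 30, character: "A" },
  { startSeconds: 30, endSeconds: 55, character: "A" },
  { startSeconds: 55, endSeconds: 90, character: "B" },
];
console.log(sliceByCharacter(shots));
// [ { character: "A", startSeconds: 0, endSeconds: 55 }, { character: "B", startSeconds: 55, endSeconds: 90 } ]
```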
In one embodiment, the fragment classification dimension includes the picture effect dimension, and the device further comprises a content segment classification module, configured to slice the target content to obtain content segments, to perform picture feature recognition on each content segment to obtain the segment picture features of each content segment, and to classify the content segments based on the segment picture features to obtain at least one content preview fragment belonging to the picture effect dimension.
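A sketch of classification by picture effect under assumed features: each content segment is reduced to placeholder picture features (average brightness and a motion score), and segments above a threshold are kept as preview fragments of the picture effect dimension. A real system would compute such features from the video frames; the thresholds here are arbitrary.

```typescript
// Assumed, pre-computed picture features per content segment.
interface SegmentFeatures {
  segmentId: string;
  avgBrightness: number; // 0..1 placeholder picture feature
  motionScore: number;   // 0..1 placeholder picture feature
}

// Keep visually dynamic, reasonably bright segments as picture-effect previews.
function classifyByPictureEffect(features: SegmentFeatures[], motionThreshold = 0.6, brightnessThreshold = 0.3): string[] {
  return features
    .filter(f => f.motionScore >= motionThreshold && f.avgBrightness >= brightnessThreshold)
    .map(f => f.segmentId);
}

const features: SegmentFeatures[] = [
  { segmentId: "s1", avgBrightness: 0.8, motionScore: 0.9 },
  { segmentId: "s2", avgBrightness: 0.2, motionScore: 0.7 },
  { segmentId: "s3", avgBrightness: 0.6, motionScore: 0.3 },
];
console.log(classifyByPictureEffect(features)); // ["s1"]
```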
In one embodiment, as shown in fig. 76, there is provided a content preview device 7600, which includes a content identifier display module 7602, a preview response module 7604, and a preview display module 7606, wherein:
the content identifier display module 7602 is configured to display a target content identifier pointing to target content;
the preview response module 7604 is configured to display, in response to a preview triggering operation of a target category triggered by the target content identifier, at least one fragment card associated with the target content, where each fragment card points to a content preview fragment of the target content and the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs matches the target category; and
the preview display module 7606 is configured to display, in a target fragment card of the at least one fragment card, a preview of the content preview fragment pointed to by the target fragment card.
The respective modules in the content preview devices described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor of a computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in fig. 77. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a content preview method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad provided on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 77 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine some components, or have a different arrangement of components.
In one embodiment, a computer device is also provided, which includes a memory and a processor, the memory storing a computer program, where the processor, when executing the computer program, implements the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided in the present application may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, the RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided in the present application may include at least one of a relational database and a non-relational database; the non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the application and are described in detail, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (25)

1. A method of content previewing, the method comprising:
displaying a target content identifier pointing to target content; the target content identifier is used for responding to preview triggering operations of at least two categories;
responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying at least one fragment card associated with the target content; the target category belongs to the at least two categories, each fragment card points to a content preview fragment of the target content, and the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs matches the target category;
and displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment.
2. The method of claim 1, wherein the fragment classification dimension includes at least one of a user interest dimension, a character information dimension, a picture effect dimension, a plot type dimension, or a fragment popularity dimension.
3. The method of claim 1, wherein the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises:
responding to an activation operation triggered by the target content identifier, and displaying the target content identifier in a preview activation mode;
and in the process of displaying the target content identifier in a preview activation mode, responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying at least one fragment card associated with the target content.
4. The method of claim 1, wherein the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises:
responding to a preview triggering operation of a target category triggered by the target content identifier, and arranging and displaying the at least one fragment card associated with the target content according to a classification dimension arrangement condition;
wherein the classification dimension arrangement condition matches the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs.
5. The method of claim 1, wherein the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises:
responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying the at least one fragment card associated with the target content according to distribution positions;
wherein the distribution positions are the positions, in the target content, of the content preview fragments pointed to by the fragment cards.
6. The method of any one of claims 1 to 5, wherein the target content comprises video content; the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises at least one of the following:
responding to a preview triggering operation of a first category triggered by the target content identifier, and displaying at least one first fragment card associated with the target content; the content preview fragment pointed to by each first fragment card belongs to the user interest dimension;
responding to a preview triggering operation of a second category triggered by the target content identifier, and displaying at least one second fragment card associated with the target content; the content preview fragment pointed to by each second fragment card belongs to the character information dimension;
responding to a preview triggering operation of a third category triggered by the target content identifier, and displaying at least one third fragment card associated with the target content; and the content preview fragment pointed to by each third fragment card belongs to the picture effect dimension.
7. The method of any one of claims 1 to 5, wherein the target content comprises image-text content; the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises at least one of the following:
responding to a preview triggering operation of a fourth category triggered by the target content identifier, and displaying at least one fourth fragment card associated with the target content; the content preview fragment pointed to by each fourth fragment card belongs to the fragment popularity dimension;
responding to a preview triggering operation of a fifth category triggered by the target content identifier, and displaying at least one fifth fragment card associated with the target content; and the content preview fragment pointed to by each fifth fragment card belongs to the plot type dimension.
8. The method according to claim 1, wherein the method further comprises:
for each fragment card, displaying a card label associated with that fragment card;
displaying, in the card label, fragment information of the content preview fragment pointed to by the fragment card associated with the card label;
wherein the fragment information includes at least one of a distribution position of the content preview fragment in the target content, or classification dimension quantization information of the content preview fragment relative to the fragment classification dimension.
9. The method of claim 8, wherein the method further comprises:
responding to a triggering operation on the card label, and displaying a sorting operation area;
and responding to a sorting control operation triggered in the sorting operation area, and sorting and displaying the at least one fragment card according to a sorting manner specified by the sorting control operation.
10. The method of claim 1, wherein the displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment comprises:
responding to a preview display event for a target fragment card of the at least one fragment card, and displaying, in the target fragment card, a preview of the pointed-to content preview fragment.
11. The method of claim 10, wherein the displaying, in the target fragment card, a preview of the pointed-to content preview fragment in response to a preview display event for a target fragment card of the at least one fragment card comprises at least one of the following:
responding to a target fragment card of the at least one fragment card moving to a preview display position, and displaying, in the target fragment card, a preview of the pointed-to content preview fragment;
responding to a preview display operation triggered on a target fragment card of the at least one fragment card, and displaying, in the target fragment card, a preview of the pointed-to content preview fragment;
responding to a preview display condition being met, sequentially moving the pointed-to content preview fragments to the fragment cards at the preview display position among the at least one fragment card, and sequentially displaying previews of the pointed-to content preview fragments;
and responding to a preview display condition being met, sequentially displaying previews of the pointed-to content preview fragments in target fragment cards of the at least one fragment card.
12. The method of claim 1, wherein the target content comprises video content; the displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment comprises:
for a target fragment card at a preview display position among the at least one fragment card, playing, in the target fragment card, the audio and video of the pointed-to content preview fragment;
and for each muted fragment card not at the preview display position among the at least one fragment card, playing, in the muted fragment card, the pointed-to content preview fragment with the sound muted.
13. The method according to claim 12, wherein the method further comprises:
responding to a selected-play operation triggered on the at least one fragment card, and playing, in the selected fragment card chosen by the selected-play operation, the audio and video of the pointed-to content preview fragment;
and playing, muted, the pointed-to content preview fragments in the fragment cards other than the selected fragment card among the at least one fragment card.
14. The method of claim 1, wherein the target content comprises image-text content; the displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment comprises:
in a target fragment card of the at least one fragment card, sequentially displaying previews of the pointed-to content preview fragments in an enlarged display manner according to a preview interval duration.
15. The method of claim 14, wherein the sequentially displaying previews of the pointed-to content preview fragments in the enlarged display manner according to the preview interval duration comprises:
sequentially displaying previews of the pointed-to content preview fragments in the enlarged display manner according to a first preview interval duration;
and for each content preview fragment being preview-displayed, sequentially displaying at least one preview picture included in the content preview fragment according to a second preview interval duration.
16. The method of claim 1, wherein the target content comprises image-text content; the content preview fragment includes at least one preview picture; the method further comprises:
for each fragment card, displaying a preview picture operation entry for the content preview fragment pointed to by that fragment card;
and responding to a preview picture selection operation triggered through the preview picture operation entry, and displaying the preview picture selected by the preview picture selection operation.
17. The method of claim 1, wherein the displaying at least one fragment card associated with the target content in response to a preview triggering operation of a target category triggered by the target content identifier comprises:
responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying a floating layer area associated with the target content identifier;
and displaying at least one fragment card associated with the target content in the floating layer area.
18. The method according to claim 1, wherein the method further comprises:
generating a fragment acquisition request based on the preview triggering operation;
sending the fragment acquisition request to a server, the fragment acquisition request being used for instructing the server to determine, based on the fragment acquisition request, a content preview fragment belonging to the fragment classification dimension and to return the content preview fragment;
and generating the respective fragment cards according to the content preview fragments returned by the server.
19. The method of any one of claims 1 to 18, wherein the fragment classification dimension comprises a user interest dimension; the method further comprises:
acquiring account attribute information and account behavior information of a user account;
and performing interest matching between the account attribute information and the account behavior information and each content segment in the target content, to obtain at least one content preview fragment belonging to the user interest dimension.
20. The method of any one of claims 1 to 18, wherein the fragment classification dimension comprises a character information dimension; the method further comprises:
carrying out character information recognition on the target content to obtain character information in the target content;
and slicing the target content based on the character information to obtain at least one content preview fragment belonging to the character information dimension.
21. The method of any one of claims 1 to 18, wherein the fragment classification dimension comprises a picture effect dimension; the method further comprises:
slicing the target content to obtain content segments;
respectively carrying out picture feature recognition on each content segment to obtain the segment picture features of each content segment;
and classifying the content segments based on the segment picture features to obtain at least one content preview fragment belonging to the picture effect dimension.
22. A method of content previewing, the method comprising:
displaying a target content identifier pointing to target content;
responding to a preview triggering operation of a target category triggered by the target content identifier, and displaying at least one fragment card associated with the target content; each fragment card points to a content preview fragment of the target content, and the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs matches the target category;
and displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment.
23. A content preview device, the device comprising:
the content identifier display module is used for displaying a target content identifier pointing to target content; the target content identifier is used for responding to preview triggering operations of at least two categories;
the preview trigger response module is used for responding to the preview triggering operation of the target category triggered by the target content identifier and displaying at least one fragment card associated with the target content; the target category belongs to the at least two categories, each fragment card points to a content preview fragment of the target content, and the fragment classification dimension to which the content preview fragment pointed to by each fragment card belongs matches the target category;
and the preview display module is used for displaying, in a target fragment card of the at least one fragment card, a preview of the pointed-to content preview fragment.
24. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 22 when the computer program is executed.
25. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 22.