CN113326396A - Interaction and display control method, device, electronic equipment and computer storage medium - Google Patents

Interaction and display control method, device, electronic equipment and computer storage medium

Info

Publication number
CN113326396A
Authority
CN
China
Prior art keywords
reading
video
content
reading object
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010130752.5A
Other languages
Chinese (zh)
Inventor
谢同生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010130752.5A
Publication of CN113326396A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 - Querying
    • G06F 16/732 - Query formulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 - Querying
    • G06F 16/738 - Presentation of query results

Abstract

Embodiments of the invention provide an interaction and display control method and apparatus, an electronic device, and a computer storage medium. The interaction and display control method includes: acquiring a search term input by a user; determining, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object; and displaying, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object. With the scheme of the embodiments of the invention, the content introduction of a reading object and the video frame content of the video object corresponding to that reading object can be displayed in the interface at the same time, so that the user can understand the content of the reading object more intuitively, which helps arouse the user's interest in reading.

Description

Interaction and display control method, device, electronic equipment and computer storage medium
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to an interaction and display control method and apparatus, an electronic device, and a computer storage medium.
Background
Existing reading search typically displays the keywords entered by the user together with related keywords such as "books", and highlights those keywords. However, because a large amount of advertisement information is mixed into the massive search results, the user cannot tell which results are genuine without clicking on them, so this manner of presentation fails to arouse any interest in reading.
Disclosure of Invention
Embodiments of the present invention provide an interaction and display control method and apparatus, an electronic device, and a computer storage medium to solve or alleviate the above problem.
According to a first aspect of the embodiments of the present invention, there is provided an interaction and display control method, including: acquiring a search term input by a user; determining, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object; and displaying, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
According to a second aspect of the embodiments of the present invention, there is provided an interaction and display control method, including: acquiring a search term input by a user; determining, in response to the search term, at least one reading object and at least one video object that match the search term; and displaying, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
According to a third aspect of the embodiments of the present invention, there is provided an interaction and display control method, including: monitoring a browsing operation on a reading object in a browsing interface; determining, in response to the browsing operation, at least one video object matching the reading object; and displaying video frame content of the at least one video object in the browsing interface.
According to a fourth aspect of the embodiments of the present invention, there is provided an interaction and display control method, including: monitoring a browsing operation on a video object in a browsing interface; determining, in response to the browsing operation, at least one reading object matching the video object; and displaying a content introduction of the at least one reading object in the browsing interface.
According to a fifth aspect of the embodiments of the present invention, there is provided an interaction and display control method, including: acquiring a search term input by a user; determining, in response to the search term, a target reading object matching the search term, as well as an associated reading object and an associated video object that are associated with the target reading object; and, in a search result interface, displaying the target reading object, and displaying the associated reading object and the associated video object based on the target reading object.
According to a sixth aspect of the embodiments of the present invention, there is provided an interaction and display control apparatus, including: an acquisition module configured to acquire a search term input by a user; a determination module configured to determine, in response to the search term, a target reading object matching the search term, as well as an associated reading object and an associated video object that are associated with the target reading object; and a display module configured to display, in a search result interface, the target reading object, and to display the associated reading object and the associated video object based on the target reading object.
According to a seventh aspect of the embodiments of the present invention, there is provided an interaction and display control apparatus, including: an acquisition module configured to acquire a search term input by a user; a determination module configured to determine, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object; and a display module configured to display, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
According to an eighth aspect of the embodiments of the present invention, there is provided an interaction and display control apparatus, including: an acquisition module configured to acquire a search term input by a user; a determination module configured to determine, in response to the search term, at least one reading object and at least one video object that match the search term; and a display module configured to display, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
According to a ninth aspect of the embodiments of the present invention, there is provided an interaction and display control apparatus, including: a monitoring module configured to monitor a browsing operation on a reading object in a browsing interface; a determination module configured to determine, in response to the browsing operation, at least one video object matching the reading object; and a display module configured to display video frame content of the at least one video object in the browsing interface.
According to a tenth aspect of the embodiments of the present invention, there is provided an interaction and display control apparatus, including: a monitoring module configured to monitor a browsing operation on a video object in a browsing interface; a determination module configured to determine, in response to the browsing operation, at least one reading object matching the video object; and a display module configured to display a content introduction of the at least one reading object in the browsing interface.
According to an eleventh aspect of embodiments of the present invention, there is provided an electronic apparatus, including: one or more processors; a computer readable medium configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the interaction and display control method according to any one of the first to fifth aspects.
According to a twelfth aspect of embodiments of the present invention, there is provided a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the interaction and display control method according to any one of the first to fifth aspects.
With the scheme of the embodiments of the invention, the content introduction of a reading object and the video frame content of the video object corresponding to that reading object can be displayed in the interface at the same time, so that the user can understand the content of the reading object more intuitively, which helps arouse the user's interest in reading.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and a person skilled in the art can obtain other drawings based on these drawings.
FIG. 1A is a diagram illustrating an overall network architecture to which one embodiment of the present invention is applicable;
FIG. 1B is a diagram illustrating a search network architecture according to another embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of an interaction and display control method according to another embodiment of the present invention;
FIGS. 3A to 6 are schematic diagrams illustrating interface switching and state change according to another embodiment of the present invention;
FIG. 7A is a diagram illustrating a search network architecture according to another embodiment of the present invention;
FIG. 7B is a schematic flow chart diagram of an interaction and display control method according to another embodiment of the invention;
FIGS. 7C and 7D are schematic diagrams of interaction and interface states of another embodiment of the present invention;
FIG. 8A is a diagram illustrating a search network architecture according to another embodiment of the present invention;
FIG. 8B is a schematic flow chart diagram of an interaction and display control method according to another embodiment of the present invention;
FIG. 8C is a schematic view of an interface state according to another embodiment of the present invention;
FIG. 9A is a diagram illustrating a search network architecture according to another embodiment of the present invention;
FIG. 9B is a schematic flow chart diagram of an interaction and display control method according to another embodiment of the invention;
FIG. 9C is a schematic illustration of interface switching and state change according to another embodiment of the present invention;
FIG. 10A is a schematic flow chart diagram of an interaction and display control method according to another embodiment of the present invention;
FIG. 10B is a schematic diagram of interaction and interface states of another embodiment of the present invention;
FIG. 11A is a schematic block diagram of an interaction and display control device according to another embodiment of the present invention;
FIG. 11B is a schematic block diagram of an interaction and display control device according to another embodiment of the present invention;
FIG. 12 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention;
FIG. 13 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention;
FIG. 14 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention;
FIG. 15 is a schematic block diagram of an electronic device of another embodiment of the present invention;
FIG. 16 is a hardware configuration diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the scope of protection of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Fig. 1A is a schematic diagram of a search network architecture 100 according to an embodiment of the present invention. Search network architecture 100 includes a client system 120 and a search server system 160 connected to each other by a network 110. It should be understood that the search network architecture shown in FIG. 1A is applicable to all examples of embodiments of the present invention. Although the embodiment of the present invention also shows examples of other search network architectures, such as fig. 7A, fig. 8A, fig. 9A, and the like below, all the shown network architectures are solutions for facilitating understanding of the embodiment of the present invention, and should not be understood as limiting application scenarios, interaction manners, and various communication relationships of the embodiment of the present invention.
By way of example and not limitation, FIG. 1B is a schematic diagram of a search network architecture to which another embodiment of the present invention is applicable. As shown in FIG. 1B, the video resource server 132 is connected to the search server system 160 directly, bypassing the network 110, while the reading resource server 142 is connected to the search server system 160 through the network 110. In addition, the video resource server 132 may communicate with the search server system 160 via the network 110, and the reading resource server 142 may be connected to the search server system 160 bypassing the network 110. Moreover, although FIG. 1B illustrates a particular number of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110, embodiments of the invention contemplate any suitable number of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110. By way of example and not limitation, search network 100 may include a plurality of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110.
Any suitable network 110 is contemplated by embodiments of the present invention. By way of example and not limitation, one or more portions of network 110 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 110 may include one or more networks 110.
Network communication links 150 may connect client system 120, search server system 160, video resource server 132, and reading resource server 142 to communication network 110 or to each other. Embodiments of the present invention contemplate any suitable network communication link 150. In one particular implementation, the one or more network communication links 150 include one or more wireline (e.g., Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (e.g., Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (e.g., Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In one particular implementation, the one or more network communication links 150 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular-technology-based network, a satellite-communication-technology-based network, another network communication link 150, or a combination of two or more such network communication links 150. One or more first network communication links 150 may differ in one or more respects from one or more second network communication links 150 in the search network architecture 100.
In one particular implementation, search server system 160 may be a network-addressable computing system that can host online searches. Search server system 160 may generate, store, receive, and transmit search data. Search server system 160 may be accessed directly by other components of search network architecture 100 or via network 110. By way of example and not limitation, client system 120 may access search server system 160 directly or via network 110 using web browser/search application 122 or a local application associated with search server system 160 (e.g., a mobile search application, a messaging application, another suitable application, or any combination thereof). In one particular implementation, search server system 160 may include one or more servers 162. Each server 162 may be a single server or a distributed server spanning multiple computers or multiple data centers. Server 162 may be of various types, such as, but not limited to, a web server, a news server, a mail server, a messaging server, an advertising server, a file server, an application server, an exchange server, a database server, a proxy server, another server adapted to perform the functions or processes described herein, or any combination thereof. In one particular implementation, each server 162 may include hardware, software, or embedded logic components or a combination of two or more such components for performing the appropriate functions implemented or supported by the server 162. In one particular implementation, search server system 160 may include one or more data stores 164. Data stores 164 may be used to store various types of information. In one particular implementation, the information stored in data stores 164 may be organized according to particular data structures. In one particular implementation, each data store 164 may be a relational database, a columnar database, a correlation database, or another suitable database. Although embodiments of the present invention describe or illustrate particular types of databases, embodiments of the present invention contemplate any suitable type of database. One particular implementation may provide an interface that enables client system 120, search server system 160, video resource server 132, or reading resource server 142 to manage, retrieve, modify, add, or delete information stored in data store 164.
In one particular implementation, client system 120 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of performing the appropriate functions implemented or supported by client system 120. By way of example and not limitation, client system 120 may include a computer system such as a desktop computer, notebook or laptop computer, netbook, tablet computer, e-book reader, GPS device, camera, Personal Digital Assistant (PDA), handheld electronic device, cellular telephone, smart phone, other suitable electronic device, or any suitable combination thereof. Embodiments of the present invention contemplate any suitable client system 120. Client system 120 may enable a network user at client system 120 to access network 110. Client system 120 may enable its user to communicate with other users at other client systems 120.
In one particular implementation, search server system 160 may store multiple read objects or multiple video objects in one or more data stores 164. In one particular implementation, search server system 160 may provide users with the ability to take actions on various types of items or objects supported by search server system 160.
In one particular implementation, search server system 160 is capable of linking various entities. By way of example and not limitation, search server system 160 may enable users to interact with each other and receive content from video resource server 132, reading resource server 142, or other entities, or allow users to interact with these entities through an Application Programming Interface (API) or other communication channel.
In a particular implementation, video resource server 132 and reading resource server 142 may each include, for example, one or more types of servers, one or more data stores, one or more interfaces (including but not limited to APIs), one or more web services, one or more content sources, one or more networks, or any other suitable components with which a server may communicate. The video resource server 132 and the reading resource server 142 may be operated by an entity different from the entity operating the search server system 160. However, in one particular implementation, search server system 160, video resource server 132, and reading resource server 142 may operate in conjunction with each other to provide search services to users of search server system 160, video resource server 132, or reading resource server 142. In this sense, search server system 160 may provide a platform or backbone that other systems, such as video resource server 132 and reading resource server 142, may use to provide search services and functionality to users over the Internet.
Fig. 2 is a schematic flowchart of an interaction and display control method according to a second embodiment of the present invention. The method system framework of the embodiment of fig. 2 may be implemented in a so-called browser server (B/S) or client server (C/S) mode, as necessary, and is described in conjunction with the network architectures of fig. 1A-1B. The interaction and display control method of fig. 2 may be performed by a client, for example, by the client system 120 described above. In addition, for ease of illustration, the embodiment of fig. 2 is still described in conjunction with fig. 1A-1B described above. In one particular implementation, client system 120 may include any kind of web browser 122 in a browser server (B/S) and may have one or more attachments, plug-ins, or other extensions, such as toolbars. A user at client system 120 may enter a Uniform Resource Locator (URL) or other address directing web browser 122 to a particular server, such as server 162 or a server associated with video resource server 132, reading resource server 142, and web browser 122 may generate a hypertext transfer protocol (HTTP) request and transmit the HTTP request to the server. The server may accept the HTTP request and transmit one or more hypertext markup language (HTML) files to client system 120 in response to the HTTP request. Client system 120 may render a web interface (e.g., a web page) based on the HTML files from the server for presentation to the user. Embodiments of the invention contemplate any suitable source file. By way of example and not limitation, the web interface may be rendered from an HTML file, an extensible hypertext markup language (XHTML) file, or an extensible markup language (XML) file, according to particular needs. Such an interface may also execute scripts, such as, but not limited to, scripts written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup languages and scripts, such as AJAX (asynchronous JAVASCRIPT and XML), and the like. Here, references to a web interface include, where appropriate, one or more corresponding source files (which a browser can use to render the web interface), and vice versa.
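As a rough illustration of the browser/server exchange just described, the following TypeScript sketch shows a client fetching a search result page over HTTP and rendering the returned HTML; the "/search" endpoint, the query parameter, and the element id are assumptions made for this example, not part of the embodiment.

```typescript
// Minimal sketch of the B/S request/response flow described above.
// The "/search" endpoint and element id are illustrative assumptions.
async function loadSearchResultPage(term: string): Promise<void> {
  // The browser issues an HTTP request to the search server (cf. server 162).
  const response = await fetch(`/search?q=${encodeURIComponent(term)}`, {
    method: "GET",
    headers: { Accept: "text/html" },
  });
  // The server answers with one or more HTML files that the client renders
  // into the search result interface (the web interface).
  const html = await response.text();
  document.getElementById("search-result-interface")!.innerHTML = html;
}
```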
In one particular implementation, in a client-server (C/S) mode, client system 120 may include any kind of search application 122 and may have one or more attachments, plug-ins, or other extensions, such as toolbars. Similar to the browser server mode, the embodiment of the present invention will not be described in detail herein. It should be understood that the client server mode or the browser server mode is only an example, and the embodiment of the present invention is not limited thereto. The interaction and display control method of fig. 2 includes:
210: search terms input by a user are acquired.
It should be understood that acquiring the search term input by the user may be implemented in a browser or application program on the user device, including but not limited to the user device system framework described above. It should also be understood that the method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: a server, a mobile terminal (such as a mobile phone, a PAD, etc.), a PC, and the like. For example, a client such as an application or a browser is installed on the above-described device. For example, the client, such as a browser or application, detects I/O (input/output) events, for example a trigger operation of a search engine component under the HTML framework described above. For example, when the trigger operation is detected, the search engine component sends the input search term, such as a character field, to the search server. The above-mentioned trigger operation may take any form, for example a mouse click, a user touch operation such as a gesture, or intelligent interaction such as voice recognition, face recognition, or gesture recognition.
The search term may include any of a voice search term, a picture search term, or a video frame search term.
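By way of a hedged illustration only, the following TypeScript sketch shows one way a client could monitor a trigger operation and forward the entered search term to the search server, as described above; the element ids, the endpoint path, and the payload shape are assumptions for this example.

```typescript
// Illustrative sketch of step 210: a client-side search component that
// monitors a trigger operation (here a button click) and forwards the
// entered search term to the search server.
function sendToSearchServer(term: string): void {
  // e.g. POST the character field to the search server (step 210 -> 220);
  // the "/api/search" path is an assumption.
  void fetch("/api/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ term }),
  });
}

const input = document.getElementById("search-input") as HTMLInputElement;
const button = document.getElementById("search-button") as HTMLButtonElement;

// I/O event detection: the trigger operation may equally be a touch gesture,
// voice input, etc.; a mouse click is used here as one example.
button.addEventListener("click", () => {
  const term = input.value.trim();
  if (term.length > 0) {
    sendToSearchServer(term);
  }
});
```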
220: in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object are determined.
It should be appreciated that for determining at least one reading object that matches the search term, for example, a plurality of objects that match are received from the server, for example, at least one object is determined from the plurality of objects. For example, the server performs initial ranking based on the index results of the search terms using a first ranking algorithm, and sends the results of the initial ranking to the user device. For example, the user equipment performs subsequent sorting, and the subsequent sorting may be re-sorting or intercepting part of the initial sorting to obtain a sorting result. For example, before the server performs the sorting, the reading object information may be obtained from a reading resource (e.g., a reading resource server, or a reading resource database, etc.), and for example, the video object information may be obtained from a video resource (e.g., a video resource server, or a video resource database). For example, the server cluster applied in the embodiment of the present invention includes at least one of the search server, the application server, the WEB server, and the database server described above. For example, the database server acquires the reading object information and the video object information by a crawler operation or the like. For example, the database server may directly access the video object and the reading object through authentication. For example, the retrieval process in the search described above may be implemented by building various indexes. For example, an index between search terms and database servers may be constructed in an inverted index fashion.
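The two-stage ranking described above (initial ranking on the server, subsequent sorting or interception on the user device) might be sketched as follows; the object shape and score fields are illustrative assumptions rather than part of the embodiment.

```typescript
// Hedged sketch of the two-stage ranking: the server returns an initial
// ranking, and the client either re-ranks it or keeps only a prefix of it.
interface RankedObject {
  id: string;
  kind: "reading" | "video";
  initialScore: number;  // score assigned by the server's first ranking algorithm
  clientScore?: number;  // optional score computed on the user device
}

function subsequentSort(initialRanking: RankedObject[], limit: number): RankedObject[] {
  // Re-rank by a client-side score when available, otherwise keep the
  // server's initial order, then intercept (truncate) the top `limit` results.
  return [...initialRanking]
    .sort((a, b) => (b.clientScore ?? b.initialScore) - (a.clientScore ?? a.initialScore))
    .slice(0, limit);
}
```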
In a specific embodiment, for example, at a server side including the server cluster, the video object information and the reading object information are labeled, and at least one reading object is associated with at least one video object. For example, at the server, at least one reading object is precisely matched with at least one video object. For example, the at least one reading object is associated with the at least one video object by a classification or clustering algorithm. For example, at least one reading object is precisely matched with at least one video object through vector calculation. For example, the at least one reading object is matched with the at least one video object by establishing a vector similarity index or a key field index. For example, at least one of the reading object or the video object is labeled with keywords or key fields. For example, when the server performs search sorting, the reading objects and the video objects are placed in one data set for search sorting; alternatively, the data size of the video objects and the data size of the reading objects may be compared, and sorting may be performed based on the objects having the smaller data size. For example, if the data amount of the reading objects is smaller, the reading objects are sorted, and the sorting result of the reading objects and the association or correspondence between the reading objects and the video objects are sent to the client. Alternatively, if the data amount of the video objects is smaller, the video objects are sorted, and the sorting result of the video objects and the association or correspondence between the reading objects and the video objects are sent to the client. Since only one of the reading objects or the video objects is searched and sorted, and the correspondence between the reading objects and the video objects is returned to the client, the search calculation amount of the server is reduced.
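As one hedged illustration of the vector-based association mentioned above, the following sketch matches each reading object to its most similar video object by cosine similarity; the precomputed embedding vectors and object shapes are assumptions, and a real system would more likely use a vector similarity index than this brute-force loop.

```typescript
// Illustrative sketch: associate reading objects with video objects through
// vector calculation (cosine similarity), producing one possible form of the
// "correspondence" that is returned to the client.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

interface EmbeddedObject { id: string; vector: number[]; }

function matchReadingToVideo(
  readings: EmbeddedObject[],
  videos: EmbeddedObject[],
): Map<string, string> {
  const correspondence = new Map<string, string>();
  if (videos.length === 0) return correspondence;
  for (const r of readings) {
    // Pick the best-matching video object for each reading object.
    let best = videos[0];
    for (const v of videos) {
      if (cosineSimilarity(r.vector, v.vector) > cosineSimilarity(r.vector, best.vector)) {
        best = v;
      }
    }
    correspondence.set(r.id, best.id);
  }
  return correspondence;
}
```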
In another implementation manner of the present invention, when the server performs the labeling, the reading objects and the video objects may be associated in any manner. For example, an index from reading objects to video objects may be constructed; when retrieving based on the search term, the reading object is found first, and then the video object corresponding to the reading object is determined. For example, an index from video objects to reading objects may be constructed; when retrieving based on the search term, the video object is found first, and then the reading object corresponding to the video object is determined. For example, the server sends a first correspondence between the reading object and the video object to the client. For example, the client determines a second correspondence between the at least one reading object and the at least one video object based on the first correspondence. For example, the first correspondence corresponds to the initial sorting result of the server. For example, the second correspondence is a sorting result obtained by re-sorting the initial sorting result of the server.
It should be understood that the second correspondence relationship may be any one of a one-to-one correspondence relationship, a one-to-many relationship, a many-to-one relationship, and a many-to-many relationship, and the embodiment of the present invention does not limit this.
For example, determining, in response to the search term, a reading object and a video object that match the search term includes: in response to the search term, at least one reading object that matches the subject matter content of the search term from among a plurality of candidate reading objects obtained from the second resource is determined, and at least one video object that matches the subject matter content of the search term from among a plurality of candidate video objects obtained from the first resource is determined. For example, the first resource is a video resource server and the second resource is a reading resource server.
230: in the search result interface, the content of the video frame of at least one video object is displayed, and the content introduction of at least one reading object is displayed.
For example, the video frame content of at least one video object is presented in a first presentation area, and the content introduction of at least one reading object is presented in a second presentation area, as shown in fig. 3A. For example, based on the second presentation area, the video frame content of the at least one video object is presented in the first presentation area. For example, based on the first presentation area, the content introduction of the at least one reading object is presented in the second presentation area. For example, the first presentation area and the second presentation area are different areas in the search result interface. For example, the first presentation area and the second presentation area partially overlap in the search result interface. For example, the first presentation area is within the second presentation area. For example, the second presentation area is within the first presentation area.
For example, the video frame content of at least one video object is displayed on a first layer, and the content introduction of at least one reading object is displayed on a second layer. For example, the first layer is below the second layer, and the second layer has a transparency greater than 0. For example, the second layer is below the first layer, and the first layer has a transparency greater than 0. For example, the first layer and the second layer partially overlap in the search result interface. For example, the first layer is an interactive floating layer of the second layer, and at least a partial area of the second layer is provided with an interactive-floating-layer presentation trigger component. For example, the second layer is an interactive floating layer of the first layer, and at least a partial area of the first layer is provided with an interactive-floating-layer presentation trigger component. For example, when either of the first layer and the second layer serves as the bottom layer, the interactive-floating-layer presentation trigger component determines the transparency of the interactive floating layer based on at least a partial region of the bottom layer. For example, when the second layer is the bottom layer and the interaction position of the user passes over the position of a keyword on the second layer, the interactive floating layer is displayed and then disappears, producing a flickering effect that prompts the user to return to the keyword position. For example, at the keyword position of the first layer, the transparency of the interactive floating layer is set to 0 (i.e., opaque), substantially 0, or less than a first preset threshold. For example, at positions of the first layer other than the keyword position, the transparency of the interactive floating layer is set to be greater than a second preset threshold. For example, the second preset threshold is greater than the first preset threshold. For example, when the first layer is the bottom layer and the interaction position of the user passes over a key position of the video frame content of the first layer, the interactive floating layer is displayed and then disappears, producing a flickering effect that prompts the user to return to the key position.
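A minimal sketch of the layered presentation and the flickering floating layer described above might look as follows, here with the video frame content as the bottom layer and the content introduction as the interactive floating layer; the element ids, class names, opacity value, and 400 ms timing are assumptions for illustration.

```typescript
// Hedged sketch: video frame content on a bottom layer, the reading-object
// content introduction on an interactive floating layer above it. When the
// user's interaction position passes over a keyword, the floating layer is
// shown briefly and then hidden again, producing the flickering prompt.
const videoFrameLayer = document.getElementById("video-frame-layer")!;  // bottom layer
const floatingLayer = document.getElementById("reading-intro-layer")!;  // interactive floating layer

floatingLayer.style.opacity = "0";  // fully transparent until triggered

videoFrameLayer.querySelectorAll(".keyword").forEach((keywordEl) => {
  keywordEl.addEventListener("mouseenter", () => {
    floatingLayer.style.opacity = "0.8";  // transparency greater than 0
    window.setTimeout(() => { floatingLayer.style.opacity = "0"; }, 400);
  });
});
```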
According to the scheme of the embodiment of the invention, the content introduction of a reading object and the video frame content of the video object corresponding to that reading object can be displayed in the interface at the same time, so that the user can understand the content of the reading object more intuitively, which helps arouse the user's interest in reading. In other words, with the scheme of the embodiment of the invention, the content introduction of the reading object is more intuitive and can bring the user a good content-understanding experience; and because no advertisement is involved, the user is unlikely to be put off.
In another implementation of the present invention, for example, determining, in response to a search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object includes:
determining, in response to the search term, a plurality of reading objects and a first correspondence between the plurality of reading objects and a plurality of video objects;
and determining a second correspondence between the reading objects and the video objects according to the first correspondence, wherein the second correspondence indicates a correspondence between at least one reading object of the plurality of reading objects and at least one video object of the plurality of video objects in the first correspondence.
For example, the at least one reading object is a high topic relevance object in the plurality of reading objects. For example, the at least one video object is a high topic relevance object of the plurality of video objects. For example, each reading object comprises a plurality of sub-reading objects. For example, each video object includes a plurality of sub video objects. For example, each video object has a third correspondence with each reading object. For example, the method may further comprise: and determining a fourth corresponding relation between the sub reading object and the sub video object according to the third corresponding relation.
For example, the reading object is an electronic book, and the sub-reading object is a chapter of the electronic book. For example, the video object is a video corresponding to an electronic book, and the sub-video object is a video of a chapter of the electronic book. The method also includes presenting a plurality of (at least two) search result refinement levels. For example, in response to a user entering a target level of the plurality of search result refinement levels, search results corresponding to the target level are displayed. For example, different search result refinement levels correspond to different search results. For example, according to the correspondence returned by the server, the client responds to the search result refinement level input by the user and performs the ranking again.
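As a small illustrative sketch (not the embodiment itself), the client-side handling of a selected refinement level could look like this, assuming a book/chapter split and a precomputed score field on each result.

```typescript
// Sketch of the "search result refinement level" interaction: the client keeps
// the results and correspondence returned by the server and re-ranks or filters
// them when the user picks a target level. Level semantics are assumptions.
type RefinementLevel = "book" | "chapter";  // e.g. whole e-book vs. individual chapters

interface SearchResult {
  readingObjectId: string;
  videoObjectId: string;
  level: RefinementLevel;
  score: number;
}

function resultsForLevel(all: SearchResult[], target: RefinementLevel): SearchResult[] {
  // Different refinement levels correspond to different search results;
  // here the client simply filters and re-sorts the server's results.
  return all.filter((r) => r.level === target).sort((a, b) => b.score - a.score);
}
```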
In another implementation manner of the present invention, in the search result interface, displaying the content of the video frame of the at least one video object and displaying the content introduction of the at least one reading object includes: in the search results interface, video frame content of at least one video object is displayed based on the first presentation area, and content introduction of at least one reading object is displayed based on the second presentation area.
For example, the first presentation area is provided with a first interface switching trigger component, and the second presentation area is provided with a second interface switching trigger component. For example, an interface switching trigger component is used to switch from the current search result page to a content presentation page. The content presentation page may be a content presentation page of a reading object, or may be a content presentation page of a video object. For example, the first interface switching trigger component is the same as the second interface switching trigger component. For example, the first interface switching trigger component is different from the second interface switching trigger component. For example, in response to a triggering operation on the first interface switching trigger component, the interface switches to the content presentation page of the video object or to the content presentation page of the reading object. For example, in response to a triggering operation on the second interface switching trigger component, the interface switches to the content presentation page of the video object or to the content presentation page of the reading object. Because the interface switching trigger component is used to switch from the current search result page to the content presentation page, the user can be guided to leave the search result page and enter the content presentation interface of the reading object or the video object.
As shown in fig. 3B, the user clicks on the search results interface on the left side, and the client switches to the content presentation interface for the video object on the right side.
In another implementation of the present invention, displaying video frame content of at least one video object based on a first presentation area includes: determining a first ordering of the at least one video object based on the subject matter; based on the first ordering, displaying the corresponding characteristic video frame of the at least one video object in the first presentation area.
For example, the client receives the initial ranking results from the server. For example, the client determines a first ordering of the at least one video object based on the initial ranking results. For example, the client receives the first ordering returned by the server. For example, the at least one video object is determined based on at least one degree of matching between the subject content and the reading object. For example, the at least one video object is ranked based on the at least one degree of matching. For example, the video object whose subject matches the reading object to the highest degree is ranked first, and the video object with the lowest matching degree is ranked last.
In another implementation manner of the present invention, displaying the content introduction of at least one reading object based on the second presentation area comprises: determining a second ranking of the at least one reading object based on the subject matter; and displaying the corresponding content introduction of the at least one reading object in the second presentation area based on the second ordering.
For example, the client receives the initial ranking results from the server. For example, the client determines a second ranking of the at least one reading object based on the initial ranking results. For example, the client receives the second ranking returned by the server. For example, the at least one reading object is determined based on at least one degree of matching between the subject content and the video object. For example, the at least one reading object is ranked based on the at least one degree of matching. For example, the reading object whose subject matches the video object to the highest degree is ranked first, and the reading object with the lowest matching degree is ranked last.
In another implementation manner of the present invention, the first display area is provided with at least one interface switching triggering component respectively corresponding to at least one video object, wherein the method further comprises: responding to a target interface switching triggering component in at least one interface switching triggering component, and switching from the search result interface to a content display interface of a target video object corresponding to the target interface switching triggering component; and displaying the content in the content display interface.
As shown in fig. 3B, the drawing on the right is a content presentation interface for the video object. Optionally, the reading object presentation interface may also be entered, as shown in the right drawing of fig. 4.
In another implementation manner of the present invention, displaying content in a content display interface includes: determining a playing area of the target video object, so that the target playing area is adapted to the content display interface; and in the playing area, displaying the characteristic video frame of the target video object.
For example, a characteristic video frame may include the hottest element of the video object, for example the most thematic frame shown when a movie or video premieres. For example, the characteristic video frame may be consistent with the video frame content presented in the search result interface, thereby ensuring continuity of the user experience across the interface switch.
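One way to adapt the play area of the target video object to the content display interface, sketched here under the assumption that the play area should preserve the video's aspect ratio, is the following; the dimensions are illustrative only.

```typescript
// Hedged sketch of adapting the play area to the content display interface:
// the largest rectangle with the video's aspect ratio that fits inside it.
interface Size { width: number; height: number; }

function fitPlayArea(video: Size, contentInterface: Size): Size {
  const scale = Math.min(contentInterface.width / video.width,
                         contentInterface.height / video.height);
  return {
    width: Math.floor(video.width * scale),
    height: Math.floor(video.height * scale),
  };
}

// Example: a 1280x720 characteristic frame shown in a 1080x1920 portrait interface.
const playArea = fitPlayArea({ width: 1280, height: 720 }, { width: 1080, height: 1920 });
// playArea is roughly 1080x607, leaving the rest of the interface for the reading area.
```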
In another implementation of the invention, a play trigger component of the target video object is displayed, for example, in a first portion of the play area, and a play trigger component of at least one other video object is displayed in a second portion of the play area. The first portion may be any location in the video play area. For example, for a rectangular play area, the first portion may be any one of the four corners of the play area, or the middle portion of any of the four sides. For example, the candidate locations for the second portion are the same as the candidate locations for the first portion. For example, when the first portion is at a first corner position, the second portion is at a second corner position, the first corner position being different from the second corner position.
In one example, the method further comprises: in the middle portion of the playback area, a playback trigger component of the target video object is displayed. And displaying the playing triggering component of at least any other video object at the edge part in the playing area.
In another implementation of the invention, for example, a reading trigger area of the reading object is presented within the play area of the video object. For example, a play trigger area of the video object is presented within a reading trigger area of the reading object. Due to the arrangement, the video object can be switched with the reading object. As shown in fig. 4, when the user triggers the trigger area in the video interface shown in the left drawing, the interface transitions to the reading interface shown in the right drawing. Similarly, when the user triggers the trigger area of the reading interface on the right, the interface transitions to the video interface on the left. Because the trigger area does not affect the viewing of the video object or the reading of the reading object, the trigger area is presented in the reading interface or the playing interface, so that the reading interface or the playing interface can be matched with the outline of the display screen, and the optimal reading or viewing experience is realized under the condition that the size of the display screen is limited.
In another implementation manner of the present invention, displaying the content in the content display interface further includes: displaying a play trigger component of the target video object in the middle portion of the play area; and displaying a reading trigger component of a target reading object of the at least one reading object at an edge portion of the play area.
In another implementation of the present invention, as shown in fig. 5, after transitioning from the state in the left drawing to the state in the right drawing, reading of the reading object and viewing of the video object can be performed simultaneously. For example, the relative size of the reading area and the playing area can be adjusted by sliding up or down. For example, fine adjustment of the reading area and the playing area can be achieved by sliding up or down. For example, a full-screen display of the reading area or the playing area can be achieved by sliding up or down. It is to be understood that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention.
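The up/down slide adjustment of the reading and play areas could be sketched roughly as follows; the element ids, the pointer-event handling, and the 10% full-screen thresholds are assumptions for illustration.

```typescript
// Illustrative sketch: dragging a divider changes the relative heights of the
// play area and the reading area, and dragging far enough makes one of them
// full screen.
const playArea = document.getElementById("play-area")!;
const readingArea = document.getElementById("reading-area")!;
const divider = document.getElementById("area-divider")!;
let dragging = false;

divider.addEventListener("pointerdown", () => { dragging = true; });
window.addEventListener("pointerup", () => { dragging = false; });
window.addEventListener("pointermove", (ev: PointerEvent) => {
  if (!dragging) return;
  let ratio = ev.clientY / window.innerHeight;  // fraction of the screen given to the play area
  if (ratio < 0.1) ratio = 0;                   // slid far up: reading area becomes full screen
  if (ratio > 0.9) ratio = 1;                   // slid far down: play area becomes full screen
  playArea.style.height = `${ratio * 100}%`;
  readingArea.style.height = `${(1 - ratio) * 100}%`;
});
```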
In another implementation manner of the present invention, displaying the content in the content display interface further includes: in response to a triggering operation on the target reading trigger component, displaying the reading area of the target reading object so that it covers the play area of the target video object.
In another implementation manner of the present invention, displaying the content in the content display interface further includes: in response to a triggering operation on the target reading trigger component, reducing the play area of the target video object and displaying an initial reading area of the target reading object, so that the play area of the target video object and the initial reading area of the target reading object fit together within the content display interface.
In another implementation manner of the present invention, displaying the content in the content display interface further includes: displaying, in the initial reading area, a continue-reading trigger component of the target reading object; and, in response to a triggering operation on the continue-reading trigger component, enlarging the initial reading area to display the reading area of the target reading object, so that the reading area of the target reading object fits within the content display interface.
As shown in fig. 6, in the state of the right-hand drawing the reading area is presented in full-screen display, and the audio playback area is presented on any of the four sides of the reading area (as an example, the lower side is shown in the drawing).
FIG. 7A is a schematic diagram of a search network architecture 100 according to an embodiment of the present invention. Search network architecture 100 includes a client system 120, a search server system 160, a video resource server 132, and a reading resource server 142 connected to each other by a network 110. Although FIG. 7A illustrates a particular arrangement of client system 120, search server system 160, video resource server 132, reading resource server 142, and network 110, embodiments of the invention contemplate any suitable arrangement of client system 120, search server system 160, video resource server 132, reading resource server 142, and network 110. By way of example and not limitation, two or more of client system 120, search server system 160, video resource server 132, and reading resource server 142 may be directly connected to each other, bypassing network 110. As another example, two or more of client system 120, search server system 160, video resource server 132, and reading resource server 142 may be physically or logically co-located, in whole or in part, with one another. Moreover, although FIG. 7A illustrates a particular number of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110, embodiments of the invention contemplate any suitable number of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110. By way of example and not limitation, search network 100 may include a plurality of client systems 120, search server systems 160, video resource servers 132, reading resource servers 142, and networks 110.
Any suitable network 110 is contemplated by embodiments of the present invention. By way of example and not limitation, one or more portions of network 110 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 110 may include one or more networks 110.
Fig. 7B is a schematic flow chart of an interaction and display control method according to another embodiment of the invention. The interaction and display control method of fig. 7B includes:
710: search terms input by a user are acquired.
It should be understood that acquiring the search term input by the user may be implemented in a browser or application program on the user device, including but not limited to the user device system framework described above. It should also be understood that the method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: a server, a mobile terminal (such as a mobile phone, a PAD, etc.), a PC, and the like. For example, a client such as an application or a browser is installed on the above-described device. For example, the client, such as a browser or application, detects I/O (input/output) events, for example a trigger operation of a search engine component under the HTML framework described above. For example, when the trigger operation is detected, the search engine component sends the input search term, such as a character field, to the search server. The above-mentioned trigger operation may take any form, for example a mouse click, a user touch operation such as a gesture, or intelligent interaction such as voice recognition, face recognition, or gesture recognition.
The search term may include any of a voice search term, a picture search term, or a video frame search term.
720: in response to the search term, at least one reading object and at least one video object that match the search term are determined.
It should be appreciated that, for determining at least one reading object matching the search term, a plurality of matching reading objects may, for example, be received from the server, and at least one reading object may be determined from the plurality of reading objects. For example, the server performs an initial ranking based on the index results of the search term using a first ranking algorithm, and sends the results of the initial ranking to the user device. For example, the user device performs subsequent sorting, which may be re-sorting the initial ranking or intercepting part of it to obtain a sorting result. For example, before the server performs the sorting, the reading object information may be obtained from a reading resource (e.g., a reading resource server or a reading resource database), and the video object information may be obtained from a video resource (e.g., a video resource server or a video resource database). Similarly, for determining at least one video object matching the search term, a plurality of matching video objects may, for example, be received from the server, and at least one video object may be determined from the plurality of video objects.
For example, the server cluster applied in the embodiment of the present invention includes at least one of the search server, the application server, the WEB server, and the database server described above. For example, the database server acquires the reading object information and the video object information by a crawler operation or the like. For example, the database server may directly access the video objects and the reading objects through authentication. For example, the retrieval process in the search described above may be implemented by building various indexes. For example, an index from search terms to the objects stored in the database may be constructed in an inverted-index fashion.
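An inverted index from terms to stored objects, as mentioned above, might be built in roughly the following way; the whitespace tokenisation and the {"id", "text"} record layout are simplifying assumptions:

from collections import defaultdict
from typing import Dict, List, Set


def build_inverted_index(objects: List[Dict]) -> Dict[str, Set[str]]:
    """Map each term to the set of object ids whose text contains it.

    `objects` is assumed to be a list of {"id": ..., "text": ...} records
    gathered from the reading and video resource databases.
    """
    index: Dict[str, Set[str]] = defaultdict(set)
    for obj in objects:
        for term in obj["text"].lower().split():
            index[term].add(obj["id"])
    return index


def lookup(index: Dict[str, Set[str]], query: str) -> Set[str]:
    """Return ids of objects whose text contains every term of the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result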
In a specific embodiment, for example, at a server side including the server cluster, the video object information and the reading object information are labeled, and at least one reading object is associated with at least one video object. For example, at the server, at least one reading object is precisely matched with at least one video object. For example, the at least one reading object is associated with the at least one video object by a classification or clustering algorithm. For example, at least one reading object is precisely matched with at least one video object through vector calculation. For example, the at least one reading object is matched with the at least one video object by establishing a vector similarity index or a key field index. For example, at least one of the reading object or the video object is labeled with keywords or key fields. For example, when the server performs search sorting, the reading objects and the video objects are placed in one data set for search sorting; alternatively, the data size of the video objects and the data size of the reading objects may be determined, and the sorting may be performed on the objects having the smaller data size. For example, if the data amount of the reading objects is smaller, the reading objects are sorted, and the sorting result of the reading objects together with the association or correspondence between the reading objects and the video objects is sent to the client. Alternatively, if the data amount of the video objects is smaller, the video objects are sorted, and the sorting result of the video objects together with the association or correspondence between the reading objects and the video objects is sent to the client. Since only one of the reading objects or the video objects is searched and sorted, and the correspondence between the reading objects and the video objects is returned to the client, the search computation load of the server is reduced.
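One way the server-side association through vector calculation could look is sketched below; the embedding vectors are assumed to exist already (for example, derived from the keyword or key field labels), and cosine similarity with a fixed threshold is an illustrative choice rather than the required matching rule:

import math
from typing import Dict, List, Tuple


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def associate(reading_vecs: Dict[str, List[float]],
              video_vecs: Dict[str, List[float]],
              threshold: float = 0.8) -> List[Tuple[str, str, float]]:
    """Associate each reading object with its most similar video object.

    Returns (reading_id, video_id, similarity) triples; pairs below the
    threshold are dropped, approximating the "precise match" above.
    """
    pairs = []
    for r_id, r_vec in reading_vecs.items():
        best_id, best_sim = None, 0.0
        for v_id, v_vec in video_vecs.items():
            sim = cosine(r_vec, v_vec)
            if sim > best_sim:
                best_id, best_sim = v_id, sim
        if best_id is not None and best_sim >= threshold:
            pairs.append((r_id, best_id, best_sim))
    return pairs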
In another implementation manner of the present invention, when the server performs the labeling, the reading object and the video object may be associated in any manner. For example, an index from a reading object to a video object may be constructed; when retrieving based on the search term, the reading object is found first, and then the video object corresponding to the reading object is determined. For example, an index from a video object to a reading object may be constructed; when retrieving based on the search term, the video object is found first, and then the reading object corresponding to the video object is determined. For example, the server sends a first corresponding relationship between the reading object and the video object to the client. For example, the client determines a second corresponding relationship between the at least one reading object and the at least one video object based on the first corresponding relationship. For example, the first corresponding relationship corresponds to an initial sorting result of the server. For example, the second corresponding relationship is a sorting result obtained by re-sorting the initial sorting result of the server.
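A compact sketch of how the client might derive the second corresponding relationship from the first one delivered by the server is given below; the dictionary-based mapping and the local relevance score are assumptions for illustration:

from typing import Dict, List


def second_correspondence(initial_ranking: List[str],
                          first_map: Dict[str, str],
                          client_scores: Dict[str, float]) -> Dict[str, str]:
    """Re-sort the server's initial reading-object ranking by a client-side
    score and carry the reading-to-video mapping over to the new order.

    `first_map` is the first corresponding relationship (reading id ->
    video id) sent by the server; `client_scores` is an illustrative local
    relevance score per reading id.
    """
    reordered = sorted(initial_ranking,
                       key=lambda rid: client_scores.get(rid, 0.0),
                       reverse=True)
    # The second corresponding relationship follows the re-sorted order.
    return {rid: first_map[rid] for rid in reordered if rid in first_map}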
It should be understood that the second correspondence relationship may be any one of a one-to-one correspondence relationship, a one-to-many relationship, a many-to-one relationship, and a many-to-many relationship, and the embodiment of the present invention does not limit this.
730: in the search result interface, the content of the video frame of at least one video object is displayed, and the content introduction of at least one reading object is displayed.
For example, the client receives the initial ranking result from the server and determines a first ordering of the at least one video object based on the initial ranking result. Alternatively, the client receives the first ordering returned by the server. For example, the at least one video object is determined based on at least one degree of matching between its subject content and the reading object, and the at least one video object is ranked based on the at least one degree of matching. For example, the video object whose subject content matches the reading object most closely is ranked first, and the video object with the lowest matching degree is ranked last.

For example, the client receives the initial ranking result from the server and determines a second ordering of the at least one reading object based on the initial ranking result. Alternatively, the client receives the second ordering returned by the server. For example, the at least one reading object is determined based on at least one degree of matching between its subject content and the video object, and the at least one reading object is ranked based on the at least one degree of matching. For example, the reading object whose subject content matches the video object most closely is ranked first, and the reading object with the lowest matching degree is ranked last.
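Both orderings described above can be produced by the same small helper, sketched here under the assumption that each object record carries a precomputed "match_degree" field (the field name is illustrative):

from typing import Dict, List


def order_by_match(objects: List[Dict]) -> List[Dict]:
    """Rank objects by their degree of match with the subject content;
    the most closely matching object comes first, the least matching last."""
    return sorted(objects, key=lambda o: o.get("match_degree", 0.0), reverse=True)


# Usage: the same helper yields the first ordering of the video objects and
# the second ordering of the reading objects.
# first_ordering = order_by_match(video_objects)
# second_ordering = order_by_match(reading_objects)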
Viewed from the server side, the server acquires the retrieval request sent by the front end, where the retrieval request includes video feature information generated according to the video. In response to the retrieval request, the server determines, in the video feature information database, a plurality of pieces of video feature information matching the video feature information. For example, the video feature information in the video feature information database has been labeled or has had feature vectors extracted. For example, a plurality of videos corresponding to the plurality of pieces of video feature information are returned.
In one implementation manner of the present invention, determining, in response to the retrieval request, a plurality of pieces of video feature information in the video feature information database that match the video feature information includes: determining, in response to the retrieval request, at least one matching degree between each of at least one piece of video feature information in the video feature information database and the video feature information; and ranking the at least one piece of video feature information based on the at least one matching degree to obtain the plurality of pieces of video feature information.
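A sketch of this server-side step is shown below; it assumes the video feature information is stored as plain float vectors and uses (negative) Euclidean distance as a stand-in for the matching degree, which is only one of many possible choices:

import math
from typing import Dict, List, Tuple


def match_features(query_vec: List[float],
                   feature_db: Dict[str, List[float]],
                   top_k: int = 5) -> List[Tuple[str, float]]:
    """Return the top_k feature entries closest to the query vector."""
    def distance(a: List[float], b: List[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Higher score = smaller distance = higher matching degree.
    scored = [(vid, -distance(query_vec, vec)) for vid, vec in feature_db.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]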
For example, the embodiment of the present invention does not limit the display manner of the at least one reading object and the at least one video object. For example, a first arrangement may be used for the at least one reading object and a second arrangement may be used for the at least one video object. For example, the at least one reading object may be displayed on one side and the at least one video object may be displayed on the other side. For example, the at least one reading object and the at least one video object may be alternately arranged. For example, the results may be ranked in descending order of relevance (or degree of match) in the search results interface. For example, the ranking may be based on the at least one reading object, with a video object related to each reading object displayed in its vicinity; it should be understood that if a particular reading object has no matching video object, another related reading object may be recommended instead. Similarly, the ranking may be based on the at least one video object, with a reading object that the video object matches or is associated with displayed around or near it.
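A possible sketch of the interleaved arrangement follows; the fallback of recommending another related reading object when no video object matches is simplified here to "take the next reading object in the ranking", which is an assumption, not a requirement:

from typing import Dict, List, Optional


def interleave(reading_objects: List[Dict],
               video_for: Dict[str, Optional[str]]) -> List[Dict]:
    """Build the display list: each reading object is followed by its
    matching video object; if there is no match, another related reading
    object (here simply the next one in the ranking) is recommended."""
    display: List[Dict] = []
    for i, reading in enumerate(reading_objects):
        display.append({"kind": "reading", "id": reading["id"]})
        video_id = video_for.get(reading["id"])
        if video_id is not None:
            display.append({"kind": "video", "id": video_id})
        elif i + 1 < len(reading_objects):
            display.append({"kind": "reading_recommendation",
                            "id": reading_objects[i + 1]["id"]})
    return display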
In another implementation of the present invention, a first reading object, a first video object matching the first reading object, a second reading object matching the first video object, a second video object matching the second reading object, and so on are displayed in sequence.
Fig. 7C and 7D are schematic diagrams of interaction and interface states according to another embodiment of the present invention. As shown in Fig. 7C, the video playback trigger component is shown on the left side, and part of the content of the reading object is shown on the right side. In Fig. 7D, the video playback area is shown on the left side, and the reading trigger component of the reading object is shown on the right side. It is to be understood that the above-described arrangements or displays are exemplary only and do not limit the embodiments of the present invention.
It should also be understood that the embodiments of fig. 7A and 7B are equally applicable to the various displays depicted in fig. 1A-6, with the same or similar descriptions referring to the same or similar schemes or means.
Fig. 8A is a schematic diagram of a search network architecture 100 according to an embodiment of the present invention. Search network architecture 100 includes a client system 120, a search server system 160, a video asset server 132, and a reading asset server 142 connected to each other by a network 110. Although Fig. 8A illustrates a particular arrangement of client system 120, search server system 160, video asset server 132, reading asset server 142, and network 110, embodiments of the invention contemplate any suitable arrangement of client system 120, search server system 160, video asset server 132, reading asset server 142, and network 110. By way of example and not limitation, two or more of client system 120, search server system 160, video asset server 132, and reading asset server 142 may be directly connected to each other, bypassing network 110. As another example, two or more of client system 120, search server system 160, video asset server 132, and reading asset server 142 may be physically or logically co-located, in whole or in part, with one another. Moreover, although Fig. 8A illustrates a particular number of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110, embodiments of the invention contemplate any suitable number of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110. By way of example and not limitation, search network architecture 100 may include a plurality of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110.
Any suitable network 110 is contemplated by embodiments of the present invention. By way of example and not limitation, one or more portions of network 110 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 110 may include one or more networks 110.
Fig. 8B is a schematic flowchart of an interaction and display control method according to another embodiment of the invention. The interaction and display control method of fig. 8B includes:
810: and monitoring the browsing operation of the reading object in the browsing interface.
It should be understood that monitoring the browsing operation on the reading object in the browsing interface can be implemented by, for example, a framework of a browser or an application program in the user equipment; specific implementations include, but are not limited to, a trigger event monitoring module or a trigger event monitor of the browser or the application program. For example, an I/O (input/output) event is detected with a client such as a browser or an application, for example, a trigger operation such as an application open operation or a page jump event in a browser under the above-described HTML framework. The trigger operation may take any form, for example, a mouse click, a user touch operation such as a gesture, or intelligent interaction such as voice recognition, face recognition, or gesture recognition.
820: and determining at least one video object matched with the reading object in response to the browsing operation.
For example, when the browser, the e-reader, or the reading application detects the trigger operation, it sends the input search term, such as a reading keyword or a subject key field, to the search server. For example, at least one video object matching the reading object is returned from the server. For example, a particular story keyword or a particular named entity in a reading object is tagged as matching a particular video. For example, the particular video is associated with a placed advertisement. For example, when the user clicks on or triggers the playing of the video object, the matching advertisement is recommended to the user.
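The keyword and named-entity matching while browsing a reading object could be sketched as follows; the tag table, its entries, and the substring matching are illustrative assumptions (in practice the labels would come from the server-side annotation described earlier):

from typing import Dict, List

# Hypothetical tag table: plot keyword or named entity -> video id.
KEYWORD_TO_VIDEO: Dict[str, str] = {
    "red cliff": "video_001",
    "zhuge liang": "video_002",
}


def match_videos_for_reading(reading_text: str) -> List[str]:
    """Return ids of videos whose tagged keywords appear in the reading text."""
    text = reading_text.lower()
    return [vid for kw, vid in KEYWORD_TO_VIDEO.items() if kw in text]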
830: and displaying the video frame content of at least one video object in the browsing interface.
As shown in Fig. 8C, for example, when the reading object is opened in a client such as a reading application or a browser, a search operation for the video object is triggered. For example, a play trigger component of the associated video object is displayed in the display interface of the reading application. For example, the play trigger component is displayed in a position that does not affect reading. For example, the play trigger component may be displayed in the form of a floating button at any location.
Fig. 9A is a schematic diagram of a search network architecture 100 according to an embodiment of the present invention. Search network architecture 100 includes a client system 120, a search server system 160, a video asset server 132, and a reading asset server 142 connected to each other by a network 110. Although Fig. 9A illustrates a particular arrangement of client system 120, search server system 160, video asset server 132, reading asset server 142, and network 110, embodiments of the invention contemplate any suitable arrangement of client system 120, search server system 160, video asset server 132, reading asset server 142, and network 110. By way of example and not limitation, two or more of client system 120, search server system 160, video asset server 132, and reading asset server 142 may be directly connected to each other, bypassing network 110. As another example, two or more of client system 120, search server system 160, video asset server 132, and reading asset server 142 may be physically or logically co-located, in whole or in part, with one another. Moreover, although Fig. 9A illustrates a particular number of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110, embodiments of the invention contemplate any suitable number of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110. By way of example and not limitation, search network architecture 100 may include a plurality of client systems 120, search server systems 160, video asset servers 132, reading asset servers 142, and networks 110.
Any suitable network 110 is contemplated by embodiments of the present invention. By way of example and not limitation, one or more portions of network 110 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 110 may include one or more networks 110.
Fig. 9B is a schematic flow chart of an interaction and display control method according to another embodiment of the invention. The interaction and display control method of Fig. 9B includes:
910: search terms input by a user are acquired.
It is to be understood that obtaining the user-entered search term may be implemented, for example, through a framework of a browser or application in the user device, including but not limited to a trigger event monitoring module or trigger event monitor of the browser or application. For example, an I/O (input/output) event is detected with a client such as a browser or an application, for example, a trigger operation such as an application open operation or a page jump event in a browser under the above-described HTML framework. The above-mentioned trigger operation may be in any form, for example, by mouse click, user touch operation such as gesture, intelligent interaction such as voice recognition or face recognition or gesture recognition, and the like.
920: in response to the search term, a target reading object matching the search term and associated reading objects and associated video objects associated with the target reading object are determined.
For example, when the browser or the search engine application detects the trigger operation, it sends the input search term, such as a reading keyword or a topic key field, to the search server. For example, a target reading object is returned from the server, together with an associated video object and an associated reading object associated with the target reading object. For example, the associated reading object and the target reading object contain related specific plot keywords. For example, the associated video object is a particular video labeled as matching a particular named entity. For example, the particular video is associated with a placed advertisement. For example, when the user clicks on or triggers the playing of the video object, the matching advertisement is recommended to the user.
930: in the search result interface, a target reading object is displayed, and based on the target reading object, an associated reading object and an associated video object are displayed.
As an example, as shown in Fig. 9C, when a presentation area of an associated reading object or an associated video object is clicked, a play operation of the associated video object is triggered. For example, a reading trigger component of the associated reading object is displayed in the associated video playing interface. For example, the reading trigger component is displayed in a position that does not affect viewing. For example, the reading trigger component may be displayed in the form of a floating button at any position.
It should also be understood that the associated video object may be directly associated with the associated reading object, e.g., the associated video object matches or corresponds to the associated reading object. For example, the associated video object may be indirectly associated with the associated reading object through the target reading object. For example, the method may further include determining a target video object associated with the target reading object. For example, the method may further include displaying the target video object associated with the target reading object.
It will also be appreciated that, for determining, in response to the search term, a target reading object matching the search term and the associated reading object and associated video object associated with the target reading object, all of them may be determined simultaneously; alternatively, the target reading object matching the search term may be determined first, and the associated reading object and the associated video object associated with the target reading object determined afterwards. In addition, the associated reading object and the associated video object associated with the target reading object may be determined simultaneously, or one of them may be determined first and the other then determined based on it. In other words, the determination of the associated reading object and the associated video object may be performed in parallel.
It will also be appreciated that determining, in response to the search term, a target reading object that matches the search term may include: determining, in response to the search term, the target reading object as part of an initial ranking, wherein the initial search result further comprises a target video object associated with the target reading object or a target video object corresponding to the target reading object. Determining the associated reading object and the associated video object associated with the target reading object may include: performing a secondary sorting based on the initial search result to obtain the associated reading object and the associated video object associated with the target reading object.
It is also understood that, for example, determining the target reading object from the initial ranking includes: acquiring an initial ranking result from the server, and determining the target reading object from the initial ranking result. For example, determining the associated reading object and the associated video object associated with the target reading object includes: acquiring the associated reading object and the associated video object associated with the target reading object from the server. Alternatively, it includes: acquiring an initial ranking result from the server; performing, in the initial ranking result, a secondary sorting based on the target reading object; and determining the associated reading object and the associated video object from the secondary sorting result.
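A sketch of the secondary sorting step is given below; relatedness to the target reading object is approximated by a precomputed pairwise score, and the "kind" and "id" field names are assumptions made for illustration:

from typing import Dict, List, Tuple


def secondary_sort(initial_results: List[Dict],
                   target_id: str,
                   relatedness: Dict[Tuple[str, str], float]):
    """Re-sort the initial search results by relatedness to the target
    reading object, then split them into associated reading objects and
    associated video objects."""
    others = [obj for obj in initial_results if obj["id"] != target_id]
    others.sort(key=lambda o: relatedness.get((target_id, o["id"]), 0.0),
                reverse=True)
    associated_reading = [o for o in others if o["kind"] == "reading"]
    associated_video = [o for o in others if o["kind"] == "video"]
    return associated_reading, associated_video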
It should also be appreciated that, for displaying the associated reading object and the associated video object based on the target reading object, a layer of the associated reading object and a layer of the associated video object may be displayed based on a layer of the target reading object. Alternatively, a region of the associated reading object and a region of the associated video object may be displayed based on a region of the target reading object. For example, the target reading object is displayed in the search result presentation area. For example, the region of the associated reading object and the region of the associated video object are displayed in the recommendation presentation area.
It should also be appreciated that, in the search results interface, displaying the target reading object may include displaying a plurality of target reading objects. For example, the plurality of target reading objects includes a first target reading object and a second target reading object, the first target reading object being ordered before the second target reading object, and displaying the associated reading object and the associated video object based on the target reading objects includes: displaying, in the search result interface, the associated reading object and the associated video object based on the first target reading object. In other words, the associated reading object and the associated video object are displayed based on the front-ranked target reading object among the plurality of target reading objects. Since the user's attention to reading objects or video objects is limited, only the associated reading object and associated video object of some of the target reading objects, or of the top-ranked target reading object, are displayed, so that the user's attention is focused on the current recommended object; associated reading objects and associated video objects are not displayed for the other target reading objects, which helps present more search results, such as keyword-matched search results, in a relatively limited display space.
Fig. 10A is a schematic flow chart of an interaction and display control method according to another embodiment of the invention. The interaction and display control method of Fig. 10A includes:
1010: monitoring the browsing operation of a video object in a browsing interface.

It should be understood that monitoring the browsing operation on the video object in the browsing interface can be implemented by, for example, a framework of a browser or an application program in the user equipment; specific implementations include, but are not limited to, a trigger event monitoring module or a trigger event monitor of the browser or the application program. For example, an I/O (input/output) event is detected with a client such as a browser or an application, for example, a trigger operation such as an application open operation or a page jump event in a browser under the above-described HTML framework. The trigger operation may take any form, for example, a mouse click, a user touch operation such as a gesture, or intelligent interaction such as voice recognition, face recognition, or gesture recognition.
1020: and determining at least one reading object matched with the video object in response to the browsing operation.
For example, when the browser, the video player, or the video application detects the trigger operation, it sends the input search term, such as a video frame or a topic keyword, to the search server. For example, at least one reading object matching the video object is returned from the server. For example, a particular episode key picture or key frame in a video object is labeled as matching a particular text. For example, the particular text is associated with a placed advertisement. For example, when the user clicks on or triggers the reading object, the matched text advertisement is recommended to the user.
1030: and displaying the content introduction of at least one reading object in the browsing interface.
As shown in Fig. 10B, when a client such as a video player, a video-class application, or a browser is opened, a search operation for the reading object is triggered. For example, a reading trigger component of the relevant reading object is displayed in the display interface of the video application. For example, the reading trigger component is displayed in a position that does not affect playback. For example, the reading trigger component may be displayed in the form of a floating button at any position.
It should be understood that the arrangements of fig. 8B, 9B and 10A are equally applicable to the various displays depicted in fig. 1A-7D, with the same or similar descriptions referring to the same or similar arrangements or means.
Fig. 11A is a schematic block diagram of an interaction and display control device according to another embodiment of the present invention. The interaction and display control apparatus of fig. 11A, comprising:
an obtaining module 1101 for obtaining a search term input by a user;
a determining module 1102, responsive to the search term, for determining a target reading object matching the search term and associated reading objects and associated video objects associated with the target reading object;
the display module 1103 displays the target reading object in the search result interface, and displays the associated reading object and the associated video object based on the target reading object.
Fig. 11B is a schematic block diagram of an interaction and display control device according to another embodiment of the present invention. The interaction and display control apparatus of fig. 11B includes:
the obtaining module 1110 obtains a search term input by a user.
The determining module 1120 determines at least one reading object matching the search term and at least one video object corresponding to the at least one reading object in response to the search term.
The display module 1130 displays the content of the video frame of the at least one video object and displays the content description of the at least one reading object in the search result interface.
According to the scheme of the embodiment of the invention, the content introduction of the reading object and the video frame content of the video object corresponding to the reading object can be simultaneously displayed in the interface, so that the content of the reading object can be more intuitively understood by a user, and the reading interest of the user is aroused.
In one implementation manner of the present invention, the display module is specifically configured to: in the search results interface, video frame content of at least one video object is displayed based on the first presentation area, and content introduction of at least one reading object is displayed based on the second presentation area.
In one implementation manner of the present invention, the display module is specifically configured to: determining a first ordering of the at least one video object based on the subject matter; based on the first ordering, displaying the corresponding characteristic video frame of the at least one video object in the first presentation area.
In one implementation manner of the present invention, the display module is specifically configured to: determining a second ranking of the at least one reading object based on the subject matter; and displaying the corresponding content introduction of the at least one reading object in the second presentation area based on the second ordering.
In one implementation manner of the present invention, the first display area is provided with at least one interface switching trigger component respectively corresponding to the at least one video object, and the apparatus further includes an interface switching module configured to: in response to a target interface switching trigger component in the at least one interface switching trigger component, switch from the search result interface to a content display interface of the target video object corresponding to the target interface switching trigger component. The display module is specifically configured to display the content in the content display interface.
In one implementation manner of the present invention, the display module is specifically configured to: determining a playing area of the target video object, so that the target playing area is adapted to the content display interface; and in the playing area, displaying the characteristic video frame of the target video object.
In one implementation of the invention, the display module is further configured to: displaying a play trigger component of the target video object in the middle part of the play area; and displaying the playing triggering component of at least any other video object at the edge part in the playing area.
In one implementation of the invention, the display module is further configured to: displaying a play trigger component of the target video object in the middle part of the play area; and displaying a reading triggering component of a target reading object in the at least one reading object at the edge part in the playing area.
In one implementation of the invention, the display module is further configured to: and responding to the triggering operation of the target reading triggering component, and displaying the reading area of the target reading object by covering the playing area of the target video object.
In one implementation of the invention, the display module is further configured to: and responding to the triggering operation of the target reading triggering component, and displaying the initial reading area of the target reading object by reducing the playing area of the target video object, so that the playing area of the target video object and the initial reading area of the target reading object are matched in the content display interface.
In one implementation of the invention, the display module is further configured to: in the initial reading area, displaying a continuous reading trigger component of the target reading object; and responding to the trigger operation of the continuous reading trigger component, and displaying the reading area of the target reading object by enlarging the initial reading area so that the reading area of the target reading object is matched in the content display interface.
In an implementation manner of the present invention, the determining module is specifically configured to: in response to the search term, at least one reading object that matches the subject matter content of the search term from among a plurality of candidate reading objects obtained from the second resource is determined, and at least one video object that matches the subject matter content of the search term from among a plurality of candidate video objects obtained from the first resource is determined.
In one implementation of the invention, the search terms include any of a voice search term, a picture search term, a video frame search term.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 12 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention. The interaction and display control apparatus of fig. 12 includes:
an obtaining module 1210 for obtaining a search term input by a user;
a determining module 1220 for determining at least one reading object and at least one video object matching the search term in response to the search term;
the display module 1230 displays the content of the video frame of the at least one video object and displays the content introduction of the at least one reading object in the search result interface.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 13 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention. The interaction and display control apparatus of fig. 13 includes:
a monitoring module 1310 for monitoring browsing operation of the reading object in the browsing interface;
a determining module 1320, configured to determine at least one video object matching the reading object in response to the browsing operation;
the display module 1330 displays the video frame content of the at least one video object in the browsing interface.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 14 is a schematic block diagram of an interaction and display control apparatus according to another embodiment of the present invention. The interaction and display control apparatus of fig. 14 includes:
the monitoring module 1410 is used for monitoring browsing operation of a video object in a browsing interface;
a determining module 1420, responsive to the browsing operation, for determining at least one reading object matching the video object;
the display module 1430 displays the content introduction of the at least one reading object in the browsing interface.
The method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: a server, a mobile terminal (such as a mobile phone, a PAD, etc.), a PC, etc.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 15 is a schematic structural diagram of an electronic device according to another embodiment of the invention. The electronic device may include:
one or more processors 501;
a computer-readable medium 502, which may be configured to store one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the interaction and display control method as described in the above embodiment.
Fig. 16 is a schematic diagram of a hardware configuration of an electronic device according to another embodiment of the present invention. As shown in Fig. 16, the hardware structure of the electronic device may include: a processor 1601, a communication interface 1602, a computer-readable medium 1603, and a communication bus 1604;
wherein the processor 1601, the communication interface 1602, and the computer-readable medium 1603 communicate with each other via the communication bus 1604;
alternatively, the communication interface 1602 may be an interface of a communication module;
the processor 1601 may be specifically configured to: acquiring a search term input by a user; determining, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object; displaying the content of the video frame of the at least one video object and displaying the content introduction of the at least one reading object in the search result interface, or,
acquiring a search term input by a user; determining, in response to the search term, at least one reading object and at least one video object that match the search term; displaying the content of the video frame of the at least one video object and displaying the content introduction of the at least one reading object in the search result interface, or,
monitoring browsing operation on a reading object in a browsing interface; responding to the browsing operation, and determining at least one video object matched with the reading object; displaying, in the browsing interface, video frame content of the at least one video object, or,
monitoring the browsing operation of a video object in a browsing interface; responding to the browsing operation, and determining at least one reading object matched with the video object; and displaying the content introduction of the at least one reading object in the browsing interface.
The processor 1601 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 1603 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code configured to perform the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a Central Processing Unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations for the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary, and in particular implementations, the steps may be fewer, more, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The names of these modules do not in some cases constitute a limitation of the module itself.
As another aspect, the present application also provides a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in the above embodiments.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a search term input by a user; determining, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object; displaying the content of the video frame of the at least one video object and displaying the content introduction of the at least one reading object in the search result interface, or,
acquiring a search term input by a user; determining, in response to the search term, at least one reading object and at least one video object that match the search term; displaying the content of the video frame of the at least one video object and displaying the content introduction of the at least one reading object in the search result interface, or,
monitoring browsing operation on a reading object in a browsing interface; responding to the browsing operation, and determining at least one video object matched with the reading object; displaying, in the browsing interface, video frame content of the at least one video object, or,
monitoring the browsing operation of a video object in a browsing interface; responding to the browsing operation, and determining at least one reading object matched with the video object; and displaying the content introduction of the at least one reading object in the browsing interface.
The expressions "first", "second", "said first" or "said second" as used in various implementations of embodiments of the invention may modify various elements irrespective of order and/or importance, but these expressions do not limit the respective elements. The above expressions are used only for the purpose of distinguishing one element from another. For example, a first user equipment and a second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of embodiments of the present invention.
When an element (e.g., a first element) is referred to as being "operably or communicatively coupled" or "connected" (operably or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the element is directly connected to the other element or the element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it is understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (e.g., a third element) is interposed therebetween.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (24)

1. An interaction and display control method comprising:
acquiring a search term input by a user;
determining, in response to the search term, at least one reading object matching the search term and at least one video object corresponding to the at least one reading object;
in the search result interface, the content of the video frame of the at least one video object is displayed, and the content introduction of the at least one reading object is displayed.
2. The method of claim 1, wherein said displaying, in a search results interface, video frame content of the at least one video object and displaying a content introduction of the at least one reading object comprises:
in the search result interface, the content of the video frame of the at least one video object is displayed based on the first presentation area, and the content introduction of the at least one reading object is displayed based on the second presentation area.
3. The method of claim 2, wherein said displaying video frame content of the at least one video object based on the first presentation region comprises:
determining a first ordering of the at least one video object based on subject matter content;
displaying, in the first presentation area, a characteristic video frame corresponding to the at least one video object based on the first ordering.
4. The method of claim 2, wherein said displaying the content presentation of the at least one reading object based on the second presentation area comprises:
determining a second ranking of the at least one reading object based on subject matter;
and displaying the corresponding content introduction of the at least one reading object in the second presentation area based on the second ordering.
5. The method of claim 2, wherein the first display area is provided with at least one interface switching trigger component corresponding to the at least one video object, respectively, wherein the method further comprises:
responding to a target interface switching trigger component in the at least one interface switching trigger component, and switching from the search result interface to a content display interface of a target video object corresponding to the target interface switching trigger component;
and displaying the content in the content display interface.
6. The method of claim 5, wherein the displaying content in the content display interface comprises:
determining a playing area of the target video object, so that the target playing area is adapted in the content display interface;
and displaying the characteristic video frame of the target video object in the playing area.
7. The method of claim 6, wherein the displaying content in the content display interface further comprises:
displaying a play trigger component of the target video object in a middle part of the play area;
and displaying a playing triggering component of at least any other video object at the edge part in the playing area.
8. The method of claim 6, wherein the displaying content in the content display interface further comprises:
displaying a play trigger component of the target video object in a middle part of the play area;
and displaying a reading trigger component of a target reading object in the at least one reading object at the edge part in the playing area.
9. The method of claim 8, wherein the displaying content in the content display interface further comprises:
and responding to the triggering operation of the target reading triggering component, and displaying the reading area of the target reading object by covering the playing area of the target video object.
10. The method of claim 8, wherein the displaying content in the content display interface further comprises:
responding to the triggering operation of the target reading triggering component, and displaying the initial reading area of the target reading object by reducing the playing area of the target video object, so that the playing area of the target video object and the initial reading area of the target reading object are adapted in the content display interface.
11. The method of claim 10, wherein the displaying content in the content display interface further comprises:
in the initial reading area, displaying a continuous reading trigger component of the target reading object;
and responding to the triggering operation of the continuous reading triggering component, and displaying the reading area of the target reading object by enlarging the initial reading area so that the reading area of the target reading object is adapted in the content display interface.
12. The method of claim 1, wherein said determining, responsive to the search term, a reading object and a video object that match the search term comprises:
in response to the search term, determining at least one reading object that matches the subject matter content of the search term from among a plurality of candidate reading objects obtained from a second resource, and determining at least one video object that matches the subject matter content of the search term from among a plurality of candidate video objects obtained from a first resource.
13. The method of claim 1, wherein the search term comprises any of a voice search term, a picture search term, a video frame search term.
14. An interaction and display control method comprising:
acquiring a search term input by a user;
determining, in response to the search term, at least one reading object and at least one video object that match the search term;
in the search result interface, the content of the video frame of the at least one video object is displayed, and the content introduction of the at least one reading object is displayed.
15. An interaction and display control method comprising:
monitoring browsing operation on a reading object in a browsing interface;
responding to the browsing operation, and determining at least one video object matched with the reading object;
displaying the video frame content of the at least one video object in the browsing interface.
16. An interaction and display control method comprising:
monitoring the browsing operation of a video object in a browsing interface;
responding to the browsing operation, and determining at least one reading object matched with the video object;
and displaying the content introduction of the at least one reading object in the browsing interface.
17. An interaction and display control method comprising:
acquiring a search term input by a user;
in response to the search term, determining a target reading object matching the search term and associated reading objects and associated video objects associated with the target reading object;
and displaying, in a search result interface, the target reading object, and displaying the associated reading objects and the associated video objects based on the target reading object.
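A data-shape sketch for claim 17: a target reading object plus the reading objects and video objects associated with it. The association fields and the mocked lookup are illustrative only; the claim does not define how associations are stored.

```typescript
// Assumed result shape for a target reading object and its associations.
interface TargetReadingResult {
  target: { id: string; title: string };
  associatedReadings: { id: string; title: string }[]; // e.g. same series or same author
  associatedVideos: { id: string; frameUrl: string }[]; // e.g. adaptations or reviews
}

async function searchTargetReading(term: string): Promise<TargetReadingResult> {
  // A real implementation would query a reading index first, then expand its associations.
  return {
    target: { id: "r-100", title: term },
    associatedReadings: [{ id: "r-101", title: `${term} (sequel)` }],
    associatedVideos: [{ id: "v-200", frameUrl: "adaptation-frame.jpg" }],
  };
}

async function renderSearchResult(term: string): Promise<void> {
  const result = await searchTargetReading(term);
  console.log(`target reading object: ${result.target.title}`);
  // Associated objects are displayed based on the target, e.g. grouped beneath it.
  result.associatedReadings.forEach(r => console.log(`  associated reading: ${r.title}`));
  result.associatedVideos.forEach(v => console.log(`  associated video frame: ${v.frameUrl}`));
}

void renderSearchResult("example title");
```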
18. An interaction and display control apparatus comprising:
an acquisition module configured to acquire a search term input by a user;
a determination module configured to determine, in response to the search term, a target reading object matching the search term and associated reading objects and associated video objects associated with the target reading object;
and a display module configured to display, in a search result interface, the target reading object, and to display the associated reading objects and the associated video objects based on the target reading object.
19. An interaction and display control apparatus comprising:
an acquisition module configured to acquire a search term input by a user;
a determination module configured to determine, in response to the search term, at least one reading object that matches the search term and at least one video object that corresponds to the at least one reading object;
and a display module configured to display, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
20. An interaction and display control apparatus comprising:
an acquisition module configured to acquire a search term input by a user;
a determination module configured to determine, in response to the search term, at least one reading object and at least one video object that match the search term;
and a display module configured to display, in a search result interface, video frame content of the at least one video object and a content introduction of the at least one reading object.
21. An interaction and display control apparatus comprising:
a monitoring module configured to monitor a browsing operation on a reading object in a browsing interface;
a determination module configured to determine, in response to the browsing operation, at least one video object that matches the reading object;
and a display module configured to display video frame content of the at least one video object in the browsing interface.
22. An interaction and display control apparatus comprising:
a monitoring module configured to monitor a browsing operation on a video object in a browsing interface;
a determination module configured to determine, in response to the browsing operation, at least one reading object that matches the video object;
and a display module configured to display a content introduction of the at least one reading object in the browsing interface.
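Apparatus claims 18 to 22 share the same acquisition/monitoring, determination and display module split; a structural sketch of that wiring follows, in which the interfaces, class name and mocked modules are assumptions of this illustration rather than disclosed components.

```typescript
// Illustrative module interfaces and wiring for the shared apparatus structure.
interface AcquisitionModule { acquireSearchTerm(): string; }
interface DeterminationModule { determine(searchTerm: string): { readings: string[]; videos: string[] }; }
interface DisplayModule { display(readings: string[], videos: string[]): void; }

class InteractionAndDisplayControlApparatus {
  constructor(
    private acquisition: AcquisitionModule,
    private determination: DeterminationModule,
    private displayModule: DisplayModule,
  ) {}

  run(): void {
    const term = this.acquisition.acquireSearchTerm();
    const { readings, videos } = this.determination.determine(term);
    this.displayModule.display(readings, videos);
  }
}

// Minimal wiring with mocked modules.
const apparatus = new InteractionAndDisplayControlApparatus(
  { acquireSearchTerm: () => "example search term" },
  { determine: term => ({ readings: [`intro for ${term}`], videos: [`frame for ${term}`] }) },
  { display: (readings, videos) => console.log(readings, videos) },
);
apparatus.run();
```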
23. An electronic device, the device comprising:
one or more processors;
a computer-readable medium configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the interaction and display control method of any one of claims 1 to 17.
24. A computer-readable medium on which a computer program is stored which, when executed by a processor, implements the interaction and display control method according to any one of claims 1 to 17.
CN202010130752.5A 2020-02-28 2020-02-28 Interaction and display control method, device, electronic equipment and computer storage medium Pending CN113326396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010130752.5A CN113326396A (en) 2020-02-28 2020-02-28 Interaction and display control method, device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010130752.5A CN113326396A (en) 2020-02-28 2020-02-28 Interaction and display control method, device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN113326396A true CN113326396A (en) 2021-08-31

Family

ID=77412845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130752.5A Pending CN113326396A (en) 2020-02-28 2020-02-28 Interaction and display control method, device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113326396A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114168250A (en) * 2021-12-30 2022-03-11 北京字跳网络技术有限公司 Page display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination