CN113420247A - Page display method and device, electronic equipment, storage medium and program product - Google Patents


Info

Publication number
CN113420247A
Authority
CN
China
Prior art keywords
video
target object
media
page
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110700304.9A
Other languages
Chinese (zh)
Inventor
吴远一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority application: CN202110700304.9A
Publication: CN113420247A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations

Abstract

A page display method, an apparatus, an electronic device, a storage medium, and a program product are provided. The page display method includes: in response to a first predetermined operation on a control for a target object, acquiring M media resources related to the target object, where each media resource includes at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set includes at least one video, the videos in the video set are associated with a tag of the target object, and the control for the target object includes the tag of the target object; and displaying a detail page about the target object, where the detail page includes an image display area, and displaying the detail page about the target object includes: displaying at least a partial region of each of the M media resources in the image display area, M being an integer greater than 0. The page display method can help the user quickly learn about information of a related object, and because the displayed resources are presented from the perspective of user experience, they can well sustain the user's interest in the related object.

Description

Page display method and device, electronic equipment, storage medium and program product
Technical Field
Embodiments of the present disclosure relate to a page display method and apparatus, an electronic device, a storage medium, and a program product.
Background
With the continuous development of internet technology, the content and form of web pages have become richer, and the user experience keeps improving. For example, an anchor point for a related object may be added to user-published content; for instance, an anchor point for a point of interest (POI), such as a store associated with the current content, may be added, so that when the published content is consumed, a detail page of the POI (e.g., the store) can be displayed by clicking the anchor point, allowing the user to learn about the POI quickly and conveniently.
Disclosure of Invention
At least one embodiment of the present disclosure provides a page display method, an apparatus, an electronic device, a storage medium, and a program product, which help a user quickly learn about information of a related object, sustain the user's interest in that object, and improve user experience.
At least one embodiment of the present disclosure provides a page display method, including: in response to a first predetermined operation on a control for a target object, acquiring M media resources related to the target object, where each media resource includes at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set includes at least one video, and the videos in the video set are associated with a tag of the target object, the control for the target object including the tag of the target object; and displaying a detail page about the target object, where the detail page includes an image display area, and displaying the detail page about the target object includes: displaying at least a partial region of each of the M media resources in the image display area, M being an integer greater than 0.
At least one embodiment of the present disclosure further provides a page displaying apparatus, including: a response unit configured to respond to a first predetermined operation of a control for a target object, and acquire M media resources related to the target object, wherein each media resource comprises at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set comprises at least one video, and the videos in the video set are associated with a label of the target object, wherein the control for the target object comprises the label of the target object; the display unit is configured to display a detail page related to the target object, wherein the detail page comprises an image display area, the display unit is further configured to display at least partial areas of the M media resources in the image display area, and M is an integer greater than 0.
At least one embodiment of the present disclosure also provides an electronic device including: a display device configured to present a page; a processor; a memory including one or more computer program modules; wherein one or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the page presentation method according to any of the embodiments of the present disclosure.
At least one embodiment of the present disclosure also provides a storage medium for storing non-transitory computer readable instructions, which can implement the page display method according to any embodiment of the present disclosure when the non-transitory computer readable instructions are executed by a computer.
At least one embodiment of the present disclosure also provides a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the page presentation method of any of the embodiments of the present disclosure.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Like reference symbols in the various drawings indicate like elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a page display method according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of an initial page provided by some embodiments of the present disclosure;
FIG. 3 is a schematic diagram of a details page provided by some embodiments of the present disclosure;
fig. 4 is a schematic diagram of obtaining a video clip according to some embodiments of the present disclosure;
FIG. 5 is a schematic illustration of an increase in image display area provided by some embodiments of the present disclosure;
FIG. 6 is a schematic diagram of a progress bar component of a details page provided by some embodiments of the present disclosure;
FIG. 7 is a schematic flow chart diagram illustrating a display details page provided by some embodiments of the present disclosure;
FIGS. 8A and 8B are schematic diagrams of association information for a details page provided by some embodiments of the present disclosure;
FIG. 9 is a schematic illustration of another associated information of a details page provided by some embodiments of the present disclosure;
fig. 10 is a flowchart illustrating a page request response method according to some embodiments of the disclosure;
FIG. 11 is a system that may be used to implement the page display method provided by embodiments of the present disclosure;
FIG. 12 is a schematic block diagram of a page displaying apparatus provided in some embodiments of the present disclosure;
fig. 13 is a schematic block diagram of an electronic device provided by some embodiments of the present disclosure;
fig. 14 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure; and
fig. 15 is a schematic diagram of a storage medium according to some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in the present disclosure are illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise. "Plurality" is to be understood as two or more.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
A detail page of an object such as a POI mainly presents structured information such as text, and the content displayed on the detail page is usually official publicity material rather than content presented from the perspective of user experience. As a result, the experience is poor, and the detail page cannot well sustain the user's interest in the object (e.g., the POI).
At least one embodiment of the present disclosure provides a page display method, an apparatus, an electronic device, a storage medium, and a program product, which can help a user quickly learn about information of a related object; because the displayed resources are presented from the perspective of user experience, the user's interest in the related object can be well sustained, improving the user's experience and perception.
At least one embodiment of the present disclosure provides a page display method, including: in response to a first predetermined operation on a control for a target object, acquiring M media resources related to the target object, where each media resource includes at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set includes at least one video, the videos in the video set are associated with a tag of the target object, and the control for the target object includes the tag of the target object; and displaying a detail page about the target object, where the detail page includes an image display area, and displaying the detail page about the target object includes: displaying at least a partial region of each of the M media resources in the image display area, M being an integer greater than 0.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a page display method according to some embodiments of the present disclosure. As shown in FIG. 1, in at least one embodiment, the method includes steps S110 and S120 as follows.
Step S110: in response to a first predetermined operation of a control for a target object, M media resources related to the target object are obtained, each media resource comprises at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set comprises at least one video, the videos in the video set are associated with a label of the target object, the control for the target object comprises the label of the target object, and M is an integer greater than 0.
Step S120: displaying a detail page related to the target object, where the detail page includes an image display area. For example, displaying the detail page about the target object includes: displaying at least a partial region of the M media resources in the image display area.
For example, the page display method may be performed by a terminal device (or a user terminal, a client) such as a mobile phone, a tablet computer, and the like, where the terminal device may include, for example, a display device, a processor, a data transceiver, and the like, and the terminal device may transmit data and messages with a server and/or a database through a network.
For example, in step S110, the target object may be a Point of interest (POI), and the POI may be a location, such as a store, an attraction, a mall, a cell, a building, a bus stop, a school, and so on. As another example, the target object may be an article, and may be, for example, a food, a daily necessity, an office supply, furniture, a toy, or an appliance. In some embodiments below, the target object is taken as a shop as an example for explanation.
For example, the control for the target object may be an anchor point, a link, a tag, a virtual button, or the like related to the target object. A predetermined operation such as a click may be performed on the control; when such an operation is performed, execution of corresponding instructions may be triggered, for example, steps S110 and S120 of the page display method of the embodiments of the present disclosure.
For example, the first predetermined operation may be a click operation. In some other embodiments, the first predetermined operation may also be another operation such as a double click or a drag, and may be determined according to actual needs; this is not limited in the present disclosure.
For example, the control for the target object may be displayed in an initial page. Fig. 2 is a schematic diagram of an initial page provided by some embodiments of the present disclosure. As shown in Fig. 2, the initial page may be, for example, a short-video page 201, and the short video may be, for example, a video about a store S. The short video may be associated with a store tag 202; for example, the tag 202 may be attached to the short video and displayed while the video plays, and the store tag 202 may serve as a control that can be clicked or otherwise operated. The tag 202 is, for example, an anchor point; when the tag 202 is clicked or otherwise operated, the terminal device may, in response to the predetermined operation, acquire M media resources related to the store S and display a detail page about the store S, where M is an integer greater than 0, for example a number between 1 and 10. In some embodiments, the initial page may also be a page other than a short-video page; any page that displays the control for the target object may serve as the initial page, which is not limited in the present disclosure.
For example, the terminal device may acquire the M media resources from a server: in response to the first predetermined operation, the terminal device may send a detail-page request to the server, and the server may return the M media resources for the target object in response to that request. Each media resource may include at least one frame of image, and a media resource may be, for example, a video, a dynamic image, or a picture.
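The request/response exchange described above can be sketched roughly as follows. This is an illustrative sketch only; all names here (MediaResource, DetailPageRequest, fetch_media_resources, on_control_clicked) are assumptions, not identifiers from the disclosure, and the server is stood in for by a simple lookup table.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class MediaResource:
    resource_id: str
    frames: int  # each media resource contains at least one frame of image


@dataclass
class DetailPageRequest:
    target_object_id: str  # e.g. the object behind the store tag 202


def fetch_media_resources(request: DetailPageRequest,
                          server_index: Dict[str, List[MediaResource]]) -> List[MediaResource]:
    """Server side: return the M media resources associated with the target object."""
    return server_index.get(request.target_object_id, [])


def on_control_clicked(target_object_id: str,
                       server_index: Dict[str, List[MediaResource]]) -> List[MediaResource]:
    """Client side: step S110, triggered by the first predetermined operation."""
    # Send a detail-page request and receive the M media resources.
    resources = fetch_media_resources(DetailPageRequest(target_object_id), server_index)
    # Step S120 would then render the detail page with its image display area.
    return resources
```

An unknown target object simply yields an empty resource list here; a real client would presumably fall back to a text-only detail page in that case.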
For example, the server may collect in advance a plurality of videos that platform users have published on the short-video platform and that are associated with the tag of the target object, to form a video set for the target object. A video carrying the tag of the target object may be called a check-in video of the target object; for example, a user shoots a video about the target object after experiencing it and manually adds the tag of the target object when uploading the video. In another example, the video may be tagged automatically, with the user's authorization, rather than manually: after the user experiences the target object and shoots and uploads a related video, the server analyzes the video content, and if the video is determined to be related to the target object, the tag of the target object may be attached to it automatically; alternatively, the tag added to the video may be located automatically according to the user's active trigger. As described above, the tag of the target object is one of the controls for the target object, and when the first predetermined operation is performed on the tag, steps S110 and S120 of the page display method of the embodiments of the present disclosure may be triggered.
For example, the M media resources may be generated from the videos in the video set. As for the manner of generation: in one example, M videos may be selected from the video set as the M media resources; in another example, a plurality of videos may be selected from the video set and then combined to form M videos; in yet another example, a plurality of videos may be selected from the video set, and M dynamic graphs may then be cut from them, and so on.
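The three generation manners above can be sketched as follows. The function names and the tuple representations of combined videos and dynamic graphs are illustrative assumptions; videos are represented simply by their IDs.

```python
def select_videos(video_set, m):
    """Manner 1: pick M videos from the set as the M media resources."""
    return video_set[:m]


def combine_videos(video_set, m):
    """Manner 2: distribute the selected videos into M groups and splice each
    group into one combined video (a group is represented here as a tuple)."""
    groups = [video_set[i::m] for i in range(m)]
    return [tuple(g) for g in groups if g]


def cut_dynamic_graphs(video_set, m, graph_len_s=3.0):
    """Manner 3: cut M dynamic graphs (animated images) from the videos; a
    dynamic graph is represented here as (video_id, start_s, end_s)."""
    return [(v, 0.0, graph_len_s) for v in video_set[:m]]
```

Any of the three functions yields M (or fewer, if the set is small) items that can then play the role of media resources in the image display area.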
For example, for step S120, the detail page for the target object may be displayed in response to the first predetermined operation on the control for the target object; that is, the display device of the terminal apparatus may be controlled to display the detail page about the target object. The acquired M media resources can be displayed in one region of the detail page (the image display area).
Fig. 3 is a schematic diagram of a detail page according to some embodiments of the present disclosure. As shown in Fig. 3, the detail page includes an image display area 301 in which media resources may be displayed; for example, the image display area 301 may display one media resource at a time, that is, the M media resources are displayed in sequence, and when one media resource finishes, the area switches to the next. The M media resources may have the same size; for example, the size of the image display area 301 may be smaller than or equal to the size of any media resource. If the image display area is smaller than the media resource, a partial region of the media resource is displayed in the area; if they are equal in size, the entire media resource is displayed.
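The sizing and sequencing rules above can be captured in a small sketch. The helper names are hypothetical; sizes are (width, height) tuples in pixels.

```python
def visible_region(asset_size, area_size):
    """Return the (width, height) of the media-resource region shown in the
    image display area: the whole resource when the area is at least as large
    in both dimensions, otherwise a partial region clipped to the area."""
    return (min(asset_size[0], area_size[0]),
            min(asset_size[1], area_size[1]))


def next_asset_index(current, m):
    """When one media resource finishes playing, switch to the next one,
    wrapping back to the first after the M-th."""
    return (current + 1) % m
```

For a 1080x1920 resource in a 1080x1080 area only a partial region is shown, matching the "smaller than" case in the text.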
For example, the M media resources may be displayed dynamically in the detail page; for example, they begin to play as the detail page is opened, and in the case where a media resource is a video clip, the video may start playing once the detail page opens. The M media resources may be displayed silently, that is, without outputting their audio.
According to the above page display method, related media resources are displayed in the detail page of the target object, and the media resources come from videos published by users and carrying the tag of the target object. On the one hand, the media resources are generally highlight segments of check-in videos, which helps the user quickly learn high-quality information about the target object and attracts the user's attention better than a plain text display. On the other hand, because the media resources are derived from videos published by platform users and associated with the tag of the target object, they start from the perspective of user experience and perception; they can arouse the user's interest, carry the user's interest in the target object over from the initial page to the detail page, and improve the user experience.
For example, as shown in Fig. 3, the detail page may further include an information display area 302, in which introduction information about the target object may be displayed. Part of the introduction information may be presented as text. It should be understood that the information displayed in the detail page is information publicly uploaded by users or, for example, publicly uploaded by the store, or may be obtained by summarizing and analyzing such information; optionally, it may also be obtained with authorization from, for example, the store. As an example, the introduction information may include at least one of the name, address, telephone number, consumption level, ranking information, and business information of the target object. In addition, if the target object provides a transaction service, for example has sellable goods, the introduction information may further include a plurality of goods transaction cards, each including information such as a picture, price, name, and sales volume of the goods; when a goods transaction card is clicked, the user may jump to the goods detail page of the corresponding goods to purchase them.
For example, the image display area 301 is near the top of the detail page relative to the information display area 302. The page has a top and a bottom; the top of the page is, for example, the upper portion of the pages shown in Fig. 2 and Fig. 3. "Top" here is relative to the direction of operation: for example, a "slide up" operation on the page refers to sliding from the bottom of the page toward the top, and a "pull down" operation refers to sliding from the top of the page toward the bottom. In some examples, the detail page may be larger than the screen, and the portions not currently shown can be viewed by sliding the page. For example, in some examples, a "back to top" anchor may also be set at the bottom of the page, and clicking it jumps to the top of the page.
For example, because the image display area is near the top of the detail page relative to the information display area, the media resources are located above the introduction information; that is, after opening the detail page, the user first views the media resources and then scrolls down to view the introduction information. In some examples, the image display area may be the top area of the detail page, with its top edge aligned with the top of the page, i.e., the media resources are displayed at the head of the page. With this arrangement, the user is first attracted by the media resources after opening the detail page; the media resources strengthen the user's perception, helping the user quickly learn the characteristics of the target object and further arousing the user's interest.
For example, each media asset may include at least one video clip and/or at least one dynamic graph.
For example, the video clips are video clips which are selected from the video set by using the content understanding analysis model and meet a predetermined condition, and/or the dynamic graphs are dynamic graphs which are selected from the video set by using the content understanding analysis model and meet the predetermined condition.
For example, the video clip may be a premium video clip (also referred to as a highlight clip) extracted from the video set, and the dynamic graph may be a dynamic graph generated from the premium video clip extracted from the video set. The following description will take an example of a method of extracting a high-quality video clip using a content understanding analysis model.
The predetermined condition for the premium video segments may include, for example, at least one of the following conditions:
(1) the content of the video clip meets the predetermined content requirement;
(2) the image quality of the video clip meets the preset image quality requirement.
For example, in the case where the target object is a store, the video clip may be required to include products of the store and to exclude text, portraits, and the like, and its image quality may be required to be higher than a predetermined level so that the content of the clip is clear. In another example, the predetermined condition may also include requirements on the shooting angle, color tone, and the like of the video. In addition, the video clips may be selected in combination with information such as the numbers of likes, comments, and views of the video.
For example, a content understanding analysis model may be pre-trained with sample data. In one example, some video samples may be screened from a video database and high-quality video segment samples may be extracted from them; the content understanding analysis model may then be trained using the video samples and the high-quality segment samples. The content understanding analysis model may be, for example, a neural network model, another type of model, or a combination of multiple types of models. After training is completed, when the model is used to select high-quality video segments, its input may be, for example, the video set corresponding to the target object, and its output may be, for example, the high-quality video segments.
In another example, some video segment samples may be screened from the video database, and annotation information may be attached to each segment, such as a classification of the segment (e.g., satisfying or not satisfying a predetermined condition) or a score of the segment, where the score characterizes how well the segment satisfies the predetermined condition: the higher the score, the better the segment satisfies the condition and the higher its quality. The content understanding analysis model can then be trained using the annotated video segment samples, and it may be a neural network model, another type of model, or a combination of multiple types of models. After training is completed, when the model is used to select high-quality video clips, its input may be, for example, a video clip carrying the tag of the target object, and its output may be, for example, a classification of the clip into the two categories of satisfying or not satisfying the predetermined condition; in this case, clips classified as satisfying the condition may be selected as high-quality clips. In another example, the model output may be, for example, a score of the video segment, in which case the several segments with the highest scores may be selected as the high-quality segments.
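The two output modes described above (a pass/fail classification, or a per-segment score) lead to two simple selection rules. In this sketch the trained model is stubbed by caller-supplied `classify` and `score` functions; the function names are illustrative assumptions, not part of the disclosure.

```python
def select_premium_by_class(segments, classify):
    """Keep the segments the model classifies as satisfying the predetermined
    condition (classify returns True or False for each segment)."""
    return [s for s in segments if classify(s)]


def select_premium_by_score(segments, score, k):
    """Keep the k segments with the highest model scores."""
    return sorted(segments, key=score, reverse=True)[:k]
```

Either rule yields the "premium" (highlight) clips from which the M media resources are then built.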
For example, in some embodiments, the content understanding analysis model may continually optimize updates with ever increasing sample data.
For example, in some embodiments, the M media resources may include N video segments, each of which is a partial segment of a target video selected from the video set, N being an integer greater than 0. Displaying at least a partial region of the M media resources in the image display area then includes: displaying, in the image display area, at least a partial region of the N video segments. In this manner, video clips from users' experience videos are displayed in the detail page, and the form of dynamic video can deepen the impression and give the user an immersive feeling.
For example, several target videos may be selected from the video set, and one or more video clips may be intercepted from each target video, where the duration of each video clip may be, for example, between 2 and 5 seconds. Fig. 4 is a schematic diagram of obtaining video segments according to some embodiments of the present disclosure, as shown in fig. 4, for example, target videos V1, V2, and V3 are selected from the video collection 400, one video segment is cut from each target video, for example, one video segment c1 is cut from the target video V1, one video segment c2 is cut from the target video V2, and one video segment c3 is cut from the target video V3. In another example, multiple video clips may be cut from one target video, for example, video clip c1 and video clip c4 may be cut from target video V1.
In one example, N may be equal to M, i.e., each media asset may be one clipped video segment; for example, the clipped video segments c1, c2, c3, and c4 may each be one media asset, in which case video segments c1, c2, c3, and c4 may be sequentially presented in the image presentation area.
In another example, N may be greater than M, one or more video segments may be combined into one media asset, e.g., video segments c1 and c2 may be stitched and combined to form one combined segment, video segments c3 and c4 may be stitched and combined to form another combined segment, each of which may be respectively one media asset. In this case, several combined clips, for example, a combined clip formed of the video clips c1 and c2 and a combined clip formed of the video clips c3 and c4, may be sequentially presented in the image presentation area.
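The N ≥ M grouping just described (stitching several consecutive clips into one combined segment) can be sketched as follows; the function and variable names are illustrative only.

```python
def group_into_media_resources(clips, per_resource):
    """Combine consecutive video clips into media resources.

    With per_resource == 1 this is the N == M case (each clip is one
    media resource); with per_resource > 1 it is the N > M case where
    several clips are stitched into one combined segment.
    """
    return [clips[i:i + per_resource]
            for i in range(0, len(clips), per_resource)]

clips = ["c1", "c2", "c3", "c4"]
print(group_into_media_resources(clips, 2))  # [['c1', 'c2'], ['c3', 'c4']]
print(len(group_into_media_resources(clips, 1)))  # 4
```

The actual stitching (concatenating video frames) would happen downstream; this only models how clips map onto the M displayed media resources.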
For example, in some embodiments, the M media assets may include P dynamic graphs, each dynamic graph generated from a target video selected from the video set, P being an integer greater than 0. Presenting at least a partial area of the M media assets in the image presentation area includes: presenting, in the image display area, at least partial areas of the P dynamic graphs. In this manner, dynamic images derived from users' experience videos are displayed in the detail page, and the dynamic-image form can deepen the user's impression and give the user an immersive experience.
For example, several target videos may be selected from the video set, one or more video clips may be cut from each target video, and each video clip may be converted into a dynamic image in a format such as GIF (Graphics Interchange Format). For the manner of acquiring the video clips, see fig. 4 and the related description above.
In one example, P may be equal to M, i.e., each media asset may be a dynamic map, for example, the truncated video segments c1, c2, c3 and c4 are converted into dynamic maps g1, g2, g3 and g4, respectively, and the dynamic maps g1, g2, g3 and g4 may be regarded as a media asset, respectively, in which case the dynamic maps g1, g2, g3 and g4 may be sequentially presented in the image presentation area.
In another example, P may be greater than M, and one or more dynamic graphs may be combined into one media asset, for example, dynamic graphs g1 and g2 may be combined together to form one combined dynamic graph, and dynamic graphs g3 and g4 may be combined together to form another combined dynamic graph, each of which may be a media asset, respectively. In this case, several combined motion pictures may be sequentially displayed in the image display area, for example, a combined motion picture formed by the motion pictures g1 and g2 and a combined motion picture formed by the motion pictures g3 and g4 are sequentially displayed.
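Converting a clipped segment into a GIF-format dynamic image could be done with a tool such as ffmpeg. The sketch below only builds the command line; the paths, timing values, and helper name are illustrative, and it assumes ffmpeg's standard `-ss`/`-t`/`-vf` options.

```python
def build_gif_command(src, start_s, duration_s, dst, fps=10, width=480):
    """Command to cut a segment from `src` and convert it to a GIF."""
    return [
        "ffmpeg",
        "-ss", str(start_s),    # segment start within the target video
        "-t", str(duration_s),  # segment duration (e.g. 2-5 seconds)
        "-i", src,
        "-vf", f"fps={fps},scale={width}:-1",  # reduce frame rate and size
        dst,
    ]

cmd = build_gif_command("V1.mp4", 12.0, 3.0, "g1.gif")
print(cmd[0], cmd[-1])  # ffmpeg g1.gif
```

In practice the command would be executed with `subprocess.run(cmd, check=True)` on a machine where ffmpeg is installed.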
For example, in some embodiments, the M media assets may include Q video shots, each video shot being a picture taken from a target video selected from a video collection, Q being an integer greater than 0. Presenting at least a partial area of the M media assets in the image presentation area comprises: in the image presentation area, at least a partial area of the Q video screenshots is presented, each media asset being presented for a predetermined length of time.
For example, several target videos may be selected from a video collection, such as one or more video shots taken from each target video. For example, target videos V1, V2, and V3 are selected from the video collection 400, and one video shot may be taken from each target video, such as one video shot p1 from the target video V1, one video shot p2 from the target video V2, and one video shot p3 from the target video V3. In another example, multiple video shots may be taken from one target video, for example, video shot p1 and video shot p4 may be taken from target video V1.
In one example, Q may be equal to M, i.e. each media asset may be one video shot, for example, the intercepted video shots p1, p2, p3 and p4 may be respectively one media asset, in which case the video shots p1, p2, p3 and p4 may be presented in sequence in the image presentation area, for example, each video shot may be presented for a predetermined length of time (e.g. 0.1-2 seconds), followed by switching to the next video shot.
In another example, Q may be greater than M, and one or more video screenshots may be combined into one media asset; e.g., video screenshots p1 and p2 may be stitched together to form one mosaic, and video screenshots p3 and p4 may be stitched together to form another mosaic, each mosaic being one media asset. In this case, several mosaics may be sequentially displayed in the image display area, for example, the mosaic formed by video screenshots p1 and p2 and the mosaic formed by video screenshots p3 and p4. For example, each mosaic may be presented for a predetermined length of time (e.g., 0.1-2 seconds) and then switched to the next mosaic.
For example, in some embodiments, the M media assets may include two or three of a video clip, a dynamic graph, and a video screenshot, e.g., a portion of the M media assets are video clips and another portion are video screenshots.
For example, for step S120, presenting at least a partial area of the M media assets in the image presentation area may include: sequentially displaying at least partial areas of the M media resources in the image display area; after the last media resource in the M media resources is displayed, the M media resources are displayed again from the first media resource in the M media resources, or after the last media resource in the M media resources is displayed, a predetermined interface is continuously displayed in the image display area until a predetermined operation event is triggered.
For example, in one embodiment, the M media assets can be played back in a loop in the image presentation area. In one example, the M media assets are, for example, M video clips, and may be played from the first video clip; after all image frames of the first video clip are played, the display switches to the second video clip, and so on, until the last of the M video clips is played, after which playback may start again in sequence from the first video clip. In another example, the M media assets are, for example, M video screenshots, and may be presented starting from the first video screenshot; after the first video screenshot is presented for a predetermined length of time (e.g., 1 second), the display switches to the second video screenshot, and so on, until the last of the M video screenshots has been presented for the predetermined length of time (e.g., 1 second), after which the M video screenshots may be presented again in sequence starting from the first video screenshot.
For example, in another embodiment, after the M media assets are sequentially displayed, the display may freeze on a predetermined interface until a predetermined operation event is triggered, where the predetermined operation event is an event that switches from the frozen predetermined interface back to displaying one of the M media assets.
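The two end-of-sequence behaviours described above (loop back to the first media resource, or freeze on a predetermined interface) can be sketched as a small state step; the names are illustrative.

```python
def next_media_index(current, total, loop=True):
    """Index of the media resource shown after `current` finishes.

    Returns None to mean "freeze on the predetermined interface" in the
    non-looping embodiment.
    """
    if current + 1 < total:
        return current + 1
    return 0 if loop else None

print(next_media_index(2, 3))              # 0 (loop back to the first)
print(next_media_index(2, 3, loop=False))  # None (freeze until an event)
```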
For example, the page display method may further include: in response to a second predetermined operation with respect to the image presentation area, switching from a first media asset to a second media asset of the M media assets in the image presentation area, the second media asset being a subsequent media asset of the first media asset in the ranking of the M media assets.
For example, the second predetermined operation may be a left-slide operation, i.e., a slide in a direction from the right side to the left side of the page. The first media resource is, for example, the currently displayed media resource; if a left-slide operation is performed on the image display area, the display may switch from the first media resource to the second media resource that follows it. If the first media resource is the last of the M media resources, performing the second predetermined operation on the image display area may switch the display to the first of the M media resources.
For example, the page display method may further include: in response to a third predetermined operation with respect to the image presentation area, switching from a first media asset to a third media asset of the M media assets in the image presentation area, the third media asset being a previous media asset of the first media asset in the ranking of the M media assets.
For example, the third predetermined operation may be a right-slide operation, i.e., a slide in a direction from the left side to the right side of the page. The first media resource is, for example, the currently displayed media resource; if a right-slide operation is performed on the image display area, the display may switch from the first media resource to the third media resource that precedes it. If the first media resource is the first of the M media resources, performing the third predetermined operation on the image display area may switch the display to the last of the M media resources. Based on the above, the user can manually switch to a media resource of more interest.
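The left-swipe/right-swipe switching with wraparound at both ends, as described above, reduces to modular index arithmetic; the function name and direction strings are illustrative.

```python
def on_swipe(current, total, direction):
    """Media-resource switching for left (next) and right (previous)
    swipes, wrapping around at both ends."""
    if direction == "left":   # second predetermined operation
        return (current + 1) % total
    if direction == "right":  # third predetermined operation
        return (current - 1) % total
    return current            # unrecognized gesture: no change

print(on_swipe(3, 4, "left"))   # 0: last -> first
print(on_swipe(0, 4, "right"))  # 3: first -> last
```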
For example, the page display method may further include: in the case where a partial area of the M media assets is presented in the image presentation area, a length of the image presentation area in the first direction is increased to present a full area of the M media assets in the image presentation area in response to a fifth predetermined operation for the details page. The first direction is a direction perpendicular to the top edge of the detail page.
Fig. 5 is a schematic diagram of increasing the image display area provided by some embodiments of the present disclosure. As shown in fig. 5, the first direction is, for example, the up-down (vertical) direction shown in fig. 5. After the detail page is opened, the vertical display length of the image display area 501 is smaller than the vertical length of the media asset; for example, the ratio of the vertical display length to the lateral width of the image display area 501 is 3:4, the image display area 501 is in a collapsed state, and only a partial area of the media asset can be displayed. The fifth predetermined operation may be, for example, a pull-down operation. When the pull-down operation is performed on the detail page, the vertical display length of the image display area 501 may be increased, yielding an enlarged image display area 501'; the ratio of the vertical display length to the horizontal width of the image display area 501' is, for example, 4:3, the image display area is in an expanded state and can display the entire area of the media resource, so that the user can view the complete media resource. Accordingly, the vertical display length of the information display area 502 is shortened, and the introduction information displayed in the shortened information display area 502' is reduced.
For example, the page display method may further include: in a case where all of the areas of the M media assets are presented in the image presentation area, in response to a sixth predetermined operation for the details page, a length of the image presentation area in the first direction is reduced to present a partial area of the M media assets in the image presentation area.
For example, the sixth predetermined operation may be a slide-up operation. When the entire area of the media asset is displayed in the image display area 501' and the slide-up operation is performed on the detail page, the vertical display length of the image display area 501' can be reduced, switching the image display area from the expanded state back to the collapsed state and yielding the reduced image display area 501, in which only a partial area of the media asset is displayed. Accordingly, the vertical display length of the information display area 502' is increased, and the introduction information displayed in the enlarged information display area 502 is increased, so that the user can view more introduction information.
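The collapsed/expanded geometry in the example (height:width of 3:4 when collapsed, 4:3 when expanded) can be expressed directly; the width value below is illustrative.

```python
def image_area_height(width, expanded):
    """Vertical display length of the image display area for a given
    lateral width, using the 3:4 / 4:3 ratios from the example."""
    return width * 4 / 3 if expanded else width * 3 / 4

print(image_area_height(300, expanded=False))  # 225.0 (collapsed)
print(image_area_height(300, expanded=True))   # 400.0 (expanded)
```

The information display area would then take the remaining vertical space of the page, which is why its length shrinks as the image area grows and vice versa.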
For example, the page display method may further include: in response to a seventh predetermined operation on the other areas of the detail page except the image display area, the length of the image display area in the first direction is reduced, and the length of the information display area in the first direction is increased to display other introduction information except the currently displayed introduction information in the information display area.
For example, the seventh predetermined operation may be a slide-up operation, the slide-up operation is performed on the information display area 502, the page may be slid up, the image display area 501 may be slid away from the top of the page, and the vertical display area of the information display area 502 is increased to slide out more introductory information from the bottom of the page.
For example, for step S120, presenting a details page about the target object may further include: and in the image display area, displaying M progress bar components respectively corresponding to the M media resources, wherein each progress bar component is configured to display the display progress of the corresponding media resource.
Fig. 6 is a schematic diagram of progress bar components of the detail page provided by some embodiments of the present disclosure. As shown in fig. 6, at least one progress bar component 603 may be displayed in the image display area; the number of progress bar components 603 may match the number of media assets, and each progress bar component 603 represents the playing progress of a corresponding media asset. For example, if the M media assets include video segments c1, c2, and c3, the progress bar components 603 corresponding to video segments c1, c2, and c3 may be displayed side by side in the image display area. Each progress bar component 603 may be configured to display two colors: the proportion of the unplayed portion may be characterized by the length of a first color, and the proportion of the played portion may be characterized by the length of a second color. Taking video segment c1 as an example, as its playing time increases, the length of the first color of the corresponding progress bar component decreases and the length of the second color increases, until, when video segment c1 has finished playing, the progress bar component 603 displays only the second color. In this manner, different media assets can be distinguished by the progress bar components, and the user can be made aware of which media asset is currently playing.
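The two-color split of a progress bar component is a direct function of playback progress; the sketch below assumes a fixed bar length in pixels (an illustrative value).

```python
def progress_bar_lengths(elapsed_s, duration_s, bar_px=100):
    """Split the bar into the played (second colour) and unplayed
    (first colour) lengths, in proportion to playback progress."""
    frac = min(max(elapsed_s / duration_s, 0.0), 1.0)  # clamp to [0, 1]
    played = round(bar_px * frac)
    return played, bar_px - played

print(progress_bar_lengths(2.5, 5.0))  # (50, 50)
print(progress_bar_lengths(5.0, 5.0))  # (100, 0): only the second colour remains
```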
For example, in one example, progress bar component 603 may be presented in superimposition with the media asset, i.e., the presentation position of progress bar component 603 is located in the presentation area of the media asset, and progress bar component 603 is superimposed on the media asset, covering a partial area of the media asset. In another example, the progress bar component 603 may be located on one side of the media asset, such as on the top, bottom, left, or right side of the media asset.
Fig. 7 is a schematic flowchart of displaying a detail page according to some embodiments of the present disclosure, and as shown in fig. 7, for step S120, the displaying of the detail page about the target object may include, in addition to step S21 (in the image display area, at least partial area of the M media assets is displayed), step S22 (in the information display area, introduction information about the target object is displayed), and step S23 (in the image display area, M progress bar components corresponding to the M media assets, respectively, are displayed), for example, step S24 may also be included: and displaying the associated information related to the target object in the image display area. The association information may be presented in superposition with the media asset or may be presented to one side of the media asset. The related information may include, for example, text, pictures, and the like.
For example, in the image display area, a plurality of pieces of associated information are sequentially displayed in a dynamic manner.
For example, the plurality of associated information may include E pieces of text information. The E pieces of text information can be displayed in an overlapping manner with the M media resources, or the display positions of the E pieces of text information are positioned on one side of the display area of each M media resource, and E is an integer larger than 0.
Fig. 8A and 8B are schematic diagrams of the associated information of the detail page provided by some embodiments of the present disclosure; fig. 8A shows the case where the image display area 801 displays the entire area of the media asset, and fig. 8B shows the case where the image display area 801 displays a partial area of the media asset. As shown in fig. 8A and 8B, the text information 804 may be, for example, rating information, and may be scroll-presented in the form of a bullet screen. The text information 804 may include, for example, statements such as "xxx recommends this place". Further, in the case where the target object is a POI, the rating information may include rating statements about related content such as the POI's goods quality, price, location, and decoration. In the case where the target object is an article, the rating information may include rating statements about related content such as the article's price, utility, and appearance. In this manner, the text information displayed in the image display area can be combined with the media resources, making the displayed content richer and helping the user quickly understand the target object.
For example, in one example, the text information 804 may be displayed in superposition with the media asset, i.e., the display position of the text information 804 is located in the display area of the media asset, and the text information 804 is superimposed on the media asset, covering a partial area of the media asset. In another example, the textual information 804 may be located on one side of the media asset, such as on the top, bottom, left, or right side of the media asset.
For example, the E pieces of text information include at least one piece of comment information on the videos in the video set and/or at least one piece of rating information on the target object associated with the videos in the video set.
For example, the E pieces of text information may be selected from comment information on the videos in the video set and/or rating information on the target object.
For example, each video in a video collection may have a video comment entry through which a video comment page may be entered, in which comment information such as text or pictures may be entered and published.
For example, the platform may be further provided with a rating function for the target object, for example, a video in the video set, that is, a video associated with a tag of the target object, may show a rating entry of the target object to a publisher during or after the publication, through which a rating page of the target object may be entered, and rating information such as text or pictures may be input into the rating page of the target object and published.
For example, for a target object, comment information of a video in a video set and rating information about the target object may be collected into one text set, and some good comment information or rating information (also referred to as highlight comments) may be selected from the text set as the above-mentioned E pieces of text information.
For example, the E pieces of text information are selected from the text set by using a pre-trained text analysis model, and satisfy a predetermined text condition. The text set includes comment information about the videos in the video set and rating information about the target object. For example, the predetermined text condition may include requirements on the content of the text information itself, such as requirements on the word count, the wording, and the like; for example, the predetermined text condition may include: (1) the word count does not exceed a preset number of words, and (2) the wording contains specific types of expressions. The predetermined text condition can be specifically determined according to actual requirements.
For example, in some embodiments, the predetermined text condition may further include a requirement for a publication operation of a publication time, a publication user, and the like of the text information, and may include, for example, that the publication time is later than a predetermined time point, and the like.
For example, the text analysis model may be obtained by pre-training on sample data. For example, some comment information samples about videos and some rating information samples about certain objects (for example, shops) may be obtained, and these comment information samples and rating information samples serve as sample information. Verification information is appended to each piece of sample information; the verification information may be, for example, a classification of the sample information, or a score of the sample information, where the score may represent the degree to which the sample information satisfies the predetermined text condition: the higher the score, the better the sample information fits the predetermined text condition. The text analysis model can be trained by using the sample information with the appended verification information, and the text analysis model can be a neural network model, another type of model, or a model formed by combining multiple types of models.
For example, in the process of applying the text analysis model, the input of the text analysis model may be, for example, text information, which may include comment information on a video in the video set and/or rating information on the target object, and the output of the model may be, for example, a classification of the text information, for example, into the two classes of satisfying or not satisfying a predetermined condition; in this case, some text information classified as satisfying the predetermined condition may be selected as candidate text information for the E pieces of text information. In another example, the output of the model may be, for example, a score of the text information, in which case several pieces of text information with higher scores may be selected as the above-mentioned E pieces of text information. The text analysis model can be continuously optimized and updated as the sample data continues to grow.
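The predetermined text condition plus highlight-comment selection described above amounts to a filter-then-rank step. In this sketch the word-count cap and the scoring function are illustrative placeholders for the trained text analysis model.

```python
def meets_text_condition(text, max_words=30):
    """Illustrative predetermined text condition: a word-count cap."""
    return len(text.split()) <= max_words

def pick_highlight_texts(texts, score_fn, e=2):
    """Keep texts satisfying the condition, then take the E highest-scoring."""
    candidates = [t for t in texts if meets_text_condition(t)]
    return sorted(candidates, key=score_fn, reverse=True)[:e]

comments = ["great steak", "ok", "the decoration and location are excellent"]
# Toy score: longer comments score higher (stand-in for the model output).
print(pick_highlight_texts(comments, score_fn=len, e=2))
```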
For example, the E pieces of textual information may include at least one piece of comment information about a source video of the M media assets and/or at least one piece of rating information about the target object associated with the source video.
For example, in some embodiments, the E pieces of textual information include at least one rating information for a source video of the M media assets.
For example, the M media assets include video segments c1, c2, and c3, and video segments c1, c2, and c3 are respectively derived from target videos V1, V2, and V3. At least one piece of the E pieces of text information may be rating information on the target video V1, V2, or V3. For example, the E pieces of text information may further include rating information on the target object.
Fig. 9 is a schematic diagram of another related information of the detail page provided in some embodiments of the present disclosure, as shown in fig. 9, for example, the related information may further include F related object tags 905 about the target object, where F is an integer greater than 0. The associated object corresponding to at least one associated object label is related to the content displayed by the M media resources; and/or the associated object corresponding to the at least one associated object tag is related to the content of the video presentation in the video set.
For example, the page display method may further include: in response to a fourth predetermined operation on any one of the F associated object tags 905, an associated page corresponding to the operated associated object tag 905 is presented.
For example, in the case where the target object has a transaction service, the associated object may be a commodity of the target object, and the associated object tag may be a commodity recommendation tag, which may include information such as a commodity map, a commodity name, and a price.
For example, the items involved in the F associated object tags may include at least one item exhibited in the M media assets. For example, if M media assets show the steak items in the store S, the labels of the steak items sold in the store S can be shown in the image display area. The fourth predetermined operation may be, for example, a click operation, and when the item recommendation tab is subjected to the click operation, the user may jump to an item detail page of the corresponding item. For example, the items involved in the F associated object tags may also include at least one item presented by a video in the video set.
For example, in some embodiments, F transaction cards ranked highest among the transaction cards of the target object may be selected and displayed in the image display area as the F associated object tags 905. The transaction cards may be ranked, for example, by sales volume.
For example, in some embodiments, exposing F associated object tags for a target object includes: and in response to that the display time and/or the display progress of the M media resources meet/meets a preset condition, displaying an associated object recommendation page in the image display area, wherein the associated object recommendation page comprises F associated object tags.
For example, as shown in fig. 9, after all the M media assets are displayed, the M media assets may be displayed and frozen on the associated object recommendation page, or when the time length of displaying the M media assets exceeds a predetermined time length, the M media assets may be displayed and frozen on the associated object recommendation page in the image display area 901. One or more associated object tags may be presented in the associated object recommendation page.
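The trigger condition for the associated-object recommendation page (all M media resources shown, or display time exceeding a predetermined duration) can be sketched as a simple predicate; the threshold value is illustrative.

```python
def show_recommendation_page(shown_count, total_m, elapsed_s, max_s=30.0):
    """True once all M media resources have been displayed, or once the
    total display time exceeds the predetermined duration."""
    return shown_count >= total_m or elapsed_s > max_s

print(show_recommendation_page(3, 3, 12.0))  # True: all resources shown
print(show_recommendation_page(1, 3, 45.0))  # True: duration exceeded
print(show_recommendation_page(1, 3, 12.0))  # False
```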
For example, in some embodiments, displaying the F associated object tags of the target object includes: displaying the F associated object tags superimposed on the M media resources. That is, the associated object tags may be displayed superimposed on the media assets. In some embodiments, only a partial area of each associated object tag may be displayed so as not to obscure the viewing of the M media assets.
For example, the overlay presentation of the F associated object tags with the M media assets includes: for any frame of image in each media resource, superposing an associated object label related to the content shown by the image on the image; or, for any one of the M media assets, an associated object tag related to the content presented by the media asset is superimposed on each frame of image of the media asset.
For example, in one example, a recommendation tag showing an item shown in an image may be superimposed on the image for any frame of the image in each media asset. For example, for a certain video clip, if steak is displayed in the 1 st to 19 th frame images, a commodity recommendation label related to the steak can be displayed on the 1 st to 19 th frame images in an overlapping manner; if the cake is displayed in the 20 th to 50 th frame images, the commodity recommendation label related to the cake can be displayed on the 20 th to 50 th frame images in an overlapping mode.
For example, in another example, for each media asset, a recommendation label for all of the items shown by the media asset may be superimposed on each frame image of the respective media asset. For example, for a certain video clip, if steak is shown in the 1 st to 19 th frame images and a cake is shown in the 20 th to 50 th frame images, a product recommendation label related to the steak and a product recommendation label related to the cake can be displayed in a superimposed manner on each frame image of the video clip.
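The per-frame overlay in the first example (a steak tag on frames 1-19, a cake tag on frames 20-50) reduces to a lookup from frame index to tags; the ranges are the ones from the text, and the function name is illustrative.

```python
def tags_for_frame(frame_index, tag_ranges):
    """Item tags to superimpose on a given frame; `tag_ranges` maps each
    tag to its (first, last) frame range, 1-based and inclusive."""
    return [tag for tag, (first, last) in tag_ranges.items()
            if first <= frame_index <= last]

ranges = {"steak": (1, 19), "cake": (20, 50)}
print(tags_for_frame(10, ranges))  # ['steak']
print(tags_for_frame(25, ranges))  # ['cake']
```

The second example in the text (one tag set for the whole clip) corresponds to using the union of all ranges for every frame.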
For example, in another example, several commodities may be selected as target commodities from all commodities displayed by M media resources, and commodity recommendation tags displaying the several target commodities may be superimposed on the M media resources. For example, a plurality of commodities are displayed in the M media resources, a part of the commodities can be selected from the commodities, and the commodity recommendation labels displaying the part of the commodities are superimposed on the M media resources.
For example, the page display method may further include: and in response to the eighth preset operation on any one of the M media resources, displaying a detail page of an associated object related to the displayed content of the media resource.
For example, the eighth predetermined operation may be, for example, a click operation, and when the media resource is subjected to the click operation, the user may jump to a detail page of an article displayed by the media resource. For example, for a certain video segment, if a cake is displayed in the video segment, when the video segment is clicked, the user can jump to a product detail page of the cake.
For example, the page display method may alternatively not respond to the eighth predetermined operation on any of the M media resources, where the eighth predetermined operation is a click operation. For example, when a media resource is clicked, there may be no response.
For example, presenting the details page about the target object includes: in response to first predetermined operations performed on the target object at different times, the M media assets presented by the details page may be the same or different.
For example, in one example, the details page is opened at different times, and the M media assets presented in the details page are the same.
For example, in another example, where the details page is opened at different times, the M media assets presented in the details page may be different. For example, as time goes by, the number of punched-card videos related to the target object increases, so that the video set is updated continuously, M media resources may be reselected from the updated video set every predetermined time period (for example, one month), and the original M media resources are replaced with the newly selected M media resources. Based on the mode, M media resources can be updated regularly, so that the media resources displayed on the detail page can better embody the characteristics of the target object at the present stage.
For example, in another example, different M media assets may be presented in different periods of a year, month, week, or day; for example, different M media assets may be presented on weekdays and non-weekdays of a week, or in the morning, afternoon, and evening of a day, respectively. In this way, the display of the media resources can be more flexible.
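The morning/afternoon/evening example above could be sketched as a simple time-based dispatch; the period boundaries and group names are illustrative assumptions, not specified by the disclosure:

```python
# Hypothetical sketch: choosing which group of M media resources to present
# based on the current period of the day.

def pick_media_group(hour, groups):
    """Return the media group for the given hour of day (0-23)."""
    if 6 <= hour < 12:
        return groups["morning"]
    if 12 <= hour < 18:
        return groups["afternoon"]
    return groups["evening"]

groups = {"morning": ["m1", "m2"], "afternoon": ["a1", "a2"], "evening": ["e1", "e2"]}
print(pick_media_group(9, groups))   # ['m1', 'm2']
print(pick_media_group(21, groups))  # ['e1', 'e2']
```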
At least one embodiment of the disclosure also provides a page request response method. Fig. 10 is a flowchart illustrating a page request response method according to some embodiments of the present disclosure. As shown in fig. 10, in at least one embodiment, the method includes steps S1010 and S1020 as follows.
Step S1010: in response to receiving a detail page request for a target object from a terminal device, M media assets for the target object are obtained.
Step S1020: and sending the M media resources to the terminal equipment so that the terminal equipment shows the M media resources in a detail page of the target object.
For example, each media asset comprises at least one frame of image, M media assets are generated from a video set corresponding to the target object, the video set comprises at least one video, the videos in the video set are associated with a label of the target object, and the control of the target object comprises the label of the target object.
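Steps S1010 and S1020 can be sketched as a minimal server-side handler; everything below (the request shape, the precomputed label-to-media mapping, and the response format) is an assumption for illustration only:

```python
# Hypothetical sketch of the server-side flow: on receiving a detail page
# request for a target object, look up the M media resources generated from
# the video set whose videos carry the target object's label, and return
# them to the terminal device.

MEDIA_BY_LABEL = {  # assumed precomputed: label -> M media resources
    "cake-shop": [{"id": "clip-1"}, {"id": "gif-2"}],
}

def handle_detail_page_request(request):
    label = request["target_object_label"]
    media = MEDIA_BY_LABEL.get(label, [])            # step S1010: obtain M media assets
    return {"target": label, "media_assets": media}  # step S1020: send to terminal

resp = handle_detail_page_request({"target_object_label": "cake-shop"})
print(len(resp["media_assets"]))  # 2
```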
For example, the page request response method of the embodiment of the present disclosure may be executed by a server, for example.
For example, the page request response method may further include: and acquiring introduction information about the target object, and sending the introduction information to the terminal equipment so that the terminal equipment displays the introduction information in a detail page about the target object.
For example, the page request response method may further include the above-described operations of selecting a video segment from the video set, such as selecting a high-quality video segment from the video set by using a content understanding analysis model. For example, the operation of training the content understanding analysis model may also be performed by the server.
For example, the page request response method may further include: and acquiring the associated information about the target object, and sending the associated information to the terminal equipment so that the terminal equipment displays the associated information in a detail page about the target object. As described above, the associated information may include E pieces of text information and/or F pieces of associated object tags.
For example, the page request response method may further include the above-described operation of selecting E pieces of text information from the text set. For example, the operation of training the text analysis model may also be performed by the server.
Fig. 11 is a schematic diagram of a system that can be used to implement the page display method provided by the embodiments of the present disclosure. As shown in fig. 11, the system 10 may include a user terminal (terminal device) 11, a network 12, a server 13, and a database 14. For example, the system 10 may be used to implement the page display method provided by any of the embodiments of the present disclosure.
The user terminal 11 is, for example, a computer 11-1. It is understood that the user terminal 11 may be any other type of electronic device capable of performing data processing, which may include, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a workstation, and the like. The user terminal 11 may also be any equipment provided with an electronic device. Embodiments of the present disclosure do not limit the hardware configuration or the software configuration (e.g., the type (e.g., Windows, MacOS, etc.) or version of the operating system) of the user terminal.
The user may operate an application installed on the user terminal 11 or a website logged in on the user terminal 11, the application or the website transmits user behavior data to the server 13 through the network 12, and the user terminal 11 may also receive data transmitted by the server 13 through the network 12.
For example, the user terminal 11 may execute the page display method provided by the embodiments of the present disclosure in a code running manner, so as to help a user quickly learn information about a related object; because the displayed resources are selected from the perspective of user experience, the user's interest in the related object can be well continued, improving the user experience.
The network 12 may be a single network or a combination of at least two different networks. For example, the network 12 may include, but is not limited to, one or a combination of local area networks, wide area networks, public networks, private networks, and the like.
The server 13 may be a single server or a group of servers, each connected via a wired or wireless network. A group of servers may be centralized, such as a data center, or distributed. The server 13 may be local or remote.
The database 14 may generally refer to a device having a storage function. The database 14 is mainly used for storing various data utilized, generated, and outputted by the user terminal 11 and the server 13 in operation. The database 14 may be local or remote. The database 14 may include various memories such as a Random Access Memory (RAM), a Read Only Memory (ROM), and the like. The above-mentioned storage devices are merely examples, and the storage devices that may be used by the system 10 are not limited thereto. The embodiment of the present disclosure does not limit the type of the database, and may be, for example, a relational database or a non-relational database.
The database 14 may be interconnected or in communication with the server 13 or a portion thereof via the network 12, or directly interconnected or in communication with the server 13, or a combination thereof.
In some examples, the database 14 may be a standalone device. In other examples, the database 14 may also be integrated in at least one of the user terminal 11 and the server 13. For example, the database 14 may be provided on the user terminal 11 or may be provided on the server 13. For another example, the database 14 may be distributed, and a part thereof may be provided in the user terminal 11 and another part thereof may be provided in the server 13.
At least one embodiment of the present disclosure further provides a page display apparatus, which can help a user quickly learn information about a related object; because the displayed resources are selected from the perspective of user experience, the user's interest in the related object can be well continued, improving the user experience.
Fig. 12 is a schematic block diagram of a page displaying apparatus according to some embodiments of the present disclosure. As shown in fig. 12, the page presentation apparatus 1200 includes a response unit 1210 and a presentation unit 1220. For example, the page displaying apparatus 1200 may be applied to a user terminal, and may also be applied to any device or system that can display a page, and the embodiment of the disclosure is not limited thereto.
The response unit 1210 is configured to, in response to a first predetermined operation on a control of a target object, acquire M media assets related to the target object, each media asset including at least one frame of image, the M media assets being generated from a video set corresponding to the target object, the video set including at least one video, the videos in the video set being associated with a label of the target object, and the control of the target object including the label of the target object. For example, the response unit 1210 may perform step S110 of the page presentation method as shown in fig. 1.
The presentation unit 1220 is configured to present a detail page about the target object, for example, to control a display device of the user terminal to present the detail page about the target object. Wherein the details page includes an image presentation area. For example, the presentation unit 1220 is further configured to present at least partial areas of the M media assets in the image presentation area. For example, the presentation unit 1220 may perform step S120 of the page presentation method as shown in fig. 1.
For example, the response unit 1210 and the presentation unit 1220 may be hardware, software, firmware, and any feasible combination thereof. For example, the response unit 1210 and the presentation unit 1220 may be dedicated or general circuits, chips, devices, or the like, or may be a combination of a processor and a memory. Embodiments of the present disclosure are not limited in this regard to specific implementations of response unit 1210 and presentation unit 1220.
It should be noted that, in the embodiment of the present disclosure, each unit of the page displaying apparatus 1200 corresponds to each step of the page displaying method, and for the specific function of the page displaying apparatus 1200, reference may be made to the description related to the page displaying method, which is not described herein again. The components and structures of the page displaying apparatus 1200 shown in fig. 12 are exemplary only, and not limiting, and the page displaying apparatus 1200 may further include other components and structures as needed.
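The division into a response unit 1210 and a presentation unit 1220 could be sketched as follows; since the disclosure leaves the implementation open (hardware, software, firmware, or any combination), all class and function names here are hypothetical:

```python
# Hypothetical sketch of the page presentation apparatus 1200: a response unit
# that fetches the M media resources on the first predetermined operation
# (step S110), and a presentation unit that renders them in the image
# presentation area of the detail page (step S120).

class ResponseUnit:
    def __init__(self, fetch_media):
        self.fetch_media = fetch_media  # e.g., a call to the server

    def on_first_predetermined_operation(self, target_object):
        return self.fetch_media(target_object["label"])  # step S110

class PresentationUnit:
    def present_detail_page(self, media_assets):
        # step S120: present at least a partial area of each media asset
        return [f"render({m['id']})" for m in media_assets]

def fetch(label):
    return [{"id": f"{label}-clip-{i}"} for i in range(2)]

response_unit, presentation_unit = ResponseUnit(fetch), PresentationUnit()
assets = response_unit.on_first_predetermined_operation({"label": "cake"})
print(presentation_unit.present_detail_page(assets))
# ['render(cake-clip-0)', 'render(cake-clip-1)']
```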
For example, in some examples, the page presentation apparatus 1200 may further include a length change unit configured to perform at least one of the following steps:
in the case where a partial area of the M media assets is presented in the image presentation area, in response to a fifth predetermined operation with respect to the detail page, increasing a length of the image presentation area in the first direction to present all of the area of the M media assets in the image presentation area, e.g., the first direction is a direction perpendicular to a top edge of the detail page;
in the case where the entire area of the M media assets is presented in the image presentation area, in response to a sixth predetermined operation for the details page, reducing a length of the image presentation area in the first direction to present a partial area of the M media assets in the image presentation area;
in response to a seventh predetermined operation on the other areas of the detail page except the image display area, the length of the image display area in the first direction is reduced, and the length of the information display area in the first direction is increased to display other introduction information except the currently displayed introduction information in the information display area.
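The expand/shrink behavior of the length change unit described above can be sketched as a small state transition; the concrete lengths and operation names are illustrative assumptions:

```python
# Hypothetical sketch of the length change unit: the fifth predetermined
# operation expands the image presentation area along the first direction
# (perpendicular to the top edge of the detail page) to show the full media
# area, and the sixth operation shrinks it back to a partial area.

PARTIAL, FULL = 200, 400  # illustrative lengths in pixels

def apply_operation(length, operation):
    """Return the new length of the image presentation area."""
    if operation == "fifth" and length < FULL:
        return FULL     # expand to present all of the media area
    if operation == "sixth" and length > PARTIAL:
        return PARTIAL  # shrink back to a partial area
    return length

length = PARTIAL
length = apply_operation(length, "fifth")
print(length)  # 400
length = apply_operation(length, "sixth")
print(length)  # 200
```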
At least one embodiment of the present disclosure also provides a page request response device. The page request responding device comprises a request responding unit and a sending unit. For example, the page request responding apparatus may be applied to a server, and may also be applied to other devices or systems, and the embodiments of the present disclosure are not limited thereto.
The request response unit is configured to acquire M media resources for the target object in response to receiving a detail page request for the target object from the terminal device.
The transmitting unit is configured to transmit the M media assets to the terminal device so that the terminal device presents the M media assets in a detail page about the target object.
For example, each media asset comprises at least one frame of image, M media assets are generated from a video set corresponding to the target object, the video set comprises at least one video, the videos in the video set are associated with a label of the target object, and the control of the target object comprises the label of the target object.
For example, the request response unit and the sending unit may be hardware, software, firmware, or any feasible combination thereof. For example, the request responding unit and the sending unit may be dedicated or general circuits, chips, devices, or the like, or may be a combination of a processor and a memory. The embodiments of the present disclosure are not limited in this regard to the specific implementation forms of the request response unit and the sending unit.
It should be noted that, in the embodiment of the present disclosure, each unit of the page request responding apparatus corresponds to each step of the page request responding method, and for the specific function of the page request responding apparatus, reference may be made to the description related to the page request responding method in the foregoing, and details are not described here again. The above-described components and structures of the page request responding apparatus are only exemplary and not restrictive, and the page request responding apparatus may further include other components and structures as necessary.
Fig. 13 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. As shown in fig. 13, the electronic device 1300 includes a processor 1310, a memory 1320, and a display device 1330. The display device 1330 is configured to present pages, such as the details pages described above. The memory 1320 is used to store non-transitory computer readable instructions (e.g., one or more computer program modules). The processor 1310 is configured to execute non-transitory computer readable instructions, which when executed by the processor 1310 may perform one or more of the steps of the page presentation method described above. The memory 1320 and the processor 1310 may be interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the processor 1310 may be a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or other form of processing unit having data processing capabilities and/or program execution capabilities, such as a Field Programmable Gate Array (FPGA), or the like; for example, the Central Processing Unit (CPU) may be an X86 or ARM architecture or the like. The processor 1310 may be a general-purpose processor or a special-purpose processor that may control other components in the electronic device 1300 to perform desired functions.
For example, memory 1320 may include any combination of one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory, and the like. The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer program modules may be stored on the computer-readable storage medium and executed by processor 1310 to implement various functions of electronic device 1300. Various applications and various data, as well as various data used and/or generated by the applications, and the like, may also be stored in the computer-readable storage medium.
For example, the display device 1330 is an output component, such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, a Quantum Dot Light Emitting Diode (QLED) display, or the like, which is not limited in this respect in the embodiments of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, reference may be made to the description about the page display method in the foregoing for specific functions and technical effects of the electronic device 1300, and details are not described here again.
Fig. 14 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. The electronic device 1400 is, for example, suitable for implementing the page display method provided by the embodiment of the disclosure. The electronic device 1400 may be a user terminal or the like. It should be noted that the electronic device 1400 shown in fig. 14 is only one example, and does not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 14, electronic device 1400 may include a processing means (e.g., central processing unit, graphics processor, etc.) 1410 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1420 or a program loaded from storage 1480 into a Random Access Memory (RAM) 1430. In the RAM 1430, various programs and data required for the operation of the electronic device 1400 are also stored. Processing device 1410, ROM 1420, and RAM 1430 are coupled to each other via bus 1440. An input/output (I/O) interface 1450 also connects to bus 1440.
Generally, the following devices may be connected to the I/O interface 1450: input devices 1460 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; output devices 1470 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, or the like; storage devices 1480 including, for example, magnetic tape, hard disk, etc.; and a communication device 1490. The communication device 1490 may allow the electronic device 1400 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 14 illustrates an electronic device 1400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided, and that the electronic device 1400 may alternatively be implemented or provided with more or less means.
For example, the page presentation method illustrated in fig. 1 may be implemented as a computer software program according to an embodiment of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the page presentation method described above. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 1490, or installed from the storage means 1480, or installed from the ROM 1420. When executed by the processing device 1410, the computer program may perform the functions defined in the page presentation method provided by the embodiment of the present disclosure.
At least one embodiment of the present disclosure also provides a storage medium for storing non-transitory computer-readable instructions, which when executed by a computer, can implement the page presentation method according to any embodiment of the present disclosure. The storage medium can help a user quickly learn information about a related object; because the displayed resources are selected from the perspective of user experience, the user's interest in the related object can be well continued, improving the user experience.
Fig. 15 is a schematic diagram of a storage medium according to some embodiments of the present disclosure. As shown in fig. 15, the storage medium 1500 is used to store non-transitory computer readable instructions 1510. For example, the non-transitory computer readable instructions 1510, when executed by a computer, may perform one or more steps according to the page presentation method described above.
The storage medium 1500 may be applied to the electronic apparatus 1300 described above, for example. The storage medium 1500 may be, for example, the memory 1320 in the electronic device 1300 shown in fig. 13. For example, the related description about the storage medium 1500 may refer to the corresponding description of the memory 1320 in the electronic device 1300 shown in fig. 13, and is not repeated here.
In the above, the page display method, the page display apparatus, the electronic device, and the storage medium provided by the embodiments of the disclosure are described with reference to fig. 1 to fig. 15. According to the page display method provided by the embodiments of the disclosure, on one hand, compared with simple text display, displaying media resources can attract the attention of the user and help the user quickly learn related information; on the other hand, the media resources can arouse the interest of the user from the perspective of user experience and sensory appeal, so that the user's interest in the target object can be well continued from the initial page to the detail page, improving the user experience.
It should be noted that the storage medium (computer readable medium) described above in the present disclosure may be a computer readable signal medium or a non-transitory computer readable storage medium or any combination of the two. The non-transitory computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the non-transitory computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a non-transitory computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a non-transitory computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the present disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the embodiments of the disclosure and is provided for the purpose of illustrating the general principles of the technology. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (19)

1. A page display method comprises the following steps:
in response to a first predetermined operation of a control for a target object, acquiring M media resources related to the target object, wherein each media resource comprises at least one frame of image, the M media resources are generated from a video set corresponding to the target object, the video set comprises at least one video, the videos in the video set are associated with a label of the target object, and the control of the target object comprises the label of the target object;
presenting a details page about the target object, wherein the details page includes an image presentation area,
wherein said presenting a details page about said target object comprises: presenting at least a partial area of the M media assets in the image presentation area,
m is an integer greater than 0.
2. The method of claim 1, wherein each of the media assets comprises at least one video clip and/or at least one dynamic graph.
3. The method according to claim 2, wherein the video clips are video clips selected from the video collection by using a content understanding analysis model and satisfying a predetermined condition, and/or the dynamic graph is a dynamic graph selected from the video collection by using a content understanding analysis model and satisfying a predetermined condition.
4. The method of claim 2, wherein the M media assets comprise N video segments, each of the N video segments being a partial segment of a target video selected from the video set, where N is an integer greater than 0;
wherein presenting at least a partial area of the M media assets in the image presentation area comprises: presenting, in the image presentation area, at least partial areas of the N video segments.
5. The method of claim 2, wherein the M media assets comprise P dynamic graphs, each generated from a target video selected from the video set, where P is an integer greater than 0;
wherein presenting at least a partial area of the M media assets in the image presentation area comprises: presenting, in the image presentation area, at least partial areas of the P dynamic graphs.
6. The method of claim 2, wherein the M media assets comprise Q video screenshots, each video screenshot being a picture captured from a target video selected from the video set, wherein Q is an integer greater than 0;
wherein presenting at least a partial area of the M media assets in the image presentation area comprises: presenting at least partial areas of the Q video screenshots in the image presentation area, wherein each media asset is presented for a predetermined duration.
7. The method of any one of claims 1-6, wherein presenting at least a partial area of the M media assets in the image presentation area comprises:
sequentially presenting at least partial areas of the M media assets in the image presentation area;
after the last media asset of the M media assets is presented, presenting the M media assets again starting from the first media asset of the M media assets, or, after the last media asset of the M media assets is presented, continuing to present a predetermined interface in the image presentation area.
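The two alternatives in claim 7 (loop back to the first asset, or fall through to a predetermined interface) reduce to a small index-advance rule. A hedged sketch of that rule, where returning `None` stands in for "present the predetermined interface"; the names are illustrative, not from the claim:

```python
def next_asset_index(current, m, loop=True):
    """Return the index of the next media asset to present in the image
    presentation area, or None when the sequence is exhausted and a
    predetermined interface should be shown instead (the non-loop branch)."""
    if current + 1 < m:
        return current + 1          # more assets remain in the ordering
    return 0 if loop else None      # restart from the first asset, or stop
```

With M = 3, advancing from index 2 either wraps to 0 (looping) or yields `None` (predetermined interface).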
8. The method of any of claims 1-6, further comprising:
in response to a second predetermined operation with respect to the image presentation area, switching from a first media asset to a second media asset of the M media assets in the image presentation area, wherein the second media asset is a subsequent media asset of the first media asset in the ordering of the M media assets; or
in response to a third predetermined operation with respect to the image presentation area, switching from the first media asset to a third media asset of the M media assets in the image presentation area, wherein the third media asset is a previous media asset of the first media asset in the ordering of the M media assets.
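Claim 8's second and third predetermined operations amount to stepping forward or backward through the ordered M assets. How the ends of the sequence behave is not specified by the claim; the sketch below assumes clamping at the boundaries (wrapping would be an equally valid reading):

```python
def switch_asset(current, m, operation):
    """Map the claimed operations onto index moves over the M assets:
    'second' -> the subsequent asset, 'third' -> the previous asset.
    Boundary behavior (clamping here) is an assumption, not claimed."""
    if operation == "second":
        return min(current + 1, m - 1)   # step forward, stay in range
    if operation == "third":
        return max(current - 1, 0)       # step backward, stay in range
    return current                       # unrecognized operation: no switch
```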
9. The method of any of claims 1-6, wherein presenting a details page about the target object further comprises:
presenting, in the image presentation area, M progress bar components respectively corresponding to the M media assets,
wherein each progress bar component is configured to display the presentation progress of the corresponding media asset.
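The M progress bar components of claim 9 can be modeled as one progress value per media asset. This sketch assumes completed assets read full, pending assets read empty, and only the current asset advances with elapsed time; the claim itself prescribes none of these rendering details:

```python
def progress_bars(current_index, elapsed, duration, m):
    """One progress value in [0.0, 1.0] per media asset: assets already
    presented read 1.0, the asset currently presenting reads
    elapsed/duration, and upcoming assets read 0.0."""
    bars = []
    for i in range(m):
        if i < current_index:
            bars.append(1.0)
        elif i == current_index:
            bars.append(min(elapsed / duration, 1.0))
        else:
            bars.append(0.0)
    return bars
```

For example, halfway through the second of three assets the components would read full, half, and empty.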
10. The method of any of claims 1-6, wherein presenting a details page about the target object further comprises:
presenting, in the image presentation area, associated information related to the target object.
11. The method of claim 10, wherein presenting, in the image presentation area, associated information related to the target object comprises:
sequentially presenting, in the image presentation area, a plurality of pieces of the associated information in a dynamic manner; and/or
in response to a presentation duration and/or a presentation progress of the M media assets satisfying a predetermined condition, presenting the associated information in the image presentation area.
12. The method of claim 11, wherein the associated information includes E pieces of text information, where E is an integer greater than 0;
the E pieces of text information comprise at least one piece of comment information about the videos in the video set and/or at least one piece of evaluation information about the target object that is associated with the videos in the video set; and/or
the E pieces of text information comprise at least one piece of comment information about source videos of the M media assets and/or at least one piece of evaluation information about the target object that is associated with the source videos.
13. The method of claim 10, wherein,
the associated information comprises F associated object tags about the target object, wherein F is an integer greater than 0;
the associated object corresponding to at least one of the associated object tags is related to content presented by the M media assets; and/or
the associated object corresponding to at least one of the associated object tags is related to content presented by the videos in the video set.
14. The method of claim 13, wherein the page presentation method further comprises:
in response to a fourth predetermined operation on any one of the F associated object tags, presenting an associated page corresponding to the operated associated object tag.
15. The method of claim 1, wherein presenting a details page about the target object comprises:
wherein, for different operation times of the first predetermined operation on the target object, the M media assets presented on the details page are the same or different.
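Claim 15 allows the M presented assets to vary with when the first predetermined operation occurs. One way to realize that behavior (purely illustrative; the claim does not prescribe any selection mechanism) is to seed the selection with the operation timestamp, so operations at different times may yield different asset sets while the same time always yields the same set:

```python
import random

def select_assets(candidates, m, operation_time):
    """Pick M media assets from the candidate pool, seeding the shuffle
    with the operation timestamp: same time -> same selection, different
    times -> possibly different selections."""
    rng = random.Random(int(operation_time))  # deterministic per timestamp
    pool = list(candidates)
    rng.shuffle(pool)
    return pool[:m]

first = select_assets(range(10), 3, operation_time=100.0)
repeat = select_assets(range(10), 3, operation_time=100.0)
```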
16. A page display apparatus comprising:
a response unit configured to, in response to a first predetermined operation on a control of a target object, acquire M media assets related to the target object, wherein each of the media assets includes at least one frame of image, the M media assets are generated from a video set corresponding to the target object, the video set includes at least one video, videos in the video set are associated with a label of the target object, and the control of the target object includes the label of the target object;
a presentation unit configured to present a detail page about the target object, wherein the detail page includes an image presentation area,
wherein the presentation unit is further configured to present at least a partial area of the M media assets in the image presentation area,
wherein M is an integer greater than 0.
17. An electronic device, comprising:
a display device configured to present a page;
a processor;
a memory including one or more computer program modules;
wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the page presentation method of any one of claims 1-15.
18. A computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, implement the page presentation method of any one of claims 1-15.
19. A computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the page presentation method of any one of claims 1-15.
CN202110700304.9A 2021-06-23 2021-06-23 Page display method and device, electronic equipment, storage medium and program product Pending CN113420247A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110700304.9A CN113420247A (en) 2021-06-23 2021-06-23 Page display method and device, electronic equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110700304.9A CN113420247A (en) 2021-06-23 2021-06-23 Page display method and device, electronic equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN113420247A true CN113420247A (en) 2021-09-21

Family

ID=77716361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110700304.9A Pending CN113420247A (en) 2021-06-23 2021-06-23 Page display method and device, electronic equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN113420247A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080140523A1 (en) * 2006-12-06 2008-06-12 Sherpa Techologies, Llc Association of media interaction with complementary data
CN110784754A (en) * 2019-10-30 2020-02-11 北京字节跳动网络技术有限公司 Video display method and device and electronic equipment
CN111338537A (en) * 2020-02-11 2020-06-26 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and medium for displaying video
CN111930996A (en) * 2020-07-06 2020-11-13 北京字节跳动网络技术有限公司 Display method, device, equipment and storage medium
CN111949864A (en) * 2020-08-10 2020-11-17 北京字节跳动网络技术有限公司 Searching method, searching device, electronic equipment and storage medium
CN112770187A (en) * 2020-12-23 2021-05-07 口碑(上海)信息技术有限公司 Shop data processing method and device
CN112911372A (en) * 2021-01-29 2021-06-04 北京达佳互联信息技术有限公司 Page data processing method and device, electronic equipment and server

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045825A1 (en) * 2021-09-27 2023-03-30 北京有竹居网络技术有限公司 Video-based information display method and apparatus, and electronic device and storage medium
CN114168250A (en) * 2021-12-30 2022-03-11 北京字跳网络技术有限公司 Page display method and device, electronic equipment and storage medium
WO2023151589A1 (en) * 2022-02-10 2023-08-17 北京字跳网络技术有限公司 Video display method and apparatus, electronic device and storage medium
CN114629882A (en) * 2022-03-09 2022-06-14 北京字跳网络技术有限公司 Information display method and device, electronic equipment, storage medium and program product
CN114760515A (en) * 2022-03-30 2022-07-15 北京字跳网络技术有限公司 Method, device, equipment, storage medium and program product for displaying media content
CN114661215A (en) * 2022-03-31 2022-06-24 北京达佳互联信息技术有限公司 Animation display method and device, electronic equipment, storage medium and program product
WO2023185640A1 (en) * 2022-03-31 2023-10-05 北京字跳网络技术有限公司 Page display method and apparatus, device, computer readable storage medium and product
CN114760516A (en) * 2022-04-11 2022-07-15 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN115022653A (en) * 2022-04-27 2022-09-06 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
WO2023208229A1 (en) * 2022-04-29 2023-11-02 北京有竹居网络技术有限公司 Information display method and apparatus, electronic device, and storage medium
CN115150653A (en) * 2022-06-25 2022-10-04 北京字跳网络技术有限公司 Media content display method and device, electronic equipment and storage medium
CN115150653B (en) * 2022-06-25 2024-02-06 北京字跳网络技术有限公司 Media content display method and device, electronic equipment and storage medium
CN115361565A (en) * 2022-07-05 2022-11-18 北京达佳互联信息技术有限公司 Information display method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113420247A (en) Page display method and device, electronic equipment, storage medium and program product
CN102016904B (en) In social networks, brand engine is to the amendment of content representation
CN111711828B (en) Information processing method and device and electronic equipment
CN109416805A (en) The method and system of presentation for the media collection with automatic advertising
CN105117491B (en) Page push method and apparatus
US10121187B1 (en) Generate a video of an item
EP4343518A1 (en) Data interaction method and apparatus, device, and storage medium
US9594540B1 (en) Techniques for providing item information by expanding item facets
US20140136517A1 (en) Apparatus And Methods for Providing Search Results
KR101607617B1 (en) System of providing real-time moving picture for tourist attraction
CN110059256B (en) System, method and device for displaying information
CN112883263B (en) Information recommendation method and device and electronic equipment
CN104881423B (en) Information provider unit and information providing method
JP6532555B1 (en) Sales support device, sales support method and program
US20150121254A1 (en) Food feedback interface systems and methods
WO2016144386A1 (en) Measuring organizational impact based on member interactions
CN116137662A (en) Page display method and device, electronic equipment, storage medium and program product
CN112395109B (en) Clipboard content processing method and device
CN108960896A (en) Data processing method, user terminal and server end
US11443009B2 (en) Information processing system, information processing method, program, and information processing apparatus
CN115878844A (en) Video-based information display method and device, electronic equipment and storage medium
CN106485565B (en) Information processing method and device
JP7190620B2 (en) Information processing device, information delivery method, and information delivery program
JP7485718B2 (en) Information providing device, information providing method, and information providing program
US20160117698A1 (en) System and Method for Context Dependent Streaming Services

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination