CN117215442A - Display processing method, display processing device, electronic device, storage medium, and program product - Google Patents

Display processing method, display processing device, electronic device, storage medium, and program product

Info

Publication number
CN117215442A
Authority
CN
China
Prior art keywords
area
information
video
man
interaction interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211666821.XA
Other languages
Chinese (zh)
Inventor
杨祺琪
姚浩荣
丁旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202211666821.XA
Publication of CN117215442A
Legal status: Pending


Abstract

The application provides a display processing method, a display processing device, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: playing a video of a recommended article in a first area of a man-machine interaction interface, and displaying first information of the article in a second area of the man-machine interaction interface; and in response to a sliding operation in the man-machine interaction interface, zooming the second area to form a third area, and displaying second information of the article in the third area to replace the first information. According to the application, the display form of the article information can be changed by a sliding operation while the video keeps playing, thereby improving man-machine interaction efficiency and display efficiency.

Description

Display processing method, display processing device, electronic device, storage medium, and program product
Technical Field
The present application relates to man-machine interaction technology, and in particular, to a display processing method, apparatus, electronic device, computer readable storage medium, and computer program product.
Background
In the related art, live-streaming rooms and short videos for e-commerce advertising use multimedia technology to introduce and promote commodities more effectively. A commodity card displayed on the live-streaming room or the short video can be clicked to jump to the purchase page of the corresponding commodity. However, the click operation is easily mis-triggered as a sliding operation, and a sliding operation on the live-streaming room or the short video jumps to another short video or another live-streaming room, after which the user has difficulty finding an effective path back to the original live-streaming room or short video. This results in low man-machine interaction efficiency and low display efficiency.
Disclosure of Invention
The embodiment of the application provides a display processing method, a device, electronic equipment, a computer readable storage medium and a computer program product, which can change the display form of article information while playing a video in a sliding way, thereby improving the man-machine interaction efficiency and the display efficiency.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a display processing method, which comprises the following steps:
playing a video of a recommended article in a first area of a man-machine interaction interface, and displaying first information of the article in a second area of the man-machine interaction interface;
and in response to a sliding operation in the man-machine interaction interface, zooming the second area to form a third area, and displaying second information of the object in the third area to replace the first information.
An embodiment of the present application provides a display processing apparatus, including:
the display module is used for playing the video of the recommended article in the first area of the man-machine interaction interface and displaying the first information of the article in the second area of the man-machine interaction interface;
and the sliding module is used for responding to the sliding operation in the man-machine interaction interface, zooming the second area to form a third area, and displaying second information of the article in the third area to replace the first information.
An embodiment of the present application provides an electronic device, including:
a memory for storing computer executable instructions;
and the processor is used for realizing the display processing method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores computer executable instructions for realizing the display processing method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or a computer executable instruction, and the computer program or the computer executable instruction realize the display processing method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
The video of the recommended article and the information of the article are displayed in two separate areas of the man-machine interaction interface; illustrating the article with the video enables a comprehensive presentation of the article, and the display area of the article information can be zoomed by triggering a sliding operation on the man-machine interaction interface, which is equivalent to realizing diversified display-mode conversion through a single sliding operation.
Drawings
FIG. 1 is a schematic diagram of a display processing system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIGS. 3A-3C are schematic flow diagrams of a display processing method according to an embodiment of the present application;
FIGS. 4A-4F are schematic views of display interfaces of a display processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a configuration flow of a display processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a playing flow of a display processing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a play control flow of a display processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of switching display modes according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not denote a specific ordering. It is to be understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms involved in the embodiments of the present application are described; these terms apply to the explanations below.
1) "In response to": used to indicate the condition or state on which a performed operation depends. When the condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no limitation on the execution order of the multiple operations performed.
2) Landing page: commonly used in advertising campaigns, and mainly refers to the first page displayed after a potential user clicks through channels such as an in-site page, an off-site page, or a search engine.
3) Applet: an application program within a social application that can be used on the social platform without being downloaded and installed.
In the related art, an advertisement landing page only provides graphic details of an article; when submitting an order, the user has to go through multiple processes such as selecting the commodity specification and filling in the address and quantity, and must jump out of the current page and switch back and forth, so the man-machine interaction efficiency is low. In the related art, advertisements can only be delivered in a single scene and are limited by the traffic scene. In the related art, after switching from the currently played video to another video through a sliding operation, the user has difficulty returning to the previous video of the recommended article, so the man-machine interaction efficiency is low.
The embodiments of the present application provide a display processing method, a display processing apparatus, an electronic device, a computer program product, and a computer-readable storage medium, which can change the display form of article information while the video keeps playing in response to a sliding operation, thereby improving man-machine interaction efficiency and display efficiency.
The following describes exemplary applications of the electronic device provided by the embodiments of the present application, where the electronic device provided by the embodiments of the present application for implementing the display processing method may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device).
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a display processing system according to an embodiment of the present application, in order to support a social application, a terminal 400 is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400 receives a user operation of clicking the recommendation information and sends a data service request to the server 200; the server 200 sends the video and the first information to the terminal 400; the terminal 400 plays the video of the recommended article in the first area of the man-machine interaction interface and displays the first information of the article in the second area of the man-machine interaction interface. The terminal 400 receives a sliding operation in the man-machine interaction interface and sends a data request to the server 200; the server 200 returns the second information to the terminal 400; the terminal 400 zooms the second area to form a third area and displays the second information of the article in the third area to replace the first information.
In some embodiments, the terminal or the server may implement the display processing method provided by the embodiment of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; it may be a native application (APP), i.e., a program that must be installed in the operating system to run, such as an instant messaging client, a video conferencing client, or a social network client; it may be an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or it may be an applet embedded in any APP. In general, the computer program described above may be any form of application, module, or plug-in.
The embodiment of the application can be realized by means of Cloud Technology (Cloud Technology), wherein the Cloud Technology refers to a hosting Technology for integrating serial resources such as hardware, software, network and the like in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like that are based on the cloud computing business model; it can form a resource pool and be used on demand flexibly and conveniently. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be a stand-alone physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a car terminal, etc. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and a terminal 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 450 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the display processing device provided in the embodiments of the present application may be implemented in software, and fig. 2 shows the display processing device 455 stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: a display module 4551 and a slider module 4552, which are logical, and thus may be combined or split further according to the functions implemented. The functions of the respective modules will be described hereinafter.
The display processing method provided by the embodiment of the application will be described in connection with the exemplary application and implementation of the terminal provided by the embodiment of the application.
Referring to fig. 3A, fig. 3A is a schematic flow chart of a display processing method according to an embodiment of the present application, and the steps shown in fig. 3A will be described.
In step 101, a video of a recommended item is played in a first area of a human-machine interface and first information of the item is displayed in a second area of the human-machine interface.
As an example, referring to fig. 4B, a video is played in a first area 402B of a man-machine interaction interface 401B, the video content being a recommended article A, and first information of the article A is displayed in a second area 403B of the man-machine interaction interface 401B. A first arrangement of the first information and a first information amount of the first information are adapted to the size of the second area. The first information includes at least one of: the name of the article A, the advertisement of the article A, the function keywords of the article A, the price of the article A, the specification of the article A, and the like. According to the size of the second area, the first information may be arranged in a form adapted to that size, so that a display effect conforming to the browsing habits of the user can be achieved. Due to the size limitation of the second area, when the first information is displayed, the first information amount needs to be adapted to the size of the second area, for example by displaying according to the priority of each piece of first information: on the premise of ensuring readability, the first information with higher priority is preferentially displayed in the second area; for example, the name and the price of the article A are displayed in the second area, but the specification of the article A is not.
In step 102, in response to the sliding operation in the human-computer interaction interface, the second area is zoomed to form a third area, and second information of the object is displayed in the third area to replace the first information.
As an example, referring to fig. 4B, a video is played in the first area 402B of the man-machine interaction interface 401B, the video content being the recommended article A, and the first information of the article A is displayed in the second area 403B. In response to a sliding operation (which may be triggered in the first area or in the second area), the second area 403B is zoomed into a third area 404B, and second information of the article A is displayed in the third area 404B. A second arrangement of the second information and a second information amount of the second information are adapted to the size of the third area. The second information includes at least one of: the name of the article A, the advertisement of the article A, the function keywords of the article A, the price of the article A, the specification of the article A, and the like. According to the size of the third area, the second information may be arranged in a form adapted to that size, so that a display effect conforming to the browsing habits of the user can be achieved. Due to the size limitation of the third area, when the second information is displayed, the second information amount needs to be adapted to the size of the third area, for example by displaying according to the priority of each piece of second information: on the premise of ensuring readability, the second information with higher priority is preferentially displayed in the third area; for example, the name and the price of the article A are displayed in the third area, but the specification of the article A is not.
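The priority-and-size fitting described in the two examples above can be sketched as follows (TypeScript; the field names, priority values, and line-height model are illustrative assumptions, not details given in the application):

```typescript
// A minimal sketch of fitting article information to a display area by priority.
interface ItemField {
  label: string;      // e.g. "name", "price", "specification"
  text: string;
  priority: number;   // higher value = more important, displayed first
  lineHeight: number; // vertical space this field needs when rendered
}

// Pick the highest-priority fields that fit into the given area height,
// so readability is preserved when the second or third area is small.
function selectFieldsForArea(fields: ItemField[], areaHeight: number): ItemField[] {
  const sorted = [...fields].sort((a, b) => b.priority - a.priority);
  const chosen: ItemField[] = [];
  let used = 0;
  for (const field of sorted) {
    if (used + field.lineHeight <= areaHeight) {
      chosen.push(field);
      used += field.lineHeight;
    }
  }
  return chosen;
}
```

With such a rule, a small area keeps only the name and price while a larger area also shows the specification, matching the behaviour described above.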
As an example, the zooming here may also be an equal-scale zoom (1:1 zoom) triggered by a continuous up-and-down sliding operation, i.e., the actual area size does not change. In addition, when a card of the article is displayed in the second area, the second area may also be zoomed into the third area by a click operation on the card, and the second information is displayed in the third area (for example, more detailed graphic information is displayed in the form of a pop-up window).
The video of the recommended article and the information of the article are displayed in two separate areas of the man-machine interaction interface; illustrating the article with the video enables a comprehensive presentation of the article, and the display area of the article information can be zoomed by triggering a sliding operation on the man-machine interaction interface, which is equivalent to realizing diversified display-mode conversion through a single sliding operation.
In some embodiments, referring to fig. 3B, playing the video of the recommended item in the first area of the human-computer interaction interface in step 101 may be implemented by step 1011 and step 1012 shown in fig. 3B.
In step 1011, an information stream is displayed in the human-machine interactive interface, wherein the information stream includes recommendation information for recommending the item.
In step 1012, in response to the triggering operation for the recommendation information, playing the video of the recommended item in a first area of the human-computer interaction interface, wherein the first area is a full area of the human-computer interaction interface or a partial area of the human-computer interaction interface.
According to the embodiment of the application, the video playing scene can be entered directly through the recommendation information, which effectively improves man-machine interaction efficiency. Besides entering video playing through the recommendation information in the information stream, video playing can also be entered through web page advertisements or search advertisements; the delivery scene of the recommendation information is not limited.
As an example, referring to fig. 4A, an information stream is displayed on the man-machine interaction interface 401A. The information stream includes recommendation information 402A for recommending an article, as well as dynamic information posted by social users and recommendation information posted by advertisers. In response to a trigger operation for the recommendation information 402A (e.g., clicking a jump link or an image in the recommendation information), a video of the recommended article A is played in a first area 403A of the man-machine interaction interface 401A. The first area may be the full area of the man-machine interaction interface (e.g., full-screen play) or a partial area of the man-machine interaction interface (e.g., non-full-screen play).
In some embodiments, the information stream further includes at least one piece of other recommendation information for recommending at least one other article. In response to a video switching operation for the video, a video of a recommended target article is played in the first area of the man-machine interaction interface; the target article is one of the at least one other article, and the other recommendation information corresponding to the target article is adjacent to the recommendation information in the information stream. The embodiment of the application can improve recommendation efficiency and recommendation diversity.
As an example, referring to fig. 4A, the man-machine interaction interface 401A displays recommendation information 402A for a recommended article A and also displays recommendation information 404A for a recommended article B. The video switching operation may be a double-click operation on the video, which displays a card of the next video, and the video switching operation may also be a click operation on that card. By triggering the video switching operation on the video, another video is played in the first area of the man-machine interaction interface; the other video is a video of the recommended target article, and the target article is the article recommended in the recommendation information 404A. Since the recommendation information 402A and the recommendation information 404A are adjacent in the information stream (adjacent meaning that no other recommendation information lies between them), the next video is the video of the recommended article B corresponding to the recommendation information 404A.
In some embodiments, before the video of the recommended article is played in the first area of the man-machine interaction interface, a first frame image of the video is acquired and displayed; the complete resource of the video is then loaded, and when the loading is completed, the video is played in the first area of the man-machine interaction interface. The embodiment of the application prevents the user from perceiving the loading delay, thereby improving the viewing experience.
As an example, in response to a user's click operation on an advertisement (recommendation information), the first frame image of the video is loaded preferentially, and then the complete resource of the video is loaded; the video is played after its resource is loaded, and a commodity card carrying commodity information (such as the first information) is displayed in the second area.
In some embodiments, in step 101, playing the video of the recommended article in the first area of the human-computer interaction interface may be implemented by the following technical scheme: when the aspect ratio of the man-machine interaction interface is the same as that of the video, playing the video in full screen in the man-machine interaction interface; when the aspect ratio of the man-machine interaction interface is different from that of the video, cutting the video based on the width and the set aspect ratio of the man-machine interaction interface, and playing the cut video in the middle area of the man-machine interaction interface.
As an example, the video is played in a player whose width and position are initialized before playing. Specifically, if the aspect ratio of the man-machine interaction interface of the terminal equals the aspect ratio of the video, the video is set to full screen; otherwise, the width of the man-machine interaction interface is set as the video width, the video height is set to 16/9 times the video width, the part of the video exceeding this height is cropped, and the video player is vertically centered, so that the area in the middle of the man-machine interaction interface that matches the video size serves as the first area.
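A minimal sketch of this sizing rule is given below (TypeScript; the layout structure and the tolerance used to compare aspect ratios are assumptions for illustration):

```typescript
// Compute the player layout from the interface and video dimensions (in pixels).
interface PlayerLayout {
  width: number;
  height: number;
  fullScreen: boolean;
  cropOverflow: boolean; // crop the part of the video exceeding the computed height
}

function layoutPlayer(screenW: number, screenH: number, videoW: number, videoH: number): PlayerLayout {
  const sameRatio = Math.abs(screenH / screenW - videoH / videoW) < 1e-3;
  if (sameRatio) {
    // Same aspect ratio: play the video full screen.
    return { width: screenW, height: screenH, fullScreen: true, cropOverflow: false };
  }
  // Otherwise: player width = interface width, height = 16/9 of that width,
  // overflow cropped, and the player is vertically centered in the interface.
  const width = screenW;
  const height = (16 / 9) * width;
  return { width, height, fullScreen: false, cropOverflow: true };
}
```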
In some embodiments, when a video of a recommended item is played in a first area of a human-machine interaction interface, the video is paused in response to a click operation for the video, and a play control is displayed on a pause screen of the video.
In some embodiments, when a video of a recommended item is played in a first area of a human-machine interaction interface, an audio control corresponding to the video is displayed; responding to triggering operation for the audio control, and switching the audio playing state of the video, wherein the audio playing state comprises the following steps: mute play state and non-mute play state.
As an example, a play control component and a sound control component are initialized. Specifically, a play control button (play control) is rendered, and the default play control button is hidden. When a click operation on the video is detected, the video is paused and the play control button is shown; when a click operation on the play button is detected, the play control button is hidden and the video resumes playing. Sound is turned on by default when the video plays; a sound switch button (audio control) is initialized and always displayed, and a click operation on the sound switch button toggles the sound on or off.
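The control behaviour above can be sketched with standard DOM APIs as follows (TypeScript; the element ids are assumptions, and a real applet or player SDK would expose equivalent events):

```typescript
const video = document.getElementById('ad-video') as HTMLVideoElement;
const playBtn = document.getElementById('play-btn') as HTMLButtonElement;
const soundBtn = document.getElementById('sound-btn') as HTMLButtonElement;

playBtn.hidden = true;   // the play control is hidden by default while playing
video.muted = false;     // sound is on by default
soundBtn.hidden = false; // the sound switch is always visible

// Clicking the video pauses it and shows the play control on the pause screen.
video.addEventListener('click', () => {
  video.pause();
  playBtn.hidden = false;
});

// Clicking the play control hides it and resumes playback.
playBtn.addEventListener('click', () => {
  playBtn.hidden = true;
  void video.play();
});

// Clicking the sound switch toggles between muted and unmuted playback.
soundBtn.addEventListener('click', () => {
  video.muted = !video.muted;
});
```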
In some embodiments, when the area of the first region that is occluded exceeds an area threshold, the video is paused; when the area of the first area which is blocked changes from exceeding the area threshold value to not exceeding the area threshold value, continuing to play the video from the position where the video is paused.
As an example, the first area is the video playing area, and the second area and the third area are areas for displaying information related to the article. The first area may be occluded by the second area or the third area, or they may not occlude each other. When the occluded area of the first area exceeds an area threshold, for example when the video is completely occluded, the video is paused, because the user cannot see the video content at that moment; pausing effectively saves computing resources and also prevents the user from missing video content, thereby improving display efficiency.
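A sketch of this occlusion rule follows (TypeScript; the 0.9 threshold and the way the occluded ratio is measured are assumptions, the text only requires some area threshold):

```typescript
const AREA_THRESHOLD = 0.9; // fraction of the first area that may be covered
let pausedByOcclusion = false;

function onOcclusionChange(video: HTMLVideoElement, occludedRatio: number): void {
  if (occludedRatio > AREA_THRESHOLD && !video.paused) {
    video.pause();       // the user cannot see the content, so save resources
    pausedByOcclusion = true;
  } else if (occludedRatio <= AREA_THRESHOLD && pausedByOcclusion) {
    void video.play();   // resume from the position where the video was paused
    pausedByOcclusion = false;
  }
}
```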
In some embodiments, referring to fig. 3C, when at least one of the first information and the second information includes a purchase portal for the article, the following steps 103 to 105 may also be performed.
In step 103, in response to the triggering operation for the purchase portal, displaying an order page of the item in a fourth area of the man-machine interaction interface, wherein the order page comprises a payment portal, and the fourth area is obtained by scaling the second area or the third area.
In step 104, purchase information of the item and a shipping address of the item are displayed in the order page in response to the information input operation for the order page.
In step 105, a payment process for the article is performed in response to a trigger operation for the payment portal. The embodiment of the application helps the user complete the input of the purchase information and the shipping address within the same order page, which avoids switching among multiple interfaces and improves the user's purchase efficiency.
As an example, referring to fig. 4C, the first information displayed in the second area 402C of the man-machine interaction interface 401C includes a purchase portal 403C. In response to a trigger operation for the purchase portal 403C, an order page of the article is displayed in a fourth area 404C of the man-machine interaction interface 401C; the order page may take the form of a pop-up window or a floating layer, and the fourth area 404C is obtained by scaling the second area 402C. In response to an information input operation on the order page in the fourth area, the purchase information of the article A (quantity, color, specification, etc.) and the shipping address are displayed in the order page; in response to a trigger operation for the payment portal 405C, a payment process for the article A is performed.
As an example, referring to fig. 4D, the second information displayed in the third area 402D of the man-machine interaction interface 401D includes a purchase portal 403D. In response to a trigger operation for the purchase portal 403D, an order page of the article is displayed in a fourth area 404D of the man-machine interaction interface 401D; the order page may take the form of a pop-up window or a floating layer. The fourth area 404D is obtained by scaling the third area (the scaling here also includes a 1:1 scaling, i.e., no change in size; when the fourth area and the third area are the same size, the order page may be displayed directly in, or based on, the pop-up window of the third area). In response to an information input operation on the order page in the fourth area, the purchase information of the article A (quantity, color, specification, etc.) and the shipping address are displayed in the order page; in response to a trigger operation for the payment portal 405D, a payment process for the article A is performed.
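The in-page order flow of steps 103 to 105 can be sketched as follows (TypeScript; the types, function names, and payment helper are illustrative assumptions rather than the application's implementation):

```typescript
interface OrderDraft {
  itemId: string;
  quantity: number;
  spec: string;
  address: string;
}

type PaymentFn = (draft: OrderDraft) => Promise<void>;

// Triggering the purchase portal opens the order page (pop-up or floating layer)
// in the fourth area; filling it in and paying never leaves the video page.
async function runOrderFlow(
  itemId: string,
  fillForm: (draft: OrderDraft) => void,
  pay: PaymentFn,
): Promise<void> {
  const draft: OrderDraft = { itemId, quantity: 1, spec: '', address: '' };
  fillForm(draft);  // step 104: purchase information and shipping address on one page
  await pay(draft); // step 105: payment executed in place
}
```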
In some embodiments, when a video of a recommended item is played in a first area of a human-computer interaction interface, interactive information of the item is displayed in an overlaid mode on the video; wherein the interaction information comprises at least one of the following: purchase information for an item, evaluation information for an item, collection information for an item, browsing information for an item, the purchase information indicating a purchaser of the item and a purchase time of the corresponding purchaser. According to the embodiment of the application, the immersive shopping atmosphere can be provided, and the man-machine interaction efficiency is improved.
As an example, referring to fig. 4A, a video of the recommended article A is played in the first area 403A of the man-machine interaction interface 401A, and interaction information 405A is additionally displayed superimposed on the video. The interaction information 405A may be carried on an atmosphere component and includes at least one of the following: purchase information for article A (e.g., "a user just purchased article A"), evaluation information for article A (e.g., "a user: article A is excellent, strongly recommended"), collection information for article A (e.g., "a user just added article A to favorites"), and browsing information for article A (e.g., "a user just browsed article A").
In some embodiments, the purchase information and the evaluation information of the article are displayed in a superimposed manner on the video, and the method can be realized by the following technical scheme: and broadcasting the interactive information of the objects on the video in turn according to the interactive sequence of the interactive information of the objects. The embodiment of the application can help the user to acquire real-time shopping information, thereby having immersive shopping experience.
As an example, as the advertisement keeps being delivered, the interaction information of the article accumulates. To ensure that viewing of the video is not affected, the area onto which the interaction information is superimposed is limited, while the amount of interaction information is not, so the interaction information needs to be carouseled within the set area. The carousel follows the order of interaction time: whenever the latest interaction information is generated, it is played preferentially, followed by the interaction information whose interaction time is closest to the current moment; during periods when no new interaction information is generated, all the interaction information is played in turn.
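A sketch of this carousel rule (TypeScript; the record shape and the way state is carried between broadcasts are assumptions):

```typescript
interface InteractionRecord {
  text: string; // e.g. "a user just purchased article A"
  time: number; // interaction timestamp in milliseconds
}

// Broadcast newest-first: a freshly generated record preempts the carousel,
// and when nothing new arrives all records are cycled in order of recency.
function nextToBroadcast(
  records: InteractionRecord[],
  lastShownIndex: number,
  lastShownTime: number,
): { record?: InteractionRecord; index: number } {
  const newestFirst = [...records].sort((a, b) => b.time - a.time);
  if (newestFirst.length === 0) return { index: -1 };
  // A record newer than the one shown last restarts the carousel at the top.
  if (newestFirst[0].time > lastShownTime) return { record: newestFirst[0], index: 0 };
  const index = (lastShownIndex + 1) % newestFirst.length;
  return { record: newestFirst[index], index };
}
```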
In some embodiments, before the interaction information of the article is displayed superimposed on the video, the interaction information of the article is obtained; rendering is performed based on the interaction information to obtain a component carrying the interaction information, and the component, with a set transparency applied, is displayed on the video.
As an example, a purchase atmosphere component is synthesized and presented onto a video, specifically, a purchase and browsing record (interactive information) of a corresponding commodity is pulled, rendered in a ticker form superimposed over the video, and set translucent so that it does not affect video viewing.
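A minimal rendering sketch (TypeScript with DOM APIs; the class name, position, and the 0.6 opacity are assumptions used to illustrate the set transparency):

```typescript
// Render one interaction record as a semi-transparent ticker element over the video.
function renderTicker(videoContainer: HTMLElement, text: string): HTMLElement {
  const el = document.createElement('div');
  el.className = 'purchase-ticker';
  el.textContent = text;
  el.style.position = 'absolute';
  el.style.left = '16px';
  el.style.bottom = '96px'; // lower-left corner of the playing area
  el.style.opacity = '0.6'; // translucent so the video remains watchable
  videoContainer.appendChild(el);
  return el;
}
```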
In some embodiments, when the human-computer interaction interface is in the portrait display mode, scaling the second area to form a third area may be achieved by the following technical scheme: when the sliding operation is an upward sliding operation, moving the movable edge of the second area to the top of the man-machine interaction interface by a first distance to form a third area; and when the sliding operation is a downward sliding operation, moving the movable edge of the second area to the bottom of the man-machine interaction interface by a first distance to form a third area. The embodiment of the application can flexibly adjust the region change and improve the man-machine interaction efficiency.
As an example, referring to fig. 4E, the first information of the article A is displayed in the second area 402E of the man-machine interaction interface 401E. In response to a sliding operation on the man-machine interaction interface (which may be triggered in the first area or in the second area), the second area 402E is enlarged into the third area 403E: the upper edge of the second area serves as the movable edge, and the upper edge 404E is moved a first distance toward the top of the man-machine interaction interface 401E to obtain the third area 403E. Referring to fig. 4F, the first information of the article A is displayed in the second area 402F of the man-machine interaction interface 401F. In response to a sliding operation (which may be triggered in the first area or in the second area), the second area 402F is shrunk into the third area 403F: the upper edge of the second area serves as the movable edge and is moved a first distance toward the bottom of the man-machine interaction interface 401F to obtain the third area 403F.
In some embodiments, the first distance is positively correlated with a first parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
As an example, the magnitude of the area zoom may be controlled by the first distance: the longer the operation duration of the sliding operation, the longer the operation distance, or the larger the operation pressure, the larger the first distance. The embodiment of the application improves man-machine interaction diversity and allows the user to control the display mode flexibly.
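One way to realize this positive correlation is sketched below (TypeScript; the linear form, the weights, and the normalization constants are assumptions, since the text only requires the distance to grow with each parameter):

```typescript
interface SlideGesture {
  durationMs: number; // how long the sliding operation lasted
  distancePx: number; // how far the finger travelled
  pressure: number;   // normalized touch pressure in [0, 1]
}

// The first distance grows monotonically with duration, distance, and pressure,
// capped at the maximum distance the movable edge may travel.
function firstDistance(gesture: SlideGesture, maxDistance: number): number {
  const score =
    0.3 * Math.min(gesture.durationMs / 1000, 1) +
    0.5 * Math.min(gesture.distancePx / 300, 1) +
    0.2 * Math.min(gesture.pressure, 1);
  return Math.min(maxDistance, score * maxDistance);
}
```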
In some embodiments, content features of the currently played content of the video and object features of the object operating the man-machine interaction interface are obtained, and a feature distance between the content features and the object features is acquired. When the third area is larger than the second area, a first distance positively correlated with the feature distance is acquired; when the third area is smaller than the second area, a first distance negatively correlated with the feature distance is acquired. The embodiment of the application provides personalized man-machine interaction services for the user and improves man-machine interaction efficiency.
As an example, the content features of the currently played content of the video can be obtained through an image understanding technology and are represented as a vector; the object is the user operating the man-machine interaction interface, and the object features, also represented as a vector, are obtained based on data related to the user. The feature distance measures whether the user is interested in the currently played content: the larger the feature distance, the less interested the user is. Therefore, when the third area is larger than the second area, the first distance may be stretched to occlude more of the video content, so the first distance is positively correlated with the feature distance; conversely, a smaller feature distance indicates that the user is interested in the currently played content, so when the third area is smaller than the second area the first distance may be used to reveal more of the video content, and the first distance is negatively correlated with the feature distance.
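A sketch of this interest-based adjustment (TypeScript; the cosine-based distance and the scaling function are illustrative assumptions, the text only fixes the sign of the correlation):

```typescript
// Feature distance between the content feature vector and the user feature vector:
// a larger distance means the user is less interested in the current content.
function featureDistance(contentFeature: number[], objectFeature: number[]): number {
  const dot = contentFeature.reduce((sum, v, i) => sum + v * objectFeature[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  const cosine = dot / (norm(contentFeature) * norm(objectFeature) || 1);
  return 1 - cosine;
}

// When enlarging the info area, a larger distance allows more video to be covered;
// when shrinking it, a larger distance reveals less additional video.
function adjustedFirstDistance(base: number, distance: number, enlarging: boolean): number {
  return enlarging ? base * (1 + distance) : base / (1 + distance);
}
```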
In some embodiments, when the third area is larger than the second area, first history data in which the third area was larger than the second area is obtained; the historical first distances of multiple enlargement changes are extracted from the first history data (statistical data that includes the area enlargement caused by each sliding operation), and the average of these historical first distances is taken as the first distance. When the third area is smaller than the second area, second history data in which the third area was smaller than the second area is obtained; the historical first distances of multiple reduction changes are extracted from the second history data (also statistical data, including the area reduction caused by each sliding operation), and the average of these historical first distances is taken as the first distance. The embodiment of the application provides personalized man-machine interaction services for the user and improves man-machine interaction efficiency.
In some embodiments, when neither the upper edge nor the lower edge of the second area overlaps an edge of the man-machine interaction interface, moving the movable edge of the second area a first distance toward the top of the man-machine interaction interface to form the third area may be achieved as follows: of the upper edge and the lower edge, the edge with the larger distance to its corresponding boundary of the man-machine interaction interface is taken as the movable edge, where the corresponding boundary of the upper edge is the upper boundary of the man-machine interaction interface and the corresponding boundary of the lower edge is the lower boundary; the movable edge of the second area is moved a first distance toward the top of the man-machine interaction interface, and the other edges of the second area are extended to positions overlapping the boundaries of the man-machine interaction interface, to form the third area.
As an example, referring to fig. 4B, the first information of the article A is displayed in the second area 403B of the man-machine interaction interface 401B. In response to a sliding operation (which may be triggered in the first area or in the second area), the second area 403B is zoomed into the third area 404B: since the distance between the upper edge 405B of the second area and the upper boundary is greater than the distance between the lower edge 406B and the lower boundary, the upper edge 405B is taken as the movable edge; the upper edge is moved a first distance toward the top, and the other edges are extended to overlap the man-machine interaction interface 401B, resulting in the third area 404B.
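The choice of movable edge can be sketched as follows (TypeScript; screen coordinates grow downwards, and the region shape is an assumption):

```typescript
interface Region {
  top: number;    // y coordinate of the upper edge
  bottom: number; // y coordinate of the lower edge
}

// The edge with the larger gap to its corresponding interface boundary is movable.
function pickMovableEdge(area: Region, screenHeight: number): 'top' | 'bottom' {
  const gapToUpperBoundary = area.top;
  const gapToLowerBoundary = screenHeight - area.bottom;
  return gapToUpperBoundary >= gapToLowerBoundary ? 'top' : 'bottom';
}
```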
In some embodiments, when the man-machine interaction interface is in the landscape display mode, zooming the second area to form the third area may be achieved as follows: when the sliding operation is a leftward sliding operation, the movable edge of the second area is moved a second distance toward the left side of the man-machine interaction interface to form the third area; when the sliding operation is a rightward sliding operation, the movable edge of the second area is moved a second distance toward the right side of the man-machine interaction interface to form the third area. The implementation of the landscape display mode is similar to that of the portrait display mode.
As an example, the second distance is positively correlated with a second parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
As an example, acquiring content characteristics of currently played content of a video and object characteristics of an object operating a man-machine interaction interface; acquiring a feature distance between a content feature and an object feature; when the third area is larger than the second area, acquiring a second distance positively correlated with the characteristic distance; and when the third area is smaller than the second area, acquiring a second distance which is inversely related to the characteristic distance.
As an example, when the third area is larger than the second area, third history data of the third area larger than the second area is acquired, a history second distance of the plurality of amplified changes is extracted from the third history data, and an average value of the history second distances of the plurality of amplified changes is taken as the second distance; and when the third area is smaller than the second area, acquiring fourth historical record data of which the third area is smaller than the second area, extracting historical second distances of the multiple reduced changes from the fourth historical record data, and taking an average value of the historical second distances of the multiple reduced changes as the second distance.
In some embodiments, when the second area is an area that has not been zoomed, displaying the first information of the article in the second area of the man-machine interaction interface may be achieved by performing at least one of the following: displaying the first information of the article in a second area of a set size; displaying the first information of the article in a second area that does not occlude the first area; and displaying the first information of the article in a second area that conforms to the reading habits of the object, the object being the one operating the man-machine interaction interface.
As an example, the second area may have a default size, i.e., the set size. In the embodiment of the application, a second area that has not been adjusted by any sliding operation has the default size: for example, when the recommendation information is triggered to play the video corresponding to the article A, the article information (the first information) of the article A is simultaneously displayed in a second area of the default size, whose size has not yet been adjusted by a sliding operation. In the subsequent process, the second area before the second sliding operation is the third area formed after the first sliding operation.
As an example, the second area may also be an area that does not occlude the first area, which improves information display efficiency: occlusion between the second area and the first area is avoided, so the user's efficiency in acquiring information is not affected.
As an example, the object is the user operating the man-machine interaction interface, and the reading habits of the object may be obtained from historical statistical data: for example, the second-area size used most frequently before an order was generated in the user's past shopping sessions is obtained from the historical statistical data, and this most frequently used second area can be considered the second area conforming to the reading habits of the object.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In some embodiments, the terminal receives a user operation of clicking the recommendation information and sends a data service request to the server; the server sends the video and the first information to the terminal; the terminal plays the video of the recommended article in the first area of the man-machine interaction interface and displays the first information of the article in the second area of the man-machine interaction interface. The terminal receives a sliding operation in the man-machine interaction interface and sends a data request to the server; the server returns the second information to the terminal; the terminal zooms the second area to form a third area and displays the second information of the article in the third area to replace the first information.
In some embodiments, the flow of the landing page is as follows: in response to a user clicking on an advertisement (for example, clicking an advertisement in an information stream or a search advertisement banner), the landing page is displayed; the first frame image of the video is loaded preferentially, and then the video resource is loaded; after the video resource is loaded, the video is played and a card carrying commodity information is displayed; in response to a click operation on the card, the commodity detail pop-up window is shown; in response to a trigger operation for the purchase portal, the order generation pop-up window is shown; and in response to an order filling operation (e.g., filling in the shipping address) in the order generation pop-up window, the order flow for the commodity is completed.
In some embodiments, referring to fig. 4A, a piece of recommendation information (e.g., an e-commerce advertisement) is displayed in an information-flow interface (e.g., the user's friend circle). In response to a trigger operation (e.g., a click operation) for the recommendation information, an immersive video landing page is displayed: a video introducing the commodity is played full screen, real purchase information and evaluation information of the commodity are carouseled at the lower left of the page, and a commodity card is displayed at the bottom of the page.
In some embodiments, referring to fig. 4B, in response to a click operation or an upward sliding operation on the commodity card (i.e., when the user wants a detailed description of the commodity, the user may click the commodity card at the bottom of the screen or slide up on the screen), a commodity detail pop-up window is shown, displaying information such as the detailed product description and the price of the commodity.
In some embodiments, referring to fig. 4C, when the user decides to purchase the commodity, the order pop-up window is shown in response to the user clicking the purchase portal in the commodity card; the order information is displayed in response to the user filling in the purchase information (which completes the order-placing process), and the whole process never leaves the video playing page.
In some embodiments, a user browses an advertiser's dress-selling advertisement in a friend circle, clicks on the advertisement, and enters the landing page. The landing page displays a vertical full-screen immersive video in which a model shows the fit of the dress from various angles, so the user watches the video immersively while also seeing the carouseled purchase information and feeling the promotional atmosphere, which encourages the user to click the commodity card and quickly submit an order.
In some embodiments, the immersive video provides advertisers with a new form of e-commerce advertisement presentation. A common e-commerce advertisement directly displays a commodity purchase page for the user to place an order, whereas the embodiment of the application simulates a short-video shopping scene, so that the user can get to know the commodity immersively through the merchant's video material, such as real-person try-ons or celebrity-promoted items. The user can thus fully understand the selling points of the commodity before deciding whether to place an order.
The embodiment of the application involves two parts, a configuration side and a playing side: the configuration side performs the configuration of the landing page, including uploading, compressing, and transcoding the video material; the playing side is responsible for pulling the page configuration according to the landing page identifier and rendering it.
In some embodiments, referring to fig. 5: in step 501, a landing page is created; in step 502, the commodity is selected and other configuration items are set; in step 503, the video material is uploaded; in step 504, the video is transcoded into MP4 format; in step 505, the video is compressed to 720P; in step 506, the video is uploaded to the CDN; in step 507, the configuration linking the video to the landing page is saved; in step 508, the landing page review process is performed; and in step 509, the landing page is delivered.
In some embodiments, referring to fig. 6: in step 601, the advertisement is clicked and the landing page is opened; in step 602, the configuration of the landing page is pulled according to the landing page identifier; in step 603, the video is rendered; in step 604, the order pop-up window is rendered and hidden; in step 605, the graphic-detail pop-up window is rendered and hidden (steps 603 to 605 are not limited to this order); and in step 606, the interface display is changed in response to user interaction.
In some embodiments, the landing page interface has three forms, namely a video playing state, a half-screen graphic-text detail state, and a full-screen graphic-text detail state, and the order pop-up window can be pulled up in any of the three forms. The core of the technical implementation of the embodiment of the present application includes the play control of the video and the switching among the three interface forms.
Turning to the video play control logic, referring to fig. 7: in step 701, the player width and height are initialized and the video is played; specifically, the video is set to full screen if the aspect ratio of the terminal is equal to the aspect ratio of the video, otherwise the video width is set to the screen width, the video height is set to 16/9 times the video width, the portion of the video exceeding the screen height is cropped, and the video player is vertically centered. In step 702, a play control component and a sound control component are initialized; specifically, a play control button is rendered and the default play control button is hidden; when a click operation on the video is detected, the video is paused and the play control button is displayed; when a click operation on the play button is detected, the play control button is hidden and the video is played; sound is turned on by default, a sound switch button is initialized and always displayed, and a detected click operation on the sound switch button switches the sound on or off. In step 703, page form switching is detected and the video playing state is controlled; specifically, video playing is paused when switching from either of the other two states to the full-screen graphic-text detail state, and playing continues from the paused position when switching from the full-screen graphic-text detail state back to either of the other two states. In step 704, a purchase atmosphere component is composited and displayed on the video; specifically, the purchase and browsing records of the commodity corresponding to the landing page are pulled, rendered as a marquee superimposed on the video, and displayed semi-transparently so that the purchase atmosphere component does not affect viewing of the video.
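A minimal browser-side sketch of steps 701, 702 and 704 is given below, assuming standard DOM APIs; the element identifiers and the 0.6 opacity are illustrative assumptions rather than values prescribed by the embodiment, and step 703 (pausing and resuming on state switches) is handled by the state switching described next.

```typescript
// Minimal sketch of the play-control logic of fig. 7; call initPlayer after the
// video's "loadedmetadata" event so that videoWidth and videoHeight are known.
// The element identifiers and the 0.6 opacity are illustrative assumptions.
function initPlayer(video: HTMLVideoElement): void {
  const screenW = window.innerWidth;
  const screenH = window.innerHeight;
  const screenRatio = screenH / screenW;
  const videoRatio = video.videoHeight / video.videoWidth;

  if (Math.abs(screenRatio - videoRatio) < 0.01) {
    // Step 701: aspect ratios match, so play full screen.
    video.style.width = `${screenW}px`;
    video.style.height = `${screenH}px`;
  } else {
    // Otherwise width follows the screen, height is 16/9 of that width,
    // the overflow is cropped and the player is vertically centered.
    const h = (screenW * 16) / 9;
    video.style.width = `${screenW}px`;
    video.style.height = `${h}px`;
    video.style.objectFit = "cover";
    video.style.marginTop = `${(screenH - h) / 2}px`;
  }

  // Step 702: tap the video to pause or resume; a persistent button toggles sound.
  video.addEventListener("click", () => {
    if (video.paused) void video.play();
    else video.pause();
  });
  document.getElementById("sound-toggle")?.addEventListener("click", () => {
    video.muted = !video.muted; // sound is on by default
  });

  // Step 704: overlay the purchase-atmosphere marquee semi-transparently so
  // that it does not affect viewing of the video.
  const marquee = document.getElementById("purchase-marquee");
  if (marquee) marquee.style.opacity = "0.6";
}
```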
Referring to fig. 8, the video playing state is switched to the half-screen graphic-text detail state by a click operation or an upward sliding operation, the half-screen graphic-text detail state is switched to the full-screen graphic-text detail state by an upward sliding operation, and the full-screen graphic-text detail state is switched to the video playing state by a downward sliding operation.
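This switching can be viewed as a small state machine. The sketch below covers only the transitions named above; the state and gesture names are illustrative, and gesture detection itself is omitted.

```typescript
// Minimal state machine for the interface forms of fig. 8; gesture detection
// is omitted, and the state and gesture names are illustrative.
type PageState = "videoPlaying" | "halfScreenDetail" | "fullScreenDetail";
type Gesture = "clickCard" | "slideUp" | "slideDown";

function nextState(current: PageState, gesture: Gesture): PageState {
  switch (current) {
    case "videoPlaying":
      // A click on the commodity card or an upward slide opens half-screen details.
      if (gesture === "clickCard" || gesture === "slideUp") return "halfScreenDetail";
      return current;
    case "halfScreenDetail":
      // An upward slide expands to full-screen details.
      if (gesture === "slideUp") return "fullScreenDetail";
      return current;
    case "fullScreenDetail":
      // A downward slide returns to the video playing state.
      if (gesture === "slideDown") return "videoPlaying";
      return current;
  }
}

// Example: clicking the card and then sliding up reaches the full-screen detail state.
console.log(nextState(nextState("videoPlaying", "clickCard"), "slideUp"));
```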
According to the embodiment of the present application, the recommendation effect of the commodity is enhanced by displaying the video content immersively, thereby creating an immersive shopping experience for the user. The user can quickly place an order or view commodity information while watching the short video to learn the commodity details, and can switch directly and flexibly between placing an order and watching the video, which improves the user's order-placing experience, further improves the conversion effect of the advertiser's advertisements, and ultimately improves the recommendation efficiency.
It will be appreciated that the embodiments of the present application involve related data such as user information; when the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use, and processing of the related data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
The following continues the description of an exemplary structure of the display processing device 455 provided by the embodiments of the present application implemented as software modules. In some embodiments, as shown in fig. 2, the software modules stored in the display processing device 455 of the memory 450 may include: the display module 4551, configured to play a video of a recommended item in a first region of the human-computer interaction interface and to display first information of the item in a second region of the human-computer interaction interface; and the sliding module 4552, configured to, in response to a sliding operation in the human-computer interaction interface, zoom the second region to form a third region, and display second information of the item in the third region to replace the first information.
In some embodiments, the display module 4551 is further configured to: displaying an information stream in a human-computer interaction interface, wherein the information stream comprises recommendation information for recommending the article; and responding to the triggering operation aiming at the recommendation information, and playing the video of the recommended article in a first area of the human-computer interaction interface, wherein the first area is the whole area of the human-computer interaction interface or a local area of the human-computer interaction interface.
In some embodiments, the information stream further comprises at least one other recommendation information for recommending at least one other item; the display module 4551 is further configured to: responding to video switching operation aiming at the video, and playing the video of the recommended target object in a first area of the man-machine interaction interface; the target object is derived from at least one other object, and other recommended information corresponding to the target object is adjacent to the recommended information in the information flow.
In some embodiments, before playing the video of the recommended item in the first area of the human-machine interface, the display module 4551 is further configured to: acquiring a first frame image of a video; displaying a first frame image; loading the complete resources of the video; and when the complete resource loading is completed, playing the video in the first area of the man-machine interaction interface.
In some embodiments, the display module 4551 is further configured to: when the aspect ratio of the man-machine interaction interface is the same as that of the video, playing the video in full screen in the man-machine interaction interface; when the aspect ratio of the man-machine interaction interface is different from that of the video, cutting the video based on the width and the set aspect ratio of the man-machine interaction interface, and playing the cut video in the middle area of the man-machine interaction interface.
In some embodiments, when playing the video of the recommended item in the first area of the human-machine interface, the display module 4551 is further configured to: and responding to clicking operation on the video, pausing playing the video, and displaying a playing control on a pause picture of the video.
In some embodiments, when playing the video of the recommended item in the first area of the human-machine interface, the display module 4551 is further configured to: display an audio control corresponding to the video; and switch the audio playing state of the video in response to a triggering operation for the audio control, wherein the audio playing state includes: a mute play state and a non-mute play state.
In some embodiments, the display module 4551 is further configured to: pause playing the video when the blocked area of the first area exceeds an area threshold; and continue playing the video from the position where the video was paused when the blocked area of the first area changes from exceeding the area threshold to not exceeding the area threshold.
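One possible way to approximate this behavior in a browser is to treat the share of the player that has left the viewport as the blocked area, using an IntersectionObserver, as in the sketch below; the 0.5 threshold is an assumed value, and occlusion by overlaying elements (rather than by scrolling out of view) would require additional bookkeeping.

```typescript
// Minimal sketch, assuming a browser environment: pause the video when more
// than areaThreshold of the first area is blocked (approximated by how much of
// the element has left the viewport) and resume from the paused position once
// it becomes sufficiently visible again.
function watchOcclusion(video: HTMLVideoElement, areaThreshold = 0.5): void {
  let pausedByOcclusion = false;
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        const blockedRatio = 1 - entry.intersectionRatio;
        if (blockedRatio > areaThreshold && !video.paused) {
          video.pause();              // blocked area exceeds the area threshold
          pausedByOcclusion = true;
        } else if (blockedRatio <= areaThreshold && pausedByOcclusion) {
          void video.play();          // continue from the paused position
          pausedByOcclusion = false;
        }
      }
    },
    { threshold: 1 - areaThreshold }  // fire when crossing the visibility boundary
  );
  observer.observe(video);
}
```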
In some embodiments, at least one of the first information and the second information includes a purchase portal for the item, the display module 4551 further configured to: responding to triggering operation for a purchase entrance, and displaying an order page of the object in a fourth area of a man-machine interaction interface, wherein the order page comprises a payment entrance, and the fourth area is obtained by scaling the second area or the third area; displaying purchase information of the item and a receiving address of the item in the order page in response to an information input operation for the order page; in response to a trigger operation for the payment portal, a payment process for the item is performed.
In some embodiments, when playing the video of the recommended item in the first area of the human-machine interface, the display module 4551 is further configured to: superposing and displaying interactive information of the object on the video; wherein the interaction information comprises at least one of the following: purchase information for an item, evaluation information for an item, collection information for an item, browsing information for an item, the purchase information indicating a purchaser of the item and a purchase time of the corresponding purchaser.
In some embodiments, the display module 4551 is further configured to: and broadcasting the purchase information of the articles in turn according to the sequence of purchasing the articles, and broadcasting the evaluation information of the articles in turn according to the sequence of issuing the evaluation information.
In some embodiments, before the interactive information of the display object is superimposed on the video, the display module 4551 is further configured to: acquiring interaction information of an article; rendering processing is carried out based on the interaction information, and a component bearing the interaction information is obtained; the component to which the set transparency is applied is displayed on the video.
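A minimal sketch of rendering such a component in a browser follows; the /api/items/.../interactions endpoint, the placement, and the 0.6 opacity are hypothetical assumptions for illustration.

```typescript
// Minimal sketch of overlaying interaction information on the video; the
// endpoint, placement and opacity are illustrative assumptions.
interface InteractionInfo {
  text: string; // e.g. "user A purchased this item two minutes ago"
}

async function renderInteractionOverlay(container: HTMLElement, itemId: string): Promise<void> {
  // Acquire the interaction information of the article (hypothetical endpoint).
  const response = await fetch(`/api/items/${itemId}/interactions`);
  const infos: InteractionInfo[] = await response.json();

  // Render a component bearing the information, apply the set transparency,
  // and display it superimposed on the video area.
  const overlay = document.createElement("div");
  overlay.style.position = "absolute";
  overlay.style.bottom = "20%";
  overlay.style.opacity = "0.6";
  overlay.textContent = infos.map((info) => info.text).join("   ");
  container.appendChild(overlay);
}
```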
In some embodiments, the first arrangement of the first information and the first amount of information are adapted to the size of the second area; the second arrangement of the second information and the second amount of information are adapted to the size of the third area.
In some embodiments, when the human-computer interaction interface is in the portrait display mode, the sliding module 4552 is further configured to: when the sliding operation is an upward sliding operation, moving the movable edge of the second area to the top of the man-machine interaction interface by a first distance to form a third area; and when the sliding operation is a downward sliding operation, moving the movable edge of the second area to the bottom of the man-machine interaction interface by a first distance to form a third area.
In some embodiments, the first distance is positively correlated with a first parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
In some embodiments, the sliding module 4552 is further to: acquiring content characteristics of current playing content of a video and object characteristics of an object for operating a human-computer interaction interface; acquiring a feature distance between a content feature and an object feature; when the third area is larger than the second area, acquiring a first distance positively correlated with the characteristic distance; and when the third area is smaller than the second area, acquiring a first distance inversely related to the characteristic distance.
In some embodiments, the sliding module 4552 is further to: when the third area is larger than the second area, acquiring first historical record data of the third area larger than the second area, extracting historical first distances of multiple amplified changes from the first historical record data, and taking an average value of the historical first distances of the multiple amplified changes as the first distance; and when the third area is smaller than the second area, acquiring second historical record data of which the third area is smaller than the second area, extracting historical first distances of multiple reduced changes from the second historical record data, and taking an average value of the historical first distances of the multiple reduced changes as the first distance.
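A minimal sketch of this history-based determination of the first distance follows; the record shape, the data source, and the zero fallback are assumptions for illustration.

```typescript
// Minimal sketch of deriving the first distance from historical record data;
// the ZoomRecord shape and the fallback value are assumptions for illustration.
interface ZoomRecord {
  enlarged: boolean;      // true if the third area was larger than the second area
  firstDistance: number;  // the first distance used in that historical change, in pixels
}

function historicalFirstDistance(history: ZoomRecord[], enlarging: boolean): number {
  // Keep only the records of the same kind of change (enlargement or reduction).
  const matching = history.filter((record) => record.enlarged === enlarging);
  if (matching.length === 0) return 0; // caller falls back to a default distance
  const total = matching.reduce((sum, record) => sum + record.firstDistance, 0);
  return total / matching.length;      // average of the historical first distances
}
```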
In some embodiments, when neither the upper edge nor the lower edge of the second region overlaps an edge of the man-machine interaction interface, the sliding module 4552 is further configured to: take, of the upper edge and the lower edge, the edge that is farther from its corresponding boundary of the man-machine interaction interface as the movable edge, wherein the corresponding boundary of the upper edge is the upper boundary of the man-machine interaction interface and the corresponding boundary of the lower edge is the lower boundary of the man-machine interaction interface; and move the movable edge of the second region a first distance toward the top of the man-machine interaction interface, and extend the other edges of the second region to positions overlapping the boundaries of the man-machine interaction interface, so as to form the third region.
In some embodiments, when the human-machine interface is in the landscape display mode, the sliding module 4552 is further configured to: when the sliding operation is left sliding operation, moving the movable edge of the second area to the left side of the man-machine interaction interface by a second distance to form a third area; when the sliding operation is rightward sliding operation, moving the movable edge of the second area to the right side of the man-machine interaction interface by a second distance to form a third area; wherein the second distance is positively correlated with a second parameter, the second parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
In some embodiments, the display module 4551 is further configured to: at least one of the following processes is performed: displaying first information of the article in a second area conforming to the set size; displaying first information of the article in a second area which does not cause shielding of the first area; and displaying the first information of the object in a second area conforming to the reading habit of the object, wherein the object is an object for operating the man-machine interaction interface.
Embodiments of the present application provide a computer program product, which includes a computer program or computer-executable instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium and executes them, so that the electronic device performs the display processing method provided by the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform a display processing method provided by embodiments of the present application, for example, the display processing method shown in fig. 3A-3C.
In some embodiments, the computer-readable storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
In summary, the video of the recommended item and the information of the item are displayed in two regions of the man-machine interaction interface, and illustrating the item through the video allows the item to be displayed fully; moreover, the display region of the item information can be zoomed by triggering a sliding operation on the man-machine interaction interface, which is equivalent to realizing diversified conversion of the display form through the sliding operation.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (24)

1. A display processing method, the method comprising:
playing a video of a recommended article in a first area of a man-machine interaction interface, and displaying first information of the article in a second area of the man-machine interaction interface;
And in response to a sliding operation in the man-machine interaction interface, zooming the second area to form a third area, and displaying second information of the object in the third area to replace the first information.
2. The method of claim 1, wherein playing the video of the recommended item in the first area of the human-machine interface comprises:
displaying an information stream in the man-machine interaction interface, wherein the information stream comprises recommendation information for recommending the article;
and responding to the triggering operation aiming at the recommendation information, and playing a video recommending the article in a first area of the man-machine interaction interface, wherein the first area is the whole area of the man-machine interaction interface or a local area of the man-machine interaction interface.
3. The method of claim 2, wherein the information stream further comprises at least one other recommendation information for recommending at least one other item; the method further comprises the steps of:
responding to the video switching operation aiming at the video, and playing the video of the recommended target object in a first area of the man-machine interaction interface;
The target object is derived from the at least one other object, and other recommended information corresponding to the target object is adjacent to the recommended information in the information flow.
4. The method of claim 1, wherein prior to playing the video of the recommended item in the first area of the human-machine interface, the method further comprises:
acquiring a first frame image of the video;
displaying the first frame image;
loading the complete resources of the video;
the playing the video of the recommended article in the first area of the man-machine interaction interface comprises:
and when the complete resource loading is completed, playing the video in a first area of the man-machine interaction interface.
5. The method of claim 1, wherein playing the video of the recommended item in the first area of the human-machine interface comprises:
when the aspect ratio of the man-machine interaction interface is the same as that of the video, playing the video in a full screen mode in the man-machine interaction interface;
and when the aspect ratio of the man-machine interaction interface is different from the aspect ratio of the video, cutting the video based on the width and the set aspect ratio of the man-machine interaction interface, and playing the cut video in the middle area of the man-machine interaction interface.
6. The method of claim 1, wherein when playing the video of the recommended item in the first area of the human-machine interaction interface, the method further comprises:
and responding to clicking operation on the video, suspending playing the video, and displaying a playing control on a suspended picture of the video.
7. The method of claim 1, wherein when playing the video of the recommended item in the first area of the human-machine interaction interface, the method further comprises:
displaying an audio control corresponding to the video;
responding to the triggering operation of the audio control, and switching the audio playing state of the video, wherein the audio playing state comprises: a mute play state and a non-mute play state.
8. The method according to claim 1, wherein the method further comprises:
when the blocked area of the first area exceeds an area threshold value, pausing playing the video;
and continuing to play the video from the position where the video is paused when the blocked area of the first area changes from exceeding the area threshold to not exceeding the area threshold.
9. The method of claim 1, wherein at least one of the first information and the second information includes a purchase portal for the item, the method further comprising:
responding to triggering operation for the purchase entrance, and displaying an order page of the object in a fourth area of the man-machine interaction interface, wherein the order page comprises a payment entrance, and the fourth area is obtained by zooming the second area or the third area;
displaying purchase information of the item and a receiving address of the item in the order page in response to an information input operation for the order page;
in response to a trigger operation for the payment portal, a payment process for the item is performed.
10. The method of claim 1, wherein when playing the video of the recommended item in the first area of the human-machine interaction interface, the method further comprises:
superposing and displaying the interactive information of the object on the video;
wherein the interaction information includes at least one of: purchase information for the item, evaluation information for the item, collection information for the item, browsing information for the item, the purchase information indicating a purchaser of the item and a purchase time corresponding to the purchaser.
11. The method of claim 10, wherein displaying interactive information of the item superimposed on the video comprises:
and broadcasting the interactive information of the objects on the video in turn according to the interactive sequence of the interactive information of the objects.
12. The method of claim 10, wherein prior to overlaying interactive information for displaying the item on the video, the method further comprises:
acquiring interaction information of the article;
rendering processing is carried out based on the interaction information, and a component bearing the interaction information is obtained;
the component to which the set transparency is applied is displayed on the video.
13. The method according to claim 1, characterized in that the first arrangement of the first information and the first amount of information are adapted to the size of the second area; the second arrangement of the second information and the second amount of information are adapted to the size of the third area.
14. The method of claim 1, wherein the zooming the second region to form a third region when the human-machine interface is in a portrait display mode comprises:
When the sliding operation is an upward sliding operation, moving the movable edge of the second area to the top of the man-machine interaction interface by a first distance to form the third area;
and when the sliding operation is a downward sliding operation, moving the movable edge of the second area to the bottom of the man-machine interaction interface by a first distance so as to form the third area.
15. The method of claim 14, wherein the first distance is positively correlated with a first parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
16. The method of claim 14, wherein the method further comprises:
acquiring content characteristics of the current playing content of the video and object characteristics of an object for operating the man-machine interaction interface;
acquiring a feature distance between the content feature and the object feature;
when the third area is larger than the second area, acquiring a first distance positively correlated to the characteristic distance;
and when the third area is smaller than the second area, acquiring a first distance inversely related to the characteristic distance.
17. The method of claim 14, wherein the method further comprises:
when the third area is larger than the second area, acquiring first historical record data of the third area larger than the second area, extracting historical first distances of multiple amplified changes from the first historical record data, and taking an average value of the historical first distances of the multiple amplified changes as the first distance;
and when the third area is smaller than the second area, acquiring second historical record data of which the third area is smaller than the second area, extracting historical first distances of multiple reduced changes from the second historical record data, and taking an average value of the historical first distances of the multiple reduced changes as the first distance.
18. The method of claim 14, wherein moving the movable edge of the second region a first distance toward the top of the human-machine interface to form the third region when the upper and lower edges of the second region do not overlap the edges of the human-machine interface comprises:
taking, of the upper edge and the lower edge, the edge that is farther from its corresponding boundary of the man-machine interaction interface as the movable edge;
The corresponding boundary of the upper edge is the upper boundary of the man-machine interaction interface, and the corresponding boundary of the lower edge is the lower boundary of the man-machine interaction interface;
and moving the movable edge of the second area to the top of the man-machine interaction interface by a first distance, and extending the other edges of the second area to a position overlapping with the boundary of the man-machine interaction interface to form the third area.
19. The method of claim 1, wherein the zooming the second region to form a third region when the human-machine interface is in a landscape display mode comprises:
when the sliding operation is left sliding operation, moving the movable edge of the second area to the left side of the man-machine interaction interface by a second distance to form the third area;
when the sliding operation is rightward sliding operation, moving the movable edge of the second area to the right side of the man-machine interaction interface by a second distance to form the third area;
wherein the second distance is positively correlated with a second parameter comprising at least one of: the operation duration of the sliding operation, the operation distance of the sliding operation, and the operation pressure of the sliding operation.
20. The method of claim 1, wherein displaying the first information of the item in the second area of the human-machine interface comprises:
at least one of the following processes is performed:
displaying first information of the article in a second area conforming to a set size;
displaying first information of the article in a second area which does not cause shielding of the first area;
and displaying the first information of the object in a second area conforming to the reading habit of the object, wherein the object is an object for operating the man-machine interaction interface.
21. A display processing apparatus, the apparatus comprising:
the display module is used for playing the video of the recommended article in the first area of the man-machine interaction interface and displaying the first information of the article in the second area of the man-machine interaction interface;
and the sliding module is used for responding to the sliding operation in the man-machine interaction interface, zooming the second area to form a third area, and displaying second information of the article in the third area to replace the first information.
22. An electronic device, the electronic device comprising:
A memory for storing computer executable instructions;
a processor for implementing the display processing method of any one of claims 1 to 20 when executing computer executable instructions stored in the memory.
23. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the display processing method of any one of claims 1 to 20.
24. A computer program product comprising a computer program or computer-executable instructions which, when executed by a processor, implement the display processing method of any one of claims 1 to 20.