WO2024051601A1 - Multimedia component triggering method, apparatus, electronic device and storage medium - Google Patents

Multimedia component triggering method, apparatus, electronic device and storage medium

Info

Publication number
WO2024051601A1
WO2024051601A1 (PCT/CN2023/116534)
Authority
WO
WIPO (PCT)
Prior art keywords
component
identification
gesture
video
container
Prior art date
Application number
PCT/CN2023/116534
Other languages
English (en)
French (fr)
Inventor
陈濛
赵冬
廖一伦
Original Assignee
抖音视界有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 抖音视界有限公司
Publication of WO2024051601A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk

Definitions

  • the embodiments of the present disclosure relate to the field of Internet technology, and in particular, to a multimedia component triggering method, device, electronic device, and storage medium.
  • Embodiments of the present disclosure provide a multimedia component triggering method, device, electronic device, and storage medium.
  • embodiments of the present disclosure provide a multimedia component triggering method, including:
  • Play the target video in the video playback interface; display the component identifier in response to a trigger gesture on the identifier trigger area in the video playback interface; move the component identifier in response to a drag gesture on the component identifier; and, after the component identifier is moved to the target position, display the hidden information in the video playback interface.
  • an embodiment of the present disclosure provides a multimedia component triggering device, including:
  • the playback module is used to play the target video in the video playback interface
  • a trigger module configured to display the component identifier in response to a trigger gesture targeting the identifier trigger area within the video playback interface
  • a display module configured to move the component identifier in response to a drag gesture on the component identifier, and display hidden information in the video playback interface after the component identifier moves to the target position.
  • an electronic device including:
  • a processor and a memory communicatively connected to the processor
  • the memory stores computer execution instructions
  • the processor executes computer execution instructions stored in the memory to implement the multimedia component triggering method described in the first aspect and various possible designs of the first aspect.
  • embodiments of the present disclosure provide a computer-readable storage medium.
  • Computer-executable instructions are stored in the computer-readable storage medium.
  • When the processor executes the computer-executable instructions, the multimedia component triggering method described in the first aspect and its various possible designs is implemented.
  • embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the multimedia component triggering method described in the first aspect and its various possible designs.
  • embodiments of the present disclosure provide a computer program that, when executed by a processor, implements the multimedia component triggering method described in the first aspect and various possible designs of the first aspect.
  • The multimedia component triggering method, apparatus, electronic device and storage medium provided by the embodiments of the present disclosure play the target video in the video playback interface; display the component identifier in response to a trigger gesture on the identifier trigger area in the video playback interface; move the component identifier in response to a drag gesture on the component identifier; and, after the component identifier moves to the target position, display hidden information in the video playback interface.
  • Figure 1 is an application scenario diagram of the multimedia component triggering method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart 1 of a multimedia component triggering method provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of component identification provided by an embodiment of the present disclosure.
  • Figure 5 is a schematic logical structure diagram of a video playback interface provided by an embodiment of the present disclosure.
  • FIG. 6 is a flow chart of the specific implementation of step S102 in the embodiment shown in Figure 2;
  • Figure 7 is a schematic diagram of displaying component identification through a component container according to an embodiment of the present disclosure.
  • Figure 8 is a schematic diagram of a process for displaying hidden information according to an embodiment of the present disclosure.
  • Figure 9 is a schematic flowchart 2 of a multimedia component triggering method provided by an embodiment of the present disclosure.
  • Figure 10 is a schematic diagram of a process for displaying component identification through a long press gesture according to an embodiment of the present disclosure
  • Figure 11 is a schematic diagram of a drag gesture trajectory provided by an embodiment of the present disclosure.
  • Figure 12 is a schematic diagram of changes in an identification attribute provided by an embodiment of the present disclosure.
  • Figure 13 is a structural block diagram of a multimedia component triggering device provided by an embodiment of the present disclosure.
  • Figure 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Figure 15 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • FIG 1 is an application scenario diagram of a multimedia component triggering method provided by an embodiment of the present disclosure.
  • The multimedia component triggering method provided by an embodiment of the present disclosure can be applied to short video playback scenarios.
  • The method provided by the embodiment of the present disclosure can be applied to terminal devices, such as smartphones and tablets. For example, a short video application runs in the terminal device; while the terminal device plays the target video (a short video) through the application's video playback interface, some hidden information, that is, an "Easter egg", is hidden in the video playback interface.
  • For terminal devices such as smartphones and tablets, the method of triggering hidden information is usually based on gesture operations such as click gestures and long press gestures.
  • When a gesture operation touches the identifier trigger area, for example by clicking or long-pressing it, the corresponding hidden information is triggered.
  • However, the trigger areas of different Easter eggs may overlap, resulting in conflicts in the Easter-egg trigger logic.
  • the triggering method of hidden information in the video is single and the number of hidden information settings is limited.
  • Embodiments of the present disclosure provide a multimedia component triggering method to solve the above problems.
  • FIG 2 is a schematic flowchart 1 of a multimedia component triggering method provided by an embodiment of the present disclosure.
  • the method of this embodiment can be applied in terminal devices.
  • the multimedia component triggering method includes:
  • Step S101 Play the target video in the video playback interface.
  • Step S102 In response to a trigger gesture directed at the identification trigger area in the video playback interface, display the component identification.
  • the execution subject of the method provided in this embodiment may be a terminal device, such as a smart phone, a tablet, etc.
  • the terminal device runs an application for playing videos, more specifically, for example, a short video application based on feed streaming technology, in which the application plays short videos through a video playback interface.
  • Figure 3 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure.
  • the video playback interface can be a functional module implemented in the target application based on the functional class for playing videos provided by the operating system.
  • While playing the target video, the video playback interface can perform corresponding application functions in response to the user's gesture operations, such as switching videos through sliding gestures and collecting videos through long press gestures.
  • An identifier trigger area for triggering component identifiers is preset in the video playback interface, where the component identifier, that is, the Easter egg identifier, can be an icon used to trigger hidden information (trigger the Easter egg).
  • the logo trigger area may be invisible.
  • On the one hand, if the first contact coordinate is located inside the identifier trigger area, the corresponding component identifier event is triggered and the component identifier is displayed at the position corresponding to the trigger area; on the other hand, if the first contact coordinate is located outside the identifier trigger area, the corresponding video control event is triggered through the application's video control to execute the corresponding video control function, that is, the application's native program functions such as switching or collecting videos.
  • the component identification may be a graphic, text, or symbol with a specific shape.
  • The component identifier guides the user's operation: in subsequent steps, dragging the component identifier triggers the corresponding hidden information (Easter egg).
  • Figure 4 is a schematic diagram of a component identification provided by an embodiment of the present disclosure.
  • In the initial state, the component identifier is not visible in the video playback interface that plays the target video.
  • the terminal device responds to the trigger gesture and displays the component logo at the corresponding location, such as the logo trigger area.
  • The trigger gesture can be an operation that touches the video playback interface, such as a click, double-click, or long press.
  • the video playback interface includes a video container and a component container arranged overlappingly, wherein the video container is used to display the target video, and the component container is used to display component identification and hidden information.
  • Figure 5 is a schematic logical structure diagram of a video playback interface provided by an embodiment of the present disclosure. As shown in Figure 5, the video playback interface includes overlapping video containers and component containers. The video container and component container each correspond to a display layer (Layer); therefore, they can also be called the video container layer and the component container layer. By default, the component container is set above the video container.
  • When the component container does not display a component identifier or hidden information, it is invisible; that is, the user sees the target video played in the video container layer through the component container layer. Further, the video container also responds to video control events through its video components, realizing the video control functions; the component container responds to component control events through its hidden components, realizing display control of the component identifier and hidden information.
  • Video containers and component containers are both software modules, implemented through corresponding classes. For example, the video container is implemented based on canvas; the component container is implemented based on lynx container.
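The layered structure described above can be sketched roughly as follows. The class and field names are illustrative assumptions for exposition, not the patent's implementation; in a real application the video layer would be a canvas surface and the component layer a Lynx view.

```typescript
// Two overlapping display layers: the component container sits above the
// video container and stays invisible until it shows an identifier.
interface DisplayLayer {
  zIndex: number;
  visible: boolean;
}

class VideoContainer implements DisplayLayer {
  zIndex = 0;
  visible = true;
  playing: string | null = null;
  play(videoId: string): void {
    this.playing = videoId; // render the target video in this layer
  }
}

class ComponentContainer implements DisplayLayer {
  zIndex = 1;       // stacked above the video container by default
  visible = false;  // invisible until it displays an identifier or Easter egg
  showIdentifier(): void {
    this.visible = true;
  }
}
```

Because the component layer starts invisible, the user sees the video "through" it until an identifier event makes it render something.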
  • The specific implementation steps of step S102 include:
  • Step S1011 Obtain the first contact coordinate corresponding to the trigger gesture, and detect the component container based on the first contact coordinate.
  • the first contact coordinate represents the coordinate of the contact point of the trigger gesture on the video playback interface.
  • Step S1012 If the first contact coordinate is located in the logo trigger area corresponding to the component container, trigger the component logo event corresponding to the trigger gesture, and respond to the component logo event through the component container to display the component logo corresponding to the logo trigger area.
  • Step S1013 If the first contact coordinate is outside the logo trigger area corresponding to the component container, trigger the video control event corresponding to the trigger gesture, and respond to the video control event through the video container to execute the corresponding video control function.
  • the component container corresponds to at least one identification trigger area.
  • the terminal device detects the trigger gesture
  • the corresponding first contact coordinates are obtained according to the contact point of the trigger gesture on the video display interface.
  • the identification trigger area corresponding to the component container can be information preset through the configuration file and can be obtained directly
  • the first contact coordinate is located in the logo triggering area corresponding to the component container.
  • the component identification event corresponding to the triggering gesture is triggered, wherein the triggering gesture may include one or more types.
  • each trigger gesture corresponds to a component identification event.
  • For example, when the trigger gesture is a click gesture, event A is triggered; the component container responds to event A by displaying component identifier #1, and a subsequent drag gesture on identifier #1 triggers hidden information a. When the trigger gesture is a long press gesture, event B is triggered; the component container responds to event B by displaying component identifier #2, and a drag gesture on identifier #2 triggers hidden information b.
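The hit-test dispatch of steps S1011 to S1013 might be sketched like this, with made-up area coordinates and event names; the component container is consulted first, and only touches outside every identifier trigger area fall through to the video container.

```typescript
// Dispatch a trigger gesture by testing its first contact coordinate
// against the component container's identifier trigger areas.
interface Rect { x: number; y: number; w: number; h: number }

const triggerAreas: Rect[] = [{ x: 100, y: 200, w: 80, h: 80 }]; // assumed layout

function inRect(px: number, py: number, r: Rect): boolean {
  return px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h;
}

type Dispatch =
  | { target: "component"; event: "showIdentifier"; area: Rect }
  | { target: "video"; event: "videoControl" };

function dispatchTriggerGesture(x: number, y: number): Dispatch {
  const area = triggerAreas.find((r) => inRect(x, y, r));
  if (area) {
    // first contact coordinate lies inside a trigger area:
    // raise the component-identifier event on the component container
    return { target: "component", event: "showIdentifier", area };
  }
  // otherwise fall through to the video container's control function
  return { target: "video", event: "videoControl" };
}
```

Checking the component container first is what prevents the trigger logic from conflicting with the native video controls.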
  • the component container includes the first hidden component.
  • the specific implementation method of responding to the component identification event through the component container to display the component identification corresponding to the identification triggering area includes:
  • Figure 7 is a schematic diagram of displaying component identification through a component container provided by an embodiment of the present disclosure.
  • While the video container plays the target video, the terminal device responds to the component identifier event through the first hidden component set in the component container and displays the component identifier in the component container.
  • Displaying the component identifier and the target video simultaneously in the video playback interface achieves the purpose of showing the identifier without affecting the display of the target video.
  • The video container includes a video control, which is used to respond to video control events and execute the corresponding video control functions. If the first contact coordinate is located outside the identifier trigger area corresponding to the component container, the video control event corresponding to the trigger gesture is triggered. As with component identifier events, the trigger gesture can include one or more types; when it includes multiple types, each trigger gesture corresponds to one video control event. For example, when the trigger gesture is a click gesture, the video container responds to the corresponding video control event by pausing playback of the target video; when the trigger gesture is a long press gesture, the video container responds by collecting the target video.
  • the component container is a lynx container
  • the video container is a canvas container.
  • In the target application running on the terminal device, the default behavior is that the upper-layer element consumes the event regardless of whether it has a bound click callback function.
  • Otherwise, the event target detection mechanism continues to execute until it finds the bottom video element (the video control) and executes it to implement the video control function.
  • In this way, the component container is checked first and is given the first chance to respond to the trigger event (as a component identifier event). Only when the component container cannot respond (the first contact coordinate is outside the identifier trigger area corresponding to the component container) does the video container respond to the trigger event (as a video control event). This avoids logical conflicts when the trigger gesture could be handled by both the video container and the component container, and realizes the operation logic when the component container and video container in the video playback interface overlap each other.
  • Step S103 In response to the drag gesture for the component identification, move the component identification, and after the component identification moves to the target position, display the hidden information in the video playback interface.
  • The terminal device synchronously updates the position of the displayed component identifier on the video playback interface according to the drag trajectory of the drag gesture, realizing the display effect of the identifier moving with the gesture. Then, after the identifier moves to the target position, the hidden information is triggered, that is, displayed in the video playback interface.
  • the hidden information is, for example, a target page displayed in the video playback interface, more specifically, such as a promotion page, product parameter page, etc.
  • the specific implementation content of the hidden information can be set according to specific needs, and will not be described again here.
  • The specific implementation of moving the component identifier includes: adding, above the full-screen view element (the video container), an area view element of a specified size (the component container, corresponding to the hidden-information trigger area) with block-native-event set to true. Gesture operations (such as click gestures) in this element area are captured and consumed by the component container and are not passed to the underlying client (video container) for processing.
  • When the user performs a gesture such as a click gesture, the coordinates of the click are recorded and the component identifier is drawn at those coordinates.
  • When the user moves the finger out of the specified area, a move event is triggered; the move event is intercepted and passed to the underlying client (video container), which uses it to calculate the next coordinate position of the component identifier, so that the identifier moves with the finger.
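The move-with-finger behavior above can be illustrated with a small sketch: each second contact coordinate produced by the drag gesture becomes the identifier's next rendered position, and the ordered contacts form the drag trajectory. The names here are illustrative, not the patent's code.

```typescript
// A component identifier that follows the drag gesture: every move event's
// contact coordinate becomes its next position, and the ordered contacts
// are kept as the drag trajectory.
interface Point { x: number; y: number }

class ComponentIdentifier {
  position: Point;
  readonly trail: Point[] = []; // ordered set of second contact coordinates
  constructor(start: Point) {
    this.position = start;
    this.trail.push(start);
  }
  // called for every second contact coordinate produced by the drag gesture
  onDragMove(contact: Point): void {
    this.position = contact; // render the identifier at the contact point
    this.trail.push(contact);
  }
}
```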
  • Figure 8 is a schematic diagram of a process for displaying hidden information provided by an embodiment of the present disclosure.
  • the component identifier in the video playback interface is located at position D1, and a drag gesture is used to drag the component identifier in the video playback interface.
  • When the component identifier is moved to position D2 in response to the drag gesture, the corresponding hidden information A is triggered.
  • a target page is displayed in the video playback interface to display the content of the relevant hidden information.
  • the initial position of the component logo corresponds to the logo trigger area
  • the target position (D2 position) corresponds to the hidden information trigger area.
  • the target position can be preset based on the configuration file.
  • the target position may include multiple target positions.
  • different hidden information is correspondingly triggered.
  • In addition to the D2 position, the target positions also include the D3 position and the D4 position.
  • When the component identifier moves from D1 to D3, the corresponding hidden information B is triggered; when it moves from D1 to D4, the corresponding hidden information C is triggered. Since the target position can be set flexibly, different target positions can trigger different hidden information, improving the flexibility and diversity of hidden-information triggering and increasing the number of hidden items that can be set in the target video.
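A minimal sketch of mapping multiple target positions to distinct hidden information (D2 to A, D3 to B, D4 to C in the example above); the coordinates and snap radius are assumed values, not from the patent.

```typescript
// Map preset target positions to hidden information; the identifier
// triggers whichever target it is released close enough to.
interface Point { x: number; y: number }

const targets: Array<{ pos: Point; hidden: string }> = [
  { pos: { x: 300, y: 100 }, hidden: "A" }, // D2
  { pos: { x: 300, y: 300 }, hidden: "B" }, // D3
  { pos: { x: 100, y: 300 }, hidden: "C" }, // D4
];

const SNAP_RADIUS = 20; // how close the identifier must come to a target

function hiddenInfoAt(p: Point): string | null {
  for (const t of targets) {
    const d = Math.hypot(p.x - t.pos.x, p.y - t.pos.y);
    if (d <= SNAP_RADIUS) return t.hidden;
  }
  return null; // no target reached: nothing is triggered
}
```

Adding a target position to the list adds a new Easter egg without touching the gesture handling, which is the flexibility the paragraph above describes.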
  • In summary, the target video is played in the video playback interface; the component identifier is displayed in response to a trigger gesture on the identifier trigger area; the identifier is moved in response to a drag gesture, and after it moves to the target position the hidden information is displayed. Because the identifier is displayed first and the hidden information is then triggered by dragging it to the target position, more diverse triggering methods can be achieved by varying the target position. With more operation dimensions, this solves the problem of a single triggering method for hidden information in the video, allows multiple hidden items to be set based on different triggering methods, and increases the number of hidden items that can be set.
  • FIG. 9 is a schematic flowchart 2 of a multimedia component triggering method provided by an embodiment of the present disclosure. This embodiment further refines step S102 based on the embodiment shown in Figure 2, where the video playback interface includes overlapping video containers and component containers.
  • the multimedia component triggering method provided by this embodiment includes:
  • Step S201 Play the target video in the video container of the video playback interface.
  • Step S202 In response to a long press gesture on the logo triggering area, display a guidance logo that changes over time.
  • Step S203 After detecting that the long press gesture continues for the first duration, display the component logo corresponding to the logo trigger area.
  • the user's gesture operation may cause an accidental touch.
  • If the duration of a "long press gesture" is too short, it may be confused with a "click gesture", resulting in an accidental touch and affecting the interactive experience.
  • Figure 10 is a schematic diagram of a process for displaying component identification through a long press gesture according to an embodiment of the present disclosure.
  • In response to the long press gesture, the terminal device displays a guidance identifier, which can be an icon that changes over time.
  • the guidance logo is a donut-shaped progress bar.
  • As the long press gesture continues, the donut-shaped progress bar advances.
  • After the progress bar reaches the end, the component identifier corresponding to the identifier trigger area is displayed.
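The long-press guard of steps S202 and S203 could be sketched as follows; the 500 ms first duration is an assumed value, and `progress` stands in for the fill fraction of the donut-shaped progress bar.

```typescript
// Guard against accidental touches: the identifier appears only after the
// press has been held for the first duration; releasing early resets it.
const FIRST_DURATION_MS = 500; // assumed "first duration"

class LongPressGuard {
  private heldMs = 0;
  identifierShown = false;

  // fraction of the donut-shaped progress bar that is filled, 0..1
  get progress(): number {
    return Math.min(this.heldMs / FIRST_DURATION_MS, 1);
  }

  tick(deltaMs: number): void {
    this.heldMs += deltaMs;
    if (this.heldMs >= FIRST_DURATION_MS) this.identifierShown = true;
  }

  release(): void {
    // releasing early counts as an accidental touch: reset the guard
    if (!this.identifierShown) this.heldMs = 0;
  }
}
```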
  • steps S202 to S203 are optional steps. If the component identification is displayed using, for example, the method provided by the embodiment shown in FIG. 2, it will not affect the normal execution of subsequent steps.
  • Step S204 Obtain the second contact coordinate corresponding to the drag gesture.
  • the second contact coordinate represents the coordinate of the real-time contact point of the drag gesture on the video playback interface.
  • Step S205 Update the display position of the component identifier in the component container according to the second contact coordinates.
  • the terminal device detects the drag gesture to obtain the corresponding second contact coordinates, that is, the coordinates of the real-time contact point of the drag gesture. Afterwards, based on the second contact coordinate, the component identifier is synchronously rendered to the position of the second contact coordinate, thereby realizing synchronous movement of the component identifier and the drag gesture.
  • Step S206 Obtain the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates.
  • Step S207 If the drag gesture trajectory is the same as the preset target trajectory, determine the second contact coordinate as the end point coordinate.
  • Specifically, the drag gesture trajectory can be obtained based on the current second contact coordinate and the second contact coordinates corresponding to the historical positions of the drag gesture.
  • Figure 11 is a schematic diagram of a drag gesture trajectory provided by an embodiment of the present disclosure. As shown in Figure 11, as the drag gesture moves, the terminal device detects its contact points in the video playback interface and obtains an ordered set of second contact coordinates; this set is the drag gesture trajectory. As the drag gesture continues to move, the corresponding trajectory changes accordingly. When the drag gesture trajectory is detected to coincide with the preset target trajectory, the trigger state is reached.
  • The last second contact coordinate constituting the drag gesture trajectory, that is, the most recently obtained second contact coordinate, is determined as the end-point coordinate.
  • hidden information triggering is performed, thereby realizing an Easter egg triggering method based on the dragging gesture trajectory.
  • the target trajectory may include one or more, and when there are multiple target trajectories, each target trajectory corresponds to a kind of hidden information.
  • This further enriches the ways in which hidden information can be triggered. Since the drag gesture trajectory consists of a series of ordered contact points (second contact coordinates), its information dimension is higher. By changing the target trajectory, and by triggering different hidden information with different target trajectories, the amount of hidden information that can be set in a video is greatly increased, achieving more flexible and diversified triggering of hidden information.
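The embodiment does not specify how the drag gesture trajectory is compared against a preset target trajectory. One common approach, sketched below under that assumption, is to resample both the ordered set of second contact coordinates and each candidate target trajectory to a fixed number of evenly spaced points and compare them point by point; the function names (`resample`, `match_trajectory`) and the distance threshold are illustrative, not part of the disclosure.

```python
import math

def resample(points, n=32):
    """Resample an ordered contact-point trajectory to n evenly spaced points."""
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    out = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            # Interpolate a new point exactly one step of arc length along the path.
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def match_trajectory(drag_points, targets, threshold=30.0):
    """Return the key of the first target trajectory whose mean point-to-point
    distance to the drag gesture trajectory is below threshold, else None.
    Each matched target key would correspond to one kind of hidden information."""
    sampled = resample(drag_points)
    for name, target in targets.items():
        ref = resample(target)
        mean = sum(math.dist(a, b) for a, b in zip(sampled, ref)) / len(sampled)
        if mean < threshold:
            return name
    return None
```

A roughly horizontal drag would match a horizontal target trajectory while a vertical drag would match none, so different gesture shapes can unlock different Easter eggs.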
  • After step S206, the method further includes:
  • Step S208: Based on the movement trajectory of the component identifier, update the identification attribute of the displayed component identifier, where the identification attribute is used to characterize the distance and/or direction between the current position of the component identifier and the target position.
  • the component identifier moves synchronously with the drag gesture, and the movement trajectory of the component identifier is consistent with the drag gesture trajectory corresponding to the drag gesture.
  • After obtaining the drag gesture trajectory corresponding to the drag gesture, the terminal device dynamically updates the identification attribute of the component identifier according to the change of the drag gesture trajectory, thereby indicating the distance and/or direction between the current position of the component identifier and the target position.
  • the identification attribute of the component identification refers to the visual attribute of the component identification, such as one or more of the color of the component identification, the size of the component identification, the shape of the component identification, and the transparency of the component identification.
  • Taking transparency as an example: as the drag gesture trajectory extends and the component identifier gets closer to the target position (the end point coordinate), the transparency of the component identifier decreases; the farther the component identifier is from the target position (the end point coordinate), the higher its transparency. The identification attribute of the component identifier can therefore be used to guide the user in adjusting the position and direction of the drag gesture, so as to trigger the component identifier by dragging.
  • In this way, the identification attribute of the component identifier can be used to guide the user to change the position and direction of the drag gesture, ultimately guiding the user to trigger the hidden information.
  • The identification attribute includes one of the following: the color of the component identifier, the size of the component identifier, the shape of the component identifier, and the transparency of the component identifier.
  • Figure 12 is a schematic diagram of changes in identification attributes provided by an embodiment of the present disclosure.
  • As shown in Figure 12, when the drag gesture trajectory is located at position A, the size of the component identifier is R1. When the trajectory extends from position A to position B, the size of the component identifier becomes R2, where R2 is smaller than R1, indicating that the component identifier has moved farther from the target position. When the drag gesture trajectory extends from position A to position C, the size of the component identifier becomes R3, where R3 is larger than R1, indicating that the component identifier has moved closer to the target position.
  • In this way, the user can adjust the drag gesture by observing the size changes of the component identifier in real time, thereby guiding the component identifier dragged by the drag gesture to the target position and triggering the hidden information.
  • In this embodiment, the identification attribute of the component identifier is updated in real time according to the movement trajectory of the component identifier, and the identification attribute is used to guide the user's drag gesture, thereby enabling rapid triggering of hidden information, reducing the difficulty for the user of triggering hidden information, and improving the interaction efficiency of hidden information.
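Step S208 can be sketched as a pure mapping from the distance between the component identifier's current position and the target position to identification attributes such as size and transparency (closer means larger and more opaque, consistent with the Figure 12 example). The parameter values, ranges, and function name below are illustrative assumptions, not values taken from the disclosure.

```python
import math

def update_identification_attributes(current, target, max_dist=300.0,
                                     base_size=40.0, min_size=10.0):
    """Map the distance between the component identifier's current position and
    the target position to a size and an alpha (opacity) value: closer to the
    target means larger and more opaque; farther means smaller and more
    transparent. Distances beyond max_dist are clamped."""
    d = min(math.dist(current, target), max_dist)
    closeness = 1.0 - d / max_dist          # 1.0 at the target, 0.0 at max_dist
    size = min_size + (base_size - min_size) * closeness
    alpha = 0.2 + 0.8 * closeness           # never fully invisible, so it stays visible as a guide
    return {"size": round(size, 1), "alpha": round(alpha, 2)}
```

Calling this on every drag-move event would make the identifier grow and solidify as the user's gesture approaches the target position, giving continuous distance feedback.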
  • Step S209: If the second contact coordinate is not the end point coordinate, return to step S203; if the second contact coordinate is the end point coordinate, display the hidden information in the component container of the video playback interface.
  • In this step, it is determined whether the trigger gesture has moved to the target position. If it is at the target position, the hidden information is triggered; otherwise, the method returns to the previous step and continues to detect the trigger gesture until the trigger gesture meets the preset requirement (for example, the trigger gesture trajectory is consistent with the target trajectory). Specifically, if the second contact coordinate of the current contact point is the end point coordinate, the trigger gesture is judged to have moved to the target position; the hidden information is then triggered and displayed in the component container of the video playback interface. If the second contact coordinate of the current contact point is not the end point coordinate, the method returns to step S203 to obtain a new second contact coordinate.
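The loop of steps S203 through S209 can be sketched as follows, with the gesture event source and the target-trajectory check abstracted into inputs; both stand-ins (`contact_stream`, `target_trajectory_check`) are hypothetical names used only for illustration.

```python
def run_drag_loop(contact_stream, target_trajectory_check):
    """Sketch of the S203-S209 loop: accumulate the ordered second contact
    coordinates into a drag gesture trajectory; when the trajectory satisfies
    the preset target-trajectory check, the latest coordinate becomes the end
    point coordinate and the hidden information is triggered."""
    trajectory = []
    for coord in contact_stream:                  # S203: obtain second contact coordinate
        trajectory.append(coord)                  # S204/S205: move identifier to coord
        if target_trajectory_check(trajectory):   # S206/S207: compare with target trajectory
            return {"triggered": True, "end_point": coord}   # S209: show hidden info
    return {"triggered": False, "end_point": None}           # gesture ended early
```

In a real implementation the stream would be fed by touch-move events and the check would be a trajectory matcher; here a trivial length-based check stands in for it.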
  • FIG. 13 is a structural block diagram of a multimedia component triggering device provided in an embodiment of the present disclosure. For convenience of explanation, only parts related to the embodiments of the present disclosure are shown.
  • the multimedia component triggering device 3 includes:
  • the playback module 31 is used to play the target video in the video playback interface
  • the triggering module 32 is configured to display the component identification in response to a triggering gesture for the identification triggering area in the video playback interface;
  • the display module 33 is configured to move the component identifier in response to a drag gesture on the component identifier, and display hidden information in the video playback interface after the component identifier moves to the target position.
  • The video playback interface includes a video container and a component container arranged in an overlapping manner, wherein the video container is used to display the target video, and the component container is used to display the component identification and the hidden information.
  • The component container corresponds to at least one identification trigger area. The trigger module 32 is specifically configured to: obtain the first contact coordinate corresponding to the trigger gesture, and detect the component container based on the first contact coordinate, where the first contact coordinate represents the coordinate of the contact point of the trigger gesture on the video playback interface; and, if the first contact coordinate is located in the identification trigger area corresponding to the component container, trigger the component identification event corresponding to the trigger gesture, and respond to the component identification event through the component container to display the component identification corresponding to the identification trigger area.
  • The component container includes a first hidden component. When responding to a component identification event through the component container to display the component identification corresponding to the identification trigger area, the trigger module 32 is specifically configured to: set the component container to a first display level and set the video container to a second display level, the first display level being higher than or equal to the second display level; and trigger the first hidden component in the component container to respond to the component identification event and display the component identification on an upper layer of, or the same layer as, the target video.
  • The trigger module 32 is also configured to: if the first contact coordinate is located outside the identification trigger area corresponding to the component container, trigger the video control event corresponding to the trigger gesture, respond to the video control event through the video container, and execute the corresponding video control function.
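The dispatch between the component container and the video container described above amounts to a hit test of the first contact coordinate against the identification trigger areas. Below is a minimal sketch assuming rectangular trigger areas; the disclosure does not restrict the shape of the trigger areas, and the dictionary-based area registry is an illustrative assumption.

```python
def dispatch_trigger_gesture(contact, trigger_areas):
    """Hit-test the first contact coordinate against the component container's
    identification trigger areas (axis-aligned rectangles as (x, y, w, h)).
    Inside an area, a component identification event is raised for the component
    container; outside all areas, the gesture falls through to the video
    container as a video control event (e.g. play/pause)."""
    x, y = contact
    for area_id, (ax, ay, w, h) in trigger_areas.items():
        if ax <= x <= ax + w and ay <= y <= ay + h:
            return {"event": "component_identification", "area": area_id}
    return {"event": "video_control", "area": None}
```

Because the component container sits at a display level at or above the video container, performing this test in the container layered on top and forwarding misses downward matches the layering described in the embodiment.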
  • The display module 33 is specifically configured to perform the following steps in a loop until a termination condition is reached: obtain the second contact coordinate corresponding to the drag gesture, where the second contact coordinate represents the coordinate of the real-time contact point of the drag gesture on the video playback interface; and update the display position of the component identifier in the component container according to the second contact coordinate. The termination condition is that the second contact coordinate is the end point coordinate corresponding to the target position.
  • The display module 33 is further configured to: obtain the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinate; and, when the drag gesture trajectory is the same as the preset target trajectory, determine the second contact coordinate as the end point coordinate corresponding to the target position.
  • Before the component identifier moves to the target position, the display module 33 is also configured to: update the identification attribute of the displayed component identifier based on the movement trajectory of the component identifier, where the identification attribute is used to characterize the distance and/or direction between the current position of the component identifier and the target position.
  • The identification attribute includes one of the following: the color of the component identification, the size of the component identification, the shape of the component identification, and the transparency of the component identification.
  • the trigger gesture is a long press gesture.
  • The display module 33 is specifically configured to: in response to the long press gesture for the identification trigger area, display a guide identification that changes with time; and, after detecting that the long press gesture has continued for a first duration, display the component identification corresponding to the identification trigger area.
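The long-press flow can be sketched as a function of the elapsed press time: while the press is shorter than the first duration, a time-varying guide identification is shown (represented here as a progress fraction); once the first duration elapses, the component identification is shown instead. The progress representation and the one-second default are illustrative assumptions.

```python
def long_press_progress(press_elapsed, first_duration=1.0):
    """Decide what the identification trigger area should display for a long
    press that has been held for press_elapsed seconds: a guide identification
    whose appearance changes with time (progress in [0, 1)) until the first
    duration elapses, then the component identification itself."""
    if press_elapsed >= first_duration:
        return {"show": "component_identification", "progress": 1.0}
    return {"show": "guide_identification",
            "progress": round(press_elapsed / first_duration, 2)}
```

A UI layer would call this on each animation frame while the finger stays down, and reset when the press is released early.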
  • the multimedia component triggering device 3 provided in this embodiment can execute the technical solution of the above method embodiment. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
  • FIG 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in Figure 14, the electronic device 4 includes:
  • a processor 41, and a memory 42 communicatively connected to the processor 41;
  • the memory 42 stores computer-executable instructions;
  • the processor 41 executes the computer-executable instructions stored in the memory 42 to implement the multimedia component triggering method in the embodiments shown in FIGS. 2 to 12.
  • The processor 41 and the memory 42 are connected through a bus 43.
  • the electronic device 900 may be a terminal device or a server.
  • Terminal devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
  • the electronic device shown in FIG. 15 is only an example, and should not bring any limitations to the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device 900 may include a processing device (such as a central processing unit, a graphics processor, etc.) 901, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903.
  • The RAM 903 also stores various programs and data required for the operation of the electronic device 900.
  • the processing device 901, ROM 902 and RAM 903 are connected to each other via a bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904.
  • The following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 908 including, for example, a magnetic tape and a hard disk; and a communication device 909.
  • the communication device 909 may allow the electronic device 900 to communicate wirelessly or wiredly with other devices to exchange data.
  • Although FIG. 15 illustrates the electronic device 900 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 909, or from storage device 908, or from ROM 902.
  • When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
  • Computer-readable storage media may include, but are not limited to: an electrical connection having one or more conductors, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • The computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device performs the method shown in the above embodiments.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as C or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
  • In some circumstances, the name of a unit does not constitute a limitation on the unit itself; for example, the first acquisition unit can also be described as "a unit that acquires at least two Internet Protocol addresses."
  • Exemplary types of hardware logic components include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a multimedia component triggering method including:
  • Play the target video in the video playback interface; display the component identification in response to the trigger gesture for the identification trigger area in the video playback interface; move the component identification in response to the drag gesture on the component identification; and, after the component identification moves to the target position, display the hidden information in the video playback interface.
  • The video playback interface includes a video container and a component container arranged in an overlapping manner, wherein the video container is used to display the target video, and the component container is used to display the component identification and the hidden information.
  • The component container corresponds to at least one of the identification trigger areas; and displaying the component identification in response to a trigger gesture for the identification trigger area in the video playback interface includes: obtaining the first contact coordinate corresponding to the trigger gesture, and detecting the component container based on the first contact coordinate, where the first contact coordinate represents the coordinate of the contact point of the trigger gesture on the video playback interface; and, if the first contact coordinate is located in the identification trigger area corresponding to the component container, triggering the component identification event corresponding to the trigger gesture, and responding to the component identification event through the component container to display the component identification corresponding to the identification trigger area.
  • The component container includes a first hidden component; and responding to the component identification event through the component container to display the component identification corresponding to the identification trigger area includes: setting the component container to a first display level and setting the video container to a second display level, the first display level being higher than or equal to the second display level; and triggering the first hidden component in the component container to respond to the component identification event and display the component identification on an upper layer of, or the same layer as, the target video.
  • The method further includes: if the first contact coordinate is located outside the identification trigger area corresponding to the component container, triggering the video control event corresponding to the trigger gesture, responding to the video control event through the video container, and executing the corresponding video control function.
  • Moving the component identification in response to the drag gesture and displaying the hidden information includes: performing the following steps in a loop until a termination condition is reached: obtaining a second contact coordinate corresponding to the drag gesture, the second contact coordinate representing the coordinate of the real-time contact point of the drag gesture on the video playback interface; and updating the display position of the component identification in the component container according to the second contact coordinate; wherein the termination condition is that the second contact coordinate is the end point coordinate corresponding to the target position.
  • The method further includes: obtaining the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates; and, when the drag gesture trajectory is the same as the preset target trajectory, determining the second contact coordinate as the end point coordinate corresponding to the target position.
  • Before the component identification moves to the target position, the method further includes: updating the identification attribute of the displayed component identification based on the movement trajectory of the component identification, wherein the identification attribute is used to characterize the distance and/or direction between the current position of the component identification and the target position.
  • The identification attribute includes one of the following: the color of the component identification, the size of the component identification, the shape of the component identification, and the transparency of the component identification.
  • The trigger gesture is a long press gesture, and displaying the component identification in response to the trigger gesture for the identification trigger area in the video playback interface includes: in response to the long press gesture for the identification trigger area, displaying a guide identification that changes with time; and, after detecting that the long press gesture has continued for a first duration, displaying the component identification corresponding to the identification trigger area.
  • a multimedia component triggering device including:
  • the playback module is used to play the target video in the video playback interface
  • a trigger module, configured to display the component identification in response to a trigger gesture for the identification trigger area in the video playback interface;
  • a display module configured to move the component identifier in response to a drag gesture on the component identifier, and display hidden information in the video playback interface after the component identifier moves to the target position.
  • The video playback interface includes a video container and a component container arranged in an overlapping manner, wherein the video container is used to display the target video, and the component container is used to display the component identification and the hidden information.
  • The component container corresponds to at least one of the identification trigger areas; the trigger module is specifically configured to: obtain the first contact coordinate corresponding to the trigger gesture, and detect the component container based on the first contact coordinate, where the first contact coordinate represents the coordinate of the contact point of the trigger gesture on the video playback interface; and, if the first contact coordinate is located in the identification trigger area corresponding to the component container, trigger the component identification event corresponding to the trigger gesture, and respond to the component identification event through the component container to display the component identification corresponding to the identification trigger area.
  • The component container includes a first hidden component; when responding to the component identification event through the component container to display the component identification corresponding to the identification trigger area, the trigger module is specifically configured to: set the component container to a first display level and set the video container to a second display level, the first display level being higher than or equal to the second display level; and trigger the first hidden component in the component container to respond to the component identification event and display the component identification on an upper layer of, or the same layer as, the target video.
  • The trigger module is further configured to: if the first contact coordinate is located outside the identification trigger area corresponding to the component container, trigger the video control event corresponding to the trigger gesture, respond to the video control event through the video container, and execute the corresponding video control function.
  • The display module is specifically configured to perform the following steps in a loop until a termination condition is reached: obtain the second contact coordinate corresponding to the drag gesture, where the second contact coordinate represents the coordinate of the real-time contact point of the drag gesture on the video playback interface; and update the display position of the component identification in the component container according to the second contact coordinate. The termination condition is that the second contact coordinate is the end point coordinate corresponding to the target position.
  • The display module is further configured to: obtain the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinate; and, when the drag gesture trajectory is the same as the preset target trajectory, determine the second contact coordinate as the end point coordinate corresponding to the target position.
  • The display module is further configured to: update the identification attribute of the displayed component identification based on the movement trajectory of the component identification, wherein the identification attribute is used to characterize the distance and/or direction between the current position of the component identification and the target position.
  • The identification attribute includes one of the following: the color of the component identification, the size of the component identification, the shape of the component identification, and the transparency of the component identification.
  • The trigger gesture is a long press gesture; when displaying the component identification in response to the trigger gesture for the identification trigger area in the video playback interface, the display module is specifically configured to: in response to the long press gesture for the identification trigger area, display a guide identification that changes with time; and, after detecting that the long press gesture has continued for a first duration, display the component identification corresponding to the identification trigger area.
  • an electronic device including: a processor, and a memory communicatively connected to the processor;
  • the memory stores computer-executable instructions;
  • the processor executes the computer-executable instructions stored in the memory to implement the multimedia component triggering method described in the first aspect and various possible designs of the first aspect.
  • A computer-readable storage medium is provided, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the multimedia component triggering method described in the first aspect and various possible designs of the first aspect is implemented.
  • embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the multimedia component triggering method described in the first aspect and various possible designs of the first aspect.
  • embodiments of the present disclosure provide a computer program that, when executed by a processor, implements the multimedia component triggering method described in the first aspect and various possible designs of the first aspect.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure provide a multimedia component triggering method, apparatus, electronic device, and storage medium. A target video is played in a video playback interface; in response to a trigger gesture for an identification trigger area in the video playback interface, a component identification is displayed; in response to a drag gesture on the component identification, the component identification is moved, and after the component identification moves to a target position, hidden information is displayed in the video playback interface.

Description

Multimedia Component Triggering Method and Apparatus, Electronic Device, and Storage Medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202211098313.6, filed with the China National Intellectual Property Administration on September 8, 2022 and entitled "Multimedia Component Triggering Method and Apparatus, Electronic Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of Internet technology, and in particular to a multimedia component triggering method and apparatus, an electronic device, and a storage medium.
Background
At present, Internet-based short-video applications are recognized by more and more users thanks to advantages such as free publishing and rich content. Meanwhile, hidden information based on multimedia components is set in short videos to further improve the interactivity between users and the platform.
Summary
Embodiments of the present disclosure provide a multimedia component triggering method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a multimedia component triggering method, including:
playing a target video in a video playback interface; in response to a trigger gesture on an identifier trigger area in the video playback interface, displaying a component identifier; and in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface.
In a second aspect, an embodiment of the present disclosure provides a multimedia component triggering apparatus, including:
a playback module configured to play a target video in a video playback interface;
a trigger module configured to display a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface; and
a display module configured to, in response to a drag gesture on the component identifier, move the component identifier, and after the component identifier moves to a target position, display hidden information in the video playback interface.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a processor, and a memory communicatively connected to the processor;
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory to implement the multimedia component triggering method described in the first aspect and its various possible designs.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the multimedia component triggering method described in the first aspect and its various possible designs.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the multimedia component triggering method described in the first aspect and its various possible designs.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the multimedia component triggering method described in the first aspect and its various possible designs.
According to the multimedia component triggering method and apparatus, electronic device, and storage medium provided by the embodiments of the present disclosure, a target video is played in a video playback interface; in response to a trigger gesture on an identifier trigger area in the video playback interface, a component identifier is displayed; and in response to a drag gesture on the component identifier, the component identifier is moved, and after the component identifier moves to a target position, hidden information is displayed in the video playback interface.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an application scenario diagram of the multimedia component triggering method provided by an embodiment of the present disclosure;
Fig. 2 is a first flowchart of the multimedia component triggering method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a component identifier provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the logical structure of a video playback interface provided by an embodiment of the present disclosure;
Fig. 6 is a flowchart of a specific implementation of step S102 in the embodiment shown in Fig. 2;
Fig. 7 is a schematic diagram of displaying a component identifier through a component container provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a process of displaying hidden information provided by an embodiment of the present disclosure;
Fig. 9 is a second flowchart of the multimedia component triggering method provided by an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of a process of displaying a component identifier through a long-press gesture provided by an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of a drag gesture trajectory provided by an embodiment of the present disclosure;
Fig. 12 is a schematic diagram of changes in an identifier attribute provided by an embodiment of the present disclosure;
Fig. 13 is a structural block diagram of the multimedia component triggering apparatus provided by an embodiment of the present disclosure;
Fig. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;
Fig. 15 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The application scenario of the embodiments of the present disclosure is explained below.
Fig. 1 is an application scenario diagram of the multimedia component triggering method provided by an embodiment of the present disclosure. The method provided by the embodiments of the present disclosure can be applied to the scenario of short-video playback, and more specifically, to the scenario of triggering hidden information in a short video. As shown in Fig. 1, the method can be applied to a terminal device, such as a smartphone or a tablet computer. Illustratively, a short-video application runs on the terminal device. While the terminal device plays a target video (a short video) through the video playback interface of the application, certain hidden information, i.e., an "Easter egg", is hidden in the video playback interface. When the user performs a gesture operation on the video playback interface and thereby triggers the multimedia component corresponding to the Easter egg (the figure shows the Easter egg being triggered by a long-press gesture), a hidden page corresponding to the hidden information, i.e., an "Easter egg page", is displayed on top of the video playback interface — more specifically, for example, a promotional video or a sales-campaign page — thus achieving the purpose of triggering hidden information in a short video.
In the related art, in the scenario of triggering hidden information (Easter eggs) in a video, on terminal devices such as smartphones and tablet computers, hidden information is usually triggered by gesture operations such as a tap gesture or a long-press gesture. To implement such a video triggering scheme, fixed identifier trigger areas must be set in the application's video playback interface; when a gesture operation touches such an identifier trigger area — for example, tapping or long-pressing it — the corresponding hidden information (Easter egg) is triggered. However, limited by the display and touch area of the video playback interface, only a small number of identifier trigger areas can be preset in it; otherwise the trigger areas may overlap, causing conflicts in the Easter-egg trigger logic. This leads to the problems that the triggering manner of hidden information in a video is monotonous and the number of pieces of hidden information that can be set is limited. The embodiments of the present disclosure provide a multimedia component triggering method to solve the above problems.
Referring to Fig. 2, Fig. 2 is a first flowchart of the multimedia component triggering method provided by an embodiment of the present disclosure. The method of this embodiment can be applied in a terminal device, and includes:
Step S101: playing a target video in a video playback interface.
Step S102: in response to a trigger gesture on an identifier trigger area in the video playback interface, displaying a component identifier.
Illustratively, the execution body of the method provided in this embodiment may be a terminal device, such as a smartphone or a tablet computer. An application for playing videos runs on the terminal device — more specifically, for example, a short-video application based on feed-stream technology — and the application plays short videos through a video playback interface. Fig. 3 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure. As shown in Fig. 3, the video playback interface may be a functional module implemented in the target application based on the video-playback functionality classes provided by the operating system. While playing the target video, the video playback interface can respond to the user's gesture operations on it and execute corresponding application functions, for example switching videos through a swipe gesture or adding a video to favorites through a long-press gesture.
Further, an identifier trigger area for triggering a component identifier is preset in the video playback interface. The component identifier, i.e., the Easter-egg identifier, may be an icon used to trigger the hidden information (to trigger the Easter egg). The identifier trigger area may be invisible. After the user applies a trigger gesture to the identifier trigger area, the terminal device detects the first contact coordinate corresponding to the trigger gesture and determines whether the first contact coordinate lies within the preset identifier trigger area. On the one hand, if the first contact coordinate lies within the identifier trigger area, the corresponding component identifier event is triggered, and the component identifier is displayed at the position corresponding to the identifier trigger area; on the other hand, if the first contact coordinate lies outside the identifier trigger area, the corresponding video control event is triggered through the application's video control widget to execute the corresponding video control function, i.e., the application's native program function, such as switching videos or adding videos to favorites. The component identifier may be a graphic, text, or symbol with a particular shape. The component identifier serves to guide the user's operation: in subsequent steps, the corresponding hidden information (Easter egg) is triggered by dragging the component identifier. Fig. 4 is a schematic diagram of a component identifier provided by an embodiment of the present disclosure. As shown in Fig. 4, in the initial state, the component identifier is invisible in the video playback interface used to play the target video. After the user applies a trigger gesture to the identifier trigger area of the video playback interface, the terminal device responds to the trigger gesture and displays the component identifier at the corresponding position, for example the identifier trigger area. The trigger gesture may be an operation corresponding to a fixed contact position on the video playback interface, such as a tap, a double tap, or a long press.
In a possible implementation, the video playback interface includes a video container and a component container arranged in an overlapping manner, where the video container is used to display the target video and the component container is used to display the component identifier and the hidden information. Fig. 5 is a schematic diagram of the logical structure of a video playback interface provided by an embodiment of the present disclosure. As shown in Fig. 5, the video playback interface includes an overlapping video container and component container, each corresponding to one display layer; the video container and the component container may therefore also be called the video container layer and the component container layer. By default, the component container is placed above the video container; when the component container displays neither the component identifier nor the hidden information, it is invisible, i.e., the user can see the target video played in the video container layer through the component container layer. Further, the video container is also used to respond to video control events by means of video components so as to implement video control functions, while the component container responds to component control events by means of hidden components so as to control the display of the component identifier and the hidden information. Both the video container and the component container are software modules implemented by corresponding classes; illustratively, the video container is implemented based on canvas, and the component container is implemented based on a lynx container.
Further, in order to display hidden information full-screen, the component container and the video container in the video playback interface may overlap each other. Therefore, after the trigger event corresponding to the trigger gesture is generated, if both the video container and the component container responded to the trigger event, a conflict in processing logic would result. To solve this problem, the logic for responding to the trigger gesture needs to be refined further. As shown in Fig. 6, the specific implementation of step S102 includes:
Step S1011: obtaining the first contact coordinate corresponding to the trigger gesture and performing detection on the component container based on the first contact coordinate, where the first contact coordinate represents the coordinate of the contact point of the trigger gesture on the video playback interface.
Step S1012: if the first contact coordinate lies within the identifier trigger area corresponding to the component container, triggering the component identifier event corresponding to the trigger gesture, and responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area.
Step S1013: if the first contact coordinate lies outside the identifier trigger area corresponding to the component container, triggering the video control event corresponding to the trigger gesture, and responding to the video control event through the video container to execute the corresponding video control function.
Illustratively, the component container corresponds to at least one identifier trigger area. After the terminal device detects the trigger gesture, it obtains the corresponding first contact coordinate from the contact point of the trigger gesture on the video display interface. Then, based on this first contact coordinate, it determines whether the coordinate falls within an identifier trigger area corresponding to the component container. The identifier trigger areas corresponding to the component container may be information preset through a configuration file and can be obtained directly; comparing the coordinate relationship between the identifier trigger areas and the first contact coordinate then determines whether the first contact coordinate lies within an identifier trigger area corresponding to the component container.
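The coordinate comparison described above can be sketched as follows. This is a minimal illustration only, not the disclosed implementation: the `TriggerArea` shape, the rectangular-area assumption, and all names are assumptions of this sketch, standing in for whatever the configuration file actually encodes.

```typescript
// Axis-aligned trigger area as it might be read from a configuration file.
interface TriggerArea {
  id: string;     // which component identifier this area reveals
  x: number;      // top-left corner, in interface coordinates
  y: number;
  width: number;
  height: number;
}

// Return the trigger area containing the first contact coordinate, or null
// if the point falls outside every area (in which case the video container
// handles the event as an ordinary video-control gesture).
function hitTest(
  areas: TriggerArea[],
  contactX: number,
  contactY: number,
): TriggerArea | null {
  for (const area of areas) {
    const inside =
      contactX >= area.x &&
      contactX <= area.x + area.width &&
      contactY >= area.y &&
      contactY <= area.y + area.height;
    if (inside) return area;
  }
  return null;
}
```

With one configured area, a tap inside it resolves to that area's identifier, while a tap elsewhere resolves to `null` and falls through to the video container.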
On the one hand, if the first contact coordinate lies within an identifier trigger area corresponding to the component container, the component identifier event corresponding to the trigger gesture is triggered. There may be one or more kinds of trigger gesture; when there are several kinds, each kind corresponds to one component identifier event. For example, when the trigger gesture is a tap gesture, event A is triggered; after the component container responds to event A, component identifier #1 is displayed, and hidden information a is then triggered based on a drag gesture on component identifier #1. When the trigger gesture is a long-press gesture, event B is triggered; after the component container responds to event B, component identifier #2 is displayed, and hidden information b is then triggered based on a drag gesture on component identifier #2.
Further, illustratively, the component container includes a first hidden component, and the specific implementation of responding to the component identifier event through the component container in step S1012 so as to display the component identifier corresponding to the identifier trigger area includes:
setting the component container to a first display layer and the video container to a second display layer, the first display layer being higher than or equal to the second display layer; and triggering the first hidden component in the component container to respond to the component identifier event, displaying the component identifier on a layer above or on the same layer as the target video.
Fig. 7 is a schematic diagram of displaying a component identifier through a component container provided by an embodiment of the present disclosure. As shown in Fig. 7, taking the case where the component container and the video container are on the same display layer (the first display layer equals the second display layer) as an example, when detection on the component container based on the first contact coordinate determines that the first contact coordinate lies within the identifier trigger area corresponding to the component container, the video container and the component container are set to the same display layer. At this time, the terminal device can, while playing the target video through the video container, respond to the component identifier event through the first hidden component set in the component container and display the component identifier in the component container. Visually, the component identifier and the target video are displayed in the video playback interface at the same time, which achieves the purpose of displaying the component identifier without affecting the display of the target video and avoids impairing the target video's display effect.
On the other hand, the video container includes a video control widget that responds to video control events and executes corresponding video control functions. If the first contact coordinate lies outside the identifier trigger area corresponding to the component container, the video control event corresponding to the trigger gesture is triggered. Similarly to component identifier events, there may be one or more kinds of trigger gesture; when there are several kinds, each kind corresponds to one video control event. For example, when the trigger gesture is a tap gesture, playback of the target video is paused after the video container responds to the corresponding video control event; when the trigger gesture is a long-press gesture, the target video is added to favorites after the video container responds to the corresponding video control event.
Further and more specifically, for example, the component container is a lynx container and the video container is a canvas container. Event response through the video container is implemented as follows: user-interaction-enabled=false is added to the bottom full-screen canvas to indicate that it does not need to respond to events, while the other elements above the canvas do not add user-interaction-enabled=false and respond to events normally. When an operation (for example a tap) is detected on the screen, the terminal device (the target application running on it) detects, based on the tap coordinate, the element that should respond to the event. For overlapping elements (here the bottom video and the upper full-screen canvas), the default behavior is that the upper element consumes the event regardless of whether a tap callback function is bound to it; but because the canvas is set to user-interaction-enabled=false, it can never serve as the event target during the detection phase, so the event-target detection mechanism continues until it finds the bottom video element (the video control widget) and executes that video element, implementing the video control function.
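The event-target lookup just described — an element flagged as non-interactive is skipped so the search falls through to the video element underneath — can be sketched roughly as follows. The element model and flag mirror the `user-interaction-enabled` behavior described above, but this is an illustrative re-implementation under assumed names, not the lynx source.

```typescript
interface UIElement {
  name: string;
  interactionEnabled: boolean;               // user-interaction-enabled
  contains(x: number, y: number): boolean;   // does the element cover the point?
}

// Elements are ordered top-most first. The first element that both contains
// the contact point and has interaction enabled becomes the event target; a
// full-screen canvas with interaction disabled is passed over, so the lookup
// continues down to the video element beneath it.
function findEventTarget(
  stack: UIElement[],
  x: number,
  y: number,
): UIElement | null {
  for (const el of stack) {
    if (el.contains(x, y) && el.interactionEnabled) {
      return el;
    }
  }
  return null;
}
```

A tap anywhere on a stack of [full-screen canvas (disabled), video (enabled)] therefore resolves to the video element, which executes the video control function.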
In this embodiment, after the trigger gesture is detected, the component container is checked first and is given priority in responding to the corresponding trigger event (responding to it as a component identifier event); only after the component container cannot respond to the trigger event (the first contact coordinate lies outside the identifier trigger area corresponding to the component container) does the video container respond to it (as a video control event). This avoids logic conflicts when the trigger gesture is responded to on both the video container and the component container, and implements operation-logic control in the case where the component container and the video container in the video playback interface overlap each other.
Step S103: in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface.
Illustratively, after the component identifier is displayed, dragging the component identifier is used to further trigger the hidden information. Specifically, for example, after the user applies a drag gesture to the component identifier displayed in the video playback interface, the terminal device synchronously updates the displayed position of the component identifier on the video playback interface according to the drag trajectory of the gesture, achieving the display effect of moving the component identifier; then, after the component identifier moves to the target position, the hidden information is triggered, i.e., displayed in the video playback interface. The hidden information is, for example, a target page displayed in the video playback interface — more specifically, for example, a promotion page or a product-specification page. The specific content of the hidden information can be set according to specific needs and is not repeated here.
Illustratively, the specific implementation of moving the component identifier in response to a drag gesture on it includes: adding an area view element of a specified size (corresponding to the hidden-information trigger area) on the layer (the component container) above the full-screen view element (the video container), with block-native-event set to true. A gesture operation (for example a tap gesture) on this element area is captured and consumed by the component container and is not passed through to the underlying client (the video container). When the user taps inside the hidden-information trigger area, the tap coordinate is recorded and the component identifier is drawn at that coordinate; when the user moves a finger out of the specified area, a move event is triggered, and passing the move event through to the underlying client (the video container) can be intercepted, so the next coordinate position of the component identifier is computed from it and the component identifier moves with the finger.
Fig. 8 is a schematic diagram of a process of displaying hidden information provided by an embodiment of the present disclosure. As shown in Fig. 8, the component identifier in the video playback interface is at position D1. A drag gesture is used to drag the component identifier; when the component identifier is moved to position D2 in response to the drag gesture, the corresponding hidden information A is triggered, and after hidden information A is triggered, a target page is displayed in the video playback interface to present the content of the related hidden information. The initial position of the component identifier (position D1) corresponds to the identifier trigger area, and the target position (position D2) corresponds to the trigger area of the hidden information; the target position can be preset through a configuration file. Illustratively, there may be multiple target positions; when the component identifier moves from the initial position to different target positions, different hidden information is correspondingly triggered. Referring to Fig. 8, besides position D2, positions D3 and D4 are also included: when the component identifier moves from D1 to D3, the corresponding hidden information B is triggered; when it moves from D1 to D4, the corresponding hidden information C is triggered. Since the target positions can be set flexibly, different target positions can trigger different hidden information, which improves the flexibility and diversity of hidden-information triggering and increases the number of pieces of hidden information that can be set in the target video.
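The mapping from target positions to hidden information can be sketched as below. The circular tolerance, the `radius` field, and the `"A"/"B"/"C"` identifiers (echoing Fig. 8) are assumptions of this sketch; the disclosure only says the target positions are preset through a configuration file.

```typescript
interface TargetPosition {
  x: number;
  y: number;
  radius: number;        // tolerance around the configured point
  hiddenInfoId: string;  // e.g. hidden information "A", "B", "C" as in Fig. 8
}

// After each drag update, check whether the identifier's current position has
// entered any configured target position; if so, return the identifier of the
// hidden information to reveal, otherwise null (keep dragging).
function resolveHiddenInfo(
  targets: TargetPosition[],
  x: number,
  y: number,
): string | null {
  for (const t of targets) {
    const dx = x - t.x;
    const dy = y - t.y;
    if (dx * dx + dy * dy <= t.radius * t.radius) {
      return t.hiddenInfoId;
    }
  }
  return null;
}
```

Because each entry pairs a position with its own hidden information, adding a new Easter egg is just adding a row to the configuration, which is what gives the scheme its flexibility.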
In this embodiment, a target video is played in a video playback interface; a component identifier is displayed in response to a trigger gesture on an identifier trigger area in the video playback interface; and in response to a drag gesture on the component identifier, the component identifier is moved, and after it moves to a target position, hidden information is displayed in the video playback interface. Because the component identifier is displayed first and then moved to the target position based on the drag gesture on it, more diverse triggering manners can be achieved by changing the target position. Compared with the traditional manner of triggering hidden information by tapping, there are more operation dimensions, which solves the problem that the triggering manner of hidden information in videos is monotonous; in turn, multiple pieces of hidden information based on different triggering manners can be set in a video, increasing the number of pieces of hidden information that can be set.
Referring to Fig. 9, Fig. 9 is a second flowchart of the multimedia component triggering method provided by an embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 2, this embodiment further refines step S102, where the video playback interface includes a video container and a component container arranged in an overlapping manner. The multimedia component triggering method provided by this embodiment includes:
Step S201: playing the target video in the video container of the video playback interface.
Step S202: in response to a long-press gesture on the identifier trigger area, displaying a guide identifier that changes over time.
Step S203: after detecting that the long-press gesture has lasted a first duration, displaying the component identifier corresponding to the identifier trigger area.
Illustratively, in the scenario of triggering hidden information, considering that the component identifier is invisible before being triggered, the user's gesture operations may suffer from mistaken touches. For example, when a "long-press gesture" lasts too short a time, it may be confused with a "tap gesture", causing a mistaken touch and harming the interaction experience.
Fig. 10 is a schematic diagram of a process of displaying a component identifier through a long-press gesture provided by an embodiment of the present disclosure. As shown in Fig. 10, after the user applies a long-press gesture to the video playback interface, the terminal device displays a guide identifier in the video playback interface. The guide identifier may be an icon that changes over time; referring to Fig. 10, illustratively, the guide identifier is a ring-shaped progress bar that keeps advancing as the long press continues. When the long press has lasted the first duration, the progress bar reaches the end, after which the component identifier corresponding to the identifier trigger area is displayed.
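The timing behind the ring-shaped guide can be sketched with two small helpers — a minimal sketch under the assumption that the ring's fill is simply the held time divided by the configured first duration; function names and the millisecond units are this sketch's own.

```typescript
// Progress of the ring-shaped guide identifier as a fraction in [0, 1],
// given how long the long press has been held and the configured first
// duration after which the component identifier is shown.
function guideProgress(heldMs: number, firstDurationMs: number): number {
  if (firstDurationMs <= 0) return 1;
  return Math.min(1, Math.max(0, heldMs / firstDurationMs));
}

// The component identifier is revealed only once the press has lasted the
// full first duration; shorter presses never reach full progress, so a quick
// tap cannot trigger the identifier by mistake.
function shouldRevealIdentifier(heldMs: number, firstDurationMs: number): boolean {
  return guideProgress(heldMs, firstDurationMs) >= 1;
}
```

The clamp to [0, 1] is what makes the guide a faithful anti-mistouch gate: any press shorter than the first duration leaves the ring visibly unfinished and the identifier hidden.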
In these steps of the embodiment, by setting the guide identifier and displaying the component identifier only after the long press has lasted the first duration, mistaken touches are avoided, improving interaction efficiency and user experience. It can be understood that in other embodiments, for example based on the implementation in the embodiment shown in Fig. 2, the component identifier may be displayed directly in response to the long-press gesture without displaying the guide identifier. Steps S202-S203 are therefore optional; displaying the component identifier using, for example, the method provided by the embodiment shown in Fig. 2 does not affect the normal execution of subsequent steps.
Step S204: obtaining the second contact coordinate corresponding to the drag gesture, where the second contact coordinate represents the coordinate of the real-time contact point of the drag gesture on the video playback interface.
Step S205: updating the display position of the component identifier in the component container according to the second contact coordinate.
Illustratively, after the user applies a drag gesture to the component identifier, the terminal device obtains, by detecting the drag gesture, the corresponding second contact coordinate, i.e., the coordinate of the real-time contact point of the drag gesture. Then, based on the second contact coordinate, the component identifier is synchronously rendered at the position of the second contact coordinate, so that the component identifier moves synchronously with the drag gesture.
Step S206: obtaining the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates.
Step S207: if the drag gesture trajectory is the same as a preset target trajectory, determining the second contact coordinate as the end-point coordinate.
Illustratively, after the second contact coordinate representing the real-time position of the drag gesture is obtained, the drag gesture trajectory can be derived from the current second contact coordinate together with the second contact coordinates corresponding to the historical positions of the drag gesture. Fig. 11 is a schematic diagram of a drag gesture trajectory provided by an embodiment of the present disclosure. As shown in Fig. 11, as the drag gesture keeps moving, the terminal device, by detecting the contact points of the drag gesture in the video playback interface, obtains an ordered set of second contact coordinates; this set of second contact coordinates is the drag gesture trajectory. As the drag gesture keeps moving, the corresponding drag gesture trajectory changes with it. When the drag gesture trajectory is detected to coincide with the preset target trajectory, the triggered state is reached; at this time, the last second contact coordinate constituting the drag gesture trajectory, i.e., the most recently obtained second contact coordinate, is determined as the end-point coordinate. In the subsequent steps, if the second contact coordinate is detected to be the end-point coordinate, the hidden information is triggered, thus implementing an Easter-egg triggering manner based on the drag gesture trajectory. Illustratively, there may be one or more target trajectories; when there are multiple target trajectories, each target trajectory corresponds to one piece of hidden information.
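The trajectory comparison can be sketched as a point-by-point check against the preset target trajectory. This is an illustrative assumption about what "the same as" means: real gesture recognizers normally resample and normalize both trajectories first, and the tolerance value here is invented for the sketch.

```typescript
type Point = { x: number; y: number };

// Compare the accumulated drag trajectory against a preset target trajectory,
// point by point, within a distance tolerance. Only the most recent
// |target|-many drag points are compared, so the match can complete at any
// moment during the drag; the final compared point is the end-point coordinate.
function matchesTargetTrajectory(
  drag: Point[],
  target: Point[],
  tolerance: number,
): boolean {
  if (drag.length < target.length) return false;
  const tail = drag.slice(drag.length - target.length);
  return tail.every((p, i) => {
    const dx = p.x - target[i].x;
    const dy = p.y - target[i].y;
    return dx * dx + dy * dy <= tolerance * tolerance;
  });
}
```

Each preset target trajectory would be paired with its own piece of hidden information, so checking the drag against every configured trajectory selects which Easter egg to reveal.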
In this embodiment, by detecting the drag gesture trajectory and triggering hidden information according to it, the triggering manners of hidden information are further enriched. Because the drag gesture trajectory is composed of a series of ordered contact points (second contact coordinates), its information dimensionality is higher; by changing the target trajectory and triggering different hidden information with different target trajectories, the number of pieces of hidden information set in a video can be greatly increased, achieving more flexible and diverse hidden-information triggering manners.
Optionally, after step S206, the method further includes:
Step S208: based on the movement trajectory of the component identifier, updating the displayed identifier attribute of the component identifier, where the identifier attribute is used to represent the distance and/or direction between the current position of the component identifier and the target position.
Illustratively, the component identifier moves synchronously with the drag gesture, so the movement trajectory of the component identifier is identical to the drag gesture trajectory corresponding to the drag gesture. After the drag gesture trajectory is obtained, the terminal device further displays the identifier attribute of the component identifier dynamically according to changes in the drag gesture trajectory, thereby indicating the distance and/or direction between the current position of the component identifier and the target position. The identifier attribute refers to a visual attribute of the component identifier, for example one or more of the component identifier's color, size, shape, and transparency. More specifically, for example, as the drag gesture trajectory extends, the closer the component identifier is to the target position (the end-point coordinate), the lower its transparency, and the farther it is from the target position, the higher its transparency; in this way the identifier attribute guides the user to change the position and direction of the drag gesture and trigger the component identifier. As another example, as the drag gesture trajectory extends, the higher the degree of coincidence between the drag gesture trajectory and the target trajectory, the darker the color of the component identifier, and the lower the coincidence, the lighter the color, again guiding the user through the identifier attribute to change the position and direction of the drag gesture and ultimately to trigger the hidden information.
Illustratively, the identifier attribute indicates one of the following: the color of the component identifier, the size of the component identifier, the shape of the component identifier, or the transparency of the component identifier.
Fig. 12 is a schematic diagram of changes in an identifier attribute provided by an embodiment of the present disclosure. As shown in Fig. 12, when the end of the drag gesture trajectory is at position A, the size of the component identifier is R1; afterwards, when the drag gesture trajectory extends from position A to position B, the size of the component identifier is R2, where R2 is smaller than R1, indicating that the component identifier has moved farther from the target position; when the drag gesture trajectory extends from position A to position C, the size of the component identifier is R3, where R3 is larger than R1, indicating that the component identifier has moved closer to the target position. By observing the changes in the size of the component identifier in real time, the user can adjust the drag gesture, guiding the dragged component identifier to move to the target position and trigger the hidden information.
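The distance-to-size mapping of Fig. 12 can be sketched as a linear interpolation. The linearity, the clamping range, and the `minSize`/`maxSize`/`maxDistance` parameters are assumptions of this sketch; the disclosure only requires that the attribute change monotonically with distance to the target.

```typescript
// Map the distance between the identifier's current position and the target
// position onto a visual attribute -- here the identifier's size, matching
// Fig. 12: the closer the identifier is to the target, the larger it is drawn.
function identifierSize(
  distance: number,     // current distance to the target position
  maxDistance: number,  // distance at (or beyond) which the size bottoms out
  minSize: number,      // size shown when farthest away
  maxSize: number,      // size shown when at the target
): number {
  const clamped = Math.min(Math.max(distance, 0), maxDistance);
  const t = 1 - clamped / maxDistance; // 1 at the target, 0 when far away
  return minSize + t * (maxSize - minSize);
}
```

The same interpolation scheme applies unchanged to transparency or color depth: only the attribute being written differs.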
In this embodiment, by updating the identifier attribute of the component identifier in real time according to its movement trajectory, and guiding the user's drag gesture through the identifier attribute, hidden information can be triggered quickly, the difficulty of triggering hidden information is reduced, and the interaction efficiency of hidden information is improved.
Step S209: if the second contact coordinate is not the end-point coordinate, returning to step S204; if the second contact coordinate is the end-point coordinate, displaying the hidden information in the component container of the video playback interface.
Illustratively, after the above steps are executed, it is determined whether the trigger gesture has currently moved to the target position; if it is at the target position, the hidden information is triggered; otherwise the method returns to the earlier step and continues detecting the trigger gesture until it satisfies the preset requirement (for example, the trigger gesture trajectory matches the target trajectory). Specifically, if the second contact coordinate of the current contact point is the end-point coordinate, it is determined that the trigger gesture has moved to the target position; at this time the hidden information is triggered and displayed in the component container of the video playback interface. If the second contact coordinate of the current contact point is not the end-point coordinate, the method returns to step S204 to obtain a new second contact coordinate.
Corresponding to the multimedia component triggering method provided by the above embodiments, Fig. 13 is a structural block diagram of the multimedia component triggering apparatus provided by an embodiment of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown. Referring to Fig. 13, the multimedia component triggering apparatus 3 includes:
a playback module 31 configured to play a target video in a video playback interface;
a trigger module 32 configured to display a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface; and
a display module 33 configured to, in response to a drag gesture on the component identifier, move the component identifier, and after the component identifier moves to a target position, display hidden information in the video playback interface.
In an embodiment of the present disclosure, the video playback interface includes a video container and a component container arranged in an overlapping manner, where the video container is used to display the target video and the component container is used to display the component identifier and the hidden information.
In an embodiment of the present disclosure, the component container corresponds to at least one identifier trigger area; the trigger module 32 is specifically configured to: obtain the first contact coordinate corresponding to the trigger gesture and perform detection on the component container based on the first contact coordinate, the first contact coordinate representing the coordinate of the contact point of the trigger gesture on the video playback interface; and if the first contact coordinate lies within the identifier trigger area corresponding to the component container, trigger the component identifier event corresponding to the trigger gesture and respond to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area.
In an embodiment of the present disclosure, the component container includes a first hidden component; when responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area, the trigger module 32 is specifically configured to: set the component container to a first display layer and the video container to a second display layer, the first display layer being higher than or equal to the second display layer; and trigger the first hidden component in the component container to respond to the component identifier event, displaying the component identifier on a layer above or on the same layer as the target video.
In an embodiment of the present disclosure, the trigger module 32 is further configured to: if the first contact coordinate lies outside the identifier trigger area corresponding to the component container, trigger the video control event corresponding to the trigger gesture and respond to the video control event through the video container to execute the corresponding video control function.
In an embodiment of the present disclosure, the display module 33 is specifically configured to execute the following steps in a loop until a termination condition is reached: obtaining the second contact coordinate corresponding to the drag gesture, the second contact coordinate representing the coordinate of the real-time contact point of the drag gesture on the video playback interface; and updating the display position of the component identifier in the component container according to the second contact coordinate; where the termination condition is that the second contact coordinate is the end-point coordinate corresponding to the target position.
In an embodiment of the present disclosure, after obtaining the second contact coordinate corresponding to the drag gesture, the display module 33 is further configured to: obtain the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates; and when the drag gesture trajectory is the same as a preset target trajectory, determine the second contact coordinate as the end-point coordinate corresponding to the target position.
In an embodiment of the present disclosure, before the component identifier moves to the target position, the display module 33 is further configured to: based on the movement trajectory of the component identifier, update the displayed identifier attribute of the component identifier, where the identifier attribute is used to represent the distance and/or direction between the current position of the component identifier and the target position.
In an embodiment of the present disclosure, the identifier attribute indicates one of the following: the color of the component identifier, the size of the component identifier, the shape of the component identifier, or the transparency of the component identifier.
In an embodiment of the present disclosure, the trigger gesture is a long-press gesture; when displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface, the display module 33 is specifically configured to: in response to the long-press gesture on the identifier trigger area, display a guide identifier that changes over time; and after detecting that the long-press gesture has lasted a first duration, display the component identifier corresponding to the identifier trigger area.
The playback module 31, the trigger module 32, and the display module 33 are connected. The multimedia component triggering apparatus 3 provided by this embodiment can execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in Fig. 14, the electronic device 4 includes:
a processor 41, and a memory 42 communicatively connected to the processor 41;
the memory 42 stores computer-executable instructions; and
the processor 41 executes the computer-executable instructions stored in the memory 42 to implement the multimedia component triggering method in the embodiments shown in Figs. 2-12.
Optionally, the processor 41 and the memory 42 are connected through a bus 43.
The related descriptions can be understood with reference to the descriptions and effects corresponding to the steps in the embodiments corresponding to Figs. 2-12 and are not elaborated here.
Referring to Fig. 15, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in Fig. 15 is merely an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in Fig. 15, the electronic device 900 may include a processing apparatus (for example, a central processing unit or a graphics processing unit) 901, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to one another through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Generally, the following apparatuses can be connected to the I/O interface 905: input apparatuses 906 including, for example, a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 15 shows an electronic device 900 with various apparatuses, it should be understood that implementing or providing all of the shown apparatuses is not required; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium; the computer program contains program code for executing the methods shown in the flowcharts. In such embodiments, the computer program can be downloaded and installed from a network through the communication apparatus 909, installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted using any appropriate medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
The computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device is caused to execute the methods shown in the above embodiments.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not in some cases constitute a limitation on the unit itself; for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
The functions described herein above may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In a first aspect, according to one or more embodiments of the present disclosure, a multimedia component triggering method is provided, including:
playing a target video in a video playback interface; in response to a trigger gesture on an identifier trigger area in the video playback interface, displaying a component identifier; and in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface.
According to one or more embodiments of the present disclosure, the video playback interface includes a video container and a component container arranged in an overlapping manner, where the video container is used to display the target video and the component container is used to display the component identifier and the hidden information.
According to one or more embodiments of the present disclosure, the component container corresponds to at least one identifier trigger area; displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface includes: obtaining the first contact coordinate corresponding to the trigger gesture and performing detection on the component container based on the first contact coordinate, the first contact coordinate representing the coordinate of the contact point of the trigger gesture on the video playback interface; and if the first contact coordinate lies within the identifier trigger area corresponding to the component container, triggering the component identifier event corresponding to the trigger gesture and responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area.
According to one or more embodiments of the present disclosure, the component container includes a first hidden component; responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area includes: setting the component container to a first display layer and the video container to a second display layer, the first display layer being higher than or equal to the second display layer; and triggering the first hidden component in the component container to respond to the component identifier event, displaying the component identifier on a layer above or on the same layer as the target video.
According to one or more embodiments of the present disclosure, the method further includes: if the first contact coordinate lies outside the identifier trigger area corresponding to the component container, triggering the video control event corresponding to the trigger gesture and responding to the video control event through the video container to execute the corresponding video control function.
According to one or more embodiments of the present disclosure, in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface includes: executing the following steps in a loop until a termination condition is reached: obtaining the second contact coordinate corresponding to the drag gesture, the second contact coordinate representing the coordinate of the real-time contact point of the drag gesture on the video playback interface; and updating the display position of the component identifier in the component container according to the second contact coordinate; where the termination condition is that the second contact coordinate is the end-point coordinate corresponding to the target position.
According to one or more embodiments of the present disclosure, after obtaining the second contact coordinate corresponding to the drag gesture, the method further includes: obtaining the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates; and when the drag gesture trajectory is the same as a preset target trajectory, determining the second contact coordinate as the end-point coordinate corresponding to the target position.
According to one or more embodiments of the present disclosure, before the component identifier moves to the target position, the method further includes: based on the movement trajectory of the component identifier, updating the displayed identifier attribute of the component identifier, where the identifier attribute is used to represent the distance and/or direction between the current position of the component identifier and the target position.
According to one or more embodiments of the present disclosure, the identifier attribute indicates one of the following: the color of the component identifier, the size of the component identifier, the shape of the component identifier, or the transparency of the component identifier.
According to one or more embodiments of the present disclosure, the trigger gesture is a long-press gesture; displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface includes: in response to the long-press gesture on the identifier trigger area, displaying a guide identifier that changes over time; and after detecting that the long-press gesture has lasted a first duration, displaying the component identifier corresponding to the identifier trigger area.
In a second aspect, according to one or more embodiments of the present disclosure, a multimedia component triggering apparatus is provided, including:
a playback module configured to play a target video in a video playback interface;
a trigger module configured to display a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface; and
a display module configured to, in response to a drag gesture on the component identifier, move the component identifier, and after the component identifier moves to a target position, display hidden information in the video playback interface.
According to one or more embodiments of the present disclosure, the video playback interface includes a video container and a component container arranged in an overlapping manner, where the video container is used to display the target video and the component container is used to display the component identifier and the hidden information.
According to one or more embodiments of the present disclosure, the component container corresponds to at least one identifier trigger area; the trigger module is specifically configured to: obtain the first contact coordinate corresponding to the trigger gesture and perform detection on the component container based on the first contact coordinate, the first contact coordinate representing the coordinate of the contact point of the trigger gesture on the video playback interface; and if the first contact coordinate lies within the identifier trigger area corresponding to the component container, trigger the component identifier event corresponding to the trigger gesture and respond to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area.
According to one or more embodiments of the present disclosure, the component container includes a first hidden component; when responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area, the trigger module is specifically configured to: set the component container to a first display layer and the video container to a second display layer, the first display layer being higher than or equal to the second display layer; and trigger the first hidden component in the component container to respond to the component identifier event, displaying the component identifier on a layer above or on the same layer as the target video.
According to one or more embodiments of the present disclosure, the trigger module is further configured to: if the first contact coordinate lies outside the identifier trigger area corresponding to the component container, trigger the video control event corresponding to the trigger gesture and respond to the video control event through the video container to execute the corresponding video control function.
According to one or more embodiments of the present disclosure, the display module is specifically configured to execute the following steps in a loop until a termination condition is reached: obtaining the second contact coordinate corresponding to the drag gesture, the second contact coordinate representing the coordinate of the real-time contact point of the drag gesture on the video playback interface; and updating the display position of the component identifier in the component container according to the second contact coordinate; where the termination condition is that the second contact coordinate is the end-point coordinate corresponding to the target position.
According to one or more embodiments of the present disclosure, after obtaining the second contact coordinate corresponding to the drag gesture, the display module is further configured to: obtain the drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates; and when the drag gesture trajectory is the same as a preset target trajectory, determine the second contact coordinate as the end-point coordinate corresponding to the target position.
According to one or more embodiments of the present disclosure, before the component identifier moves to the target position, the display module is further configured to: based on the movement trajectory of the component identifier, update the displayed identifier attribute of the component identifier, where the identifier attribute is used to represent the distance and/or direction between the current position of the component identifier and the target position.
According to one or more embodiments of the present disclosure, the identifier attribute indicates one of the following: the color of the component identifier, the size of the component identifier, the shape of the component identifier, or the transparency of the component identifier.
According to one or more embodiments of the present disclosure, the trigger gesture is a long-press gesture; when displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface, the display module is specifically configured to: in response to the long-press gesture on the identifier trigger area, display a guide identifier that changes over time; and after detecting that the long-press gesture has lasted a first duration, display the component identifier corresponding to the identifier trigger area.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device is provided, including: a processor, and a memory communicatively connected to the processor;
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory to implement the multimedia component triggering method described in the first aspect and its various possible designs.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, storing computer-executable instructions which, when executed by a processor, implement the multimedia component triggering method described in the first aspect and its various possible designs.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the multimedia component triggering method described in the first aspect and its various possible designs.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the multimedia component triggering method described in the first aspect and its various possible designs.
The above description is merely an explanation of the preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept — for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. On the contrary, the specific features and acts described above are merely example forms of implementing the claims.

Claims (15)

  1. A multimedia component triggering method, comprising:
    playing a target video in a video playback interface;
    in response to a trigger gesture on an identifier trigger area in the video playback interface, displaying a component identifier; and
    in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface.
  2. The method according to claim 1, wherein the video playback interface comprises a video container and a component container arranged in an overlapping manner, wherein the video container is used to display the target video and the component container is used to display the component identifier and the hidden information.
  3. The method according to claim 2, wherein the component container corresponds to at least one identifier trigger area; and displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface comprises:
    obtaining a first contact coordinate corresponding to the trigger gesture and performing detection on the component container based on the first contact coordinate, the first contact coordinate representing the coordinate of the contact point of the trigger gesture on the video playback interface; and
    if the first contact coordinate lies within the identifier trigger area corresponding to the component container, triggering a component identifier event corresponding to the trigger gesture, and responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area.
  4. The method according to claim 3, wherein the component container comprises a first hidden component; and responding to the component identifier event through the component container so as to display the component identifier corresponding to the identifier trigger area comprises:
    setting the component container to a first display layer and the video container to a second display layer, the first display layer being higher than or equal to the second display layer; and
    triggering the first hidden component in the component container to respond to the component identifier event, displaying the component identifier on the same layer as the target video.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    if the first contact coordinate lies outside the identifier trigger area corresponding to the component container, triggering a video control event corresponding to the trigger gesture, and responding to the video control event through the video container to execute a corresponding video control function.
  6. The method according to claim 2, wherein in response to a drag gesture on the component identifier, moving the component identifier, and after the component identifier moves to a target position, displaying hidden information in the video playback interface comprises:
    executing the following steps in a loop until a termination condition is reached:
    obtaining a second contact coordinate corresponding to the drag gesture, the second contact coordinate representing the coordinate of the real-time contact point of the drag gesture on the video playback interface; and
    updating the display position of the component identifier in the component container according to the second contact coordinate;
    wherein the termination condition is that the second contact coordinate is the end-point coordinate corresponding to the target position.
  7. The method according to claim 6, wherein after obtaining the second contact coordinate corresponding to the drag gesture, the method further comprises:
    obtaining a drag gesture trajectory corresponding to the drag gesture according to the second contact coordinates; and
    when the drag gesture trajectory is the same as a preset target trajectory, determining the second contact coordinate as the end-point coordinate corresponding to the target position.
  8. The method according to any one of claims 1 to 7, wherein before the component identifier moves to the target position, the method further comprises:
    based on the movement trajectory of the component identifier, updating the displayed identifier attribute of the component identifier, wherein the identifier attribute is used to represent the distance and/or direction between the current position of the component identifier and the target position.
  9. The method according to claim 8, wherein the identifier attribute indicates one of the following:
    the color of the component identifier, the size of the component identifier, the shape of the component identifier, or the transparency of the component identifier.
  10. The method according to claim 1, 8, or 9, wherein the trigger gesture is a long-press gesture, and displaying a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface comprises:
    in response to the long-press gesture on the identifier trigger area, displaying a guide identifier that changes over time; and
    after detecting that the long-press gesture has lasted a first duration, displaying the component identifier corresponding to the identifier trigger area.
  11. A multimedia component triggering apparatus, comprising:
    a playback module configured to play a target video in a video playback interface;
    a trigger module configured to display a component identifier in response to a trigger gesture on an identifier trigger area in the video playback interface; and
    a display module configured to, in response to a drag gesture on the component identifier, move the component identifier, and after the component identifier moves to a target position, display hidden information in the video playback interface.
  12. An electronic device, comprising: a processor, and a memory communicatively connected to the processor;
    the memory stores computer-executable instructions; and
    the processor executes the computer-executable instructions stored in the memory to implement the multimedia component triggering method according to any one of claims 1 to 10.
  13. A computer-readable storage medium, storing computer-executable instructions which, when executed by a processor, implement the multimedia component triggering method according to any one of claims 1 to 10.
  14. A computer program product, comprising a computer program which, when executed by a processor, implements the multimedia component triggering method according to any one of claims 1 to 10.
  15. A computer program for implementing the multimedia component triggering method according to any one of claims 1 to 10.
PCT/CN2023/116534 2022-09-08 2023-09-01 Multimedia component triggering method and apparatus, electronic device, and storage medium WO2024051601A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211098313.6 2022-09-08
CN202211098313.6A CN117714791A (zh) 2022-09-08 2022-09-08 Multimedia component triggering method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2024051601A1 true WO2024051601A1 (zh) 2024-03-14

Family

ID=90150279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/116534 WO2024051601A1 (zh) 2022-09-08 2023-09-01 多媒体组件触发方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN117714791A (zh)
WO (1) WO2024051601A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383642A (zh) * 2016-09-09 2017-02-08 北京金山安全软件有限公司 一种媒体播放界面的控件的显示方法及相关装置
CN108536344A (zh) * 2018-01-09 2018-09-14 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN109246464A (zh) * 2018-08-22 2019-01-18 Oppo广东移动通信有限公司 用户界面显示方法、装置、终端及存储介质
CN111314759A (zh) * 2020-03-02 2020-06-19 腾讯科技(深圳)有限公司 视频处理方法、装置、电子设备及存储介质
US20210191578A1 (en) * 2016-06-12 2021-06-24 Apple Inc. User interfaces for retrieving contextually relevant media content
CN113031842A (zh) * 2021-04-12 2021-06-25 北京有竹居网络技术有限公司 基于视频的交互方法、装置、存储介质及电子设备
CN113961121A (zh) * 2021-10-20 2022-01-21 维沃移动通信有限公司 对象处理方法、装置、电子设备及存储介质
CN114422843A (zh) * 2022-03-10 2022-04-29 北京达佳互联信息技术有限公司 视频彩蛋的播放方法、装置、电子设备及介质
CN114519155A (zh) * 2020-11-20 2022-05-20 腾讯科技(深圳)有限公司 数据处理方法、装置、客户端及存储介质
CN114692038A (zh) * 2022-03-28 2022-07-01 北京字跳网络技术有限公司 页面显示方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN117714791A (zh) 2024-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862295

Country of ref document: EP

Kind code of ref document: A1