WO2024041568A1 - A live video processing method, apparatus, device, and medium - Google Patents

A live video processing method, apparatus, device, and medium

Info

Publication number
WO2024041568A1
WO2024041568A1 · PCT/CN2023/114440 · CN2023114440W
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
user
image
special
displayed
Prior art date
Application number
PCT/CN2023/114440
Other languages
English (en)
French (fr)
Inventor
刘燕
雍子馨
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024041568A1

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4312: ... involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314: ... for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/4318: ... by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205: ... for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: ... communicating with other users, e.g. chatting
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/8146: ... involving graphical data, e.g. 3D object, 2D graphics

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to a live video processing method, apparatus, device, and medium.
  • Embodiments of the present application provide a live video processing method, apparatus, device, and medium to shorten the special effect reuse path and improve the user experience.
  • In a first aspect, a live video processing method is provided, the method including:
  • displaying a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user;
  • in response to the second display image displaying a first special effect image, displaying a first special effect logo corresponding to the first special effect image on the live broadcast room interface; and
  • in response to the first user's triggering operation on the first special effect logo, displaying the first special effect image on the first display image.
  • In a second aspect, a live video processing device is provided, including:
  • a first display unit configured to display a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user;
  • a second display unit configured to display a first special effect logo corresponding to a first special effect image on the live broadcast room interface in response to the second display image displaying the first special effect image; and
  • a third display unit configured to display the first special effect image on the first display image in response to the first user's triggering operation on the first special effect logo.
  • In a third aspect, an electronic device is provided, including a processor and a memory;
  • the memory is configured to store instructions or a computer program;
  • the processor is configured to execute the instructions or computer program in the memory, so that the electronic device performs the method described in the first aspect.
  • In a fourth aspect, a computer-readable storage medium is provided, in which instructions are stored; when the instructions are run on a device, they cause the device to perform the method described in the first aspect.
  • In a fifth aspect, a computer program product is provided, including a computer program/instructions; when the computer program/instructions are executed, the method described in the first aspect is implemented.
  • The live broadcast room interface corresponding to the first user displays a first display image corresponding to the first user and a second display image corresponding to the second user. If the second display image displays a first special effect image, the first special effect logo corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user. In response to the first user's triggering operation on the first special effect logo, the first special effect image is displayed on the first display image. That is, by displaying the first special effect logo on the live broadcast room interface corresponding to the first user, a quick entrance for reusing the first special effect image is provided to the first user without cumbersome operations, improving the user experience. Moreover, multiple co-streaming (Lianmai) users can quickly use the same special effect images, which enriches the interaction in the live broadcast room and improves the interactive atmosphere of co-streaming scenes, especially PK scenes.
  • Figure 1 is a schematic diagram of displaying a special effects panel provided by an embodiment of the present application.
  • Figure 2 is a flow chart of a live video processing method provided by an embodiment of the present application.
  • Figure 3a is a schematic diagram of displaying a first special effect logo on a live broadcast room interface provided by an embodiment of the present application
  • Figure 3b is a schematic diagram of displaying a user logo on a first special effect logo provided by an embodiment of the present application
  • Figure 3c is a schematic diagram of displaying a user identification on a special effects panel provided by an embodiment of the present application.
  • Figure 4a is a schematic diagram of displaying two user logos on a special effects panel provided by an embodiment of the present application
  • Figure 4b is another schematic diagram of displaying two user logos on a special effects panel provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a live video processing device provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Co-streaming (Lianmai) live broadcasting can take place between different anchors, or between an anchor and audience members of the live broadcast room.
  • The schematic diagram of the live broadcast room interface shown in Figure 1 takes the scenario in which multiple anchors are co-streaming as an example.
  • a navigation bar is displayed at the bottom of the live broadcast room interface corresponding to anchor A.
  • The navigation bar includes an interactive control 201, a sharing control 202, an enhancement control 203, and a more control 204.
  • the interactive control 201 is used for anchor A to interact with the audience in the live broadcast room
  • the sharing control 202 is used for anchor A to share the current live broadcast room with other users
  • the enhancement control 203 is used to enhance the display image corresponding to the current anchor A.
  • The more control 204 is used to display additional functional controls in response to anchor A's triggering operation. As can be seen from Figure 1, when anchor A wants to reuse a certain special effect image, he first clicks the enhancement control 203 to display the enhancement panel, which includes a beauty control, a special effects control, and a sticker control. In response to anchor A's triggering operation on the special effects control, a special effects panel is displayed, which includes a variety of special effect logos.
  • Anchor A then needs to click a certain special effect logo (for example, the logo corresponding to the special effect image used by anchor B) to trigger use of the corresponding special effect image. The path for anchor A to reuse special effect images is therefore long, which degrades the user experience.
  • To this end, this application provides a live video processing method, which specifically includes: displaying a live broadcast room interface corresponding to the first user, the live broadcast room interface including a first display image corresponding to the first user and a second display image corresponding to the second user.
  • a first special effect logo corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user.
  • The first special effect logo provides the first user with a quick entrance for using the first special effect image.
  • the first special effect image is displayed on the first display image.
  • That is, this application provides the first user with an entrance for quickly using the first special effect image by displaying the first special effect logo on the first user's live broadcast room interface; when the first user triggers the first special effect logo,
  • the first special effect image is displayed on the first display image, enhancing the user experience.
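The flow described above (surface a reusable-effect logo when a co-streamer's image carries an effect, then apply that same effect when the first user triggers the logo) can be sketched as follows. This is a minimal illustrative model, not the application's actual implementation; all class, attribute, and method names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class LiveRoomInterface:
    """Illustrative sketch of the quick-reuse entrance for special effects."""
    first_user_effects: set = field(default_factory=set)  # effects on the first display image
    visible_logos: set = field(default_factory=set)       # effect logos shown on the interface

    def on_remote_effect_displayed(self, effect_id: str) -> None:
        # The second display image shows an effect: surface its logo,
        # but only while the first user is not already using that effect.
        if effect_id not in self.first_user_effects:
            self.visible_logos.add(effect_id)

    def on_logo_triggered(self, effect_id: str) -> None:
        # Triggering the logo applies the same effect to the first display image.
        if effect_id in self.visible_logos:
            self.first_user_effects.add(effect_id)

room = LiveRoomInterface()
room.on_remote_effect_displayed("cat_ears")  # anchor B starts using an effect
room.on_logo_triggered("cat_ears")           # anchor A taps the surfaced logo
print(room.first_user_effects)  # {'cat_ears'}
```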
  • Figure 2 shows a flow chart of a live video processing method provided by an embodiment of the present application.
  • the method can be executed by a live broadcast client.
  • the live broadcast client is a live broadcast client corresponding to the first user and can be installed in an electronic device.
  • electronic devices can include mobile phones, tablets, laptops, desktop computers, vehicle-mounted terminals, wearable electronic devices, all-in-one machines, smart home devices and other devices with communication functions, or they can be devices simulated by virtual machines or simulators.
  • the method may include the following steps:
  • S201: Display the live broadcast room interface, which includes a first display image corresponding to the first user and a second display image corresponding to the second user.
  • the live broadcast room interface including the first display image corresponding to the first user and the second display image corresponding to the second user is displayed.
  • The live broadcast room interface may be a co-streaming (Lianmai) live broadcast room interface.
  • The first user and the second user are users participating in the co-stream.
  • The first user can be the anchor or a guest of the live broadcast room, and can be the initiator/inviter of the co-stream or its invitee.
  • Likewise, the second user can be the initiator of the co-stream or an invitee.
  • The following takes an anchor-to-anchor co-streaming scenario as an example.
  • The first user is the inviter of the co-stream;
  • the second/third/fourth/fifth users are invitees of the co-stream.
  • S202: If the first special effect image is displayed on the second display image,
  • the first special effect logo corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user. It should be noted that the first special effect logo is displayed on that interface only while the first user is not using the first special effect image.
  • The first special effect logo refers to a logo associated with the first special effect/first special effect image.
  • the first special effects logo can be displayed in different places.
  • the first special effects logo can be displayed in the live broadcast room interface, on the first control, in the navigation bar, in the special effects panel, etc.
  • Any special effect logo associated with the first special effect/first special effect image can be called the first special effect logo.
  • When the first special effect logo is displayed on the live broadcast room interface corresponding to the first user, it may be displayed on the navigation bar of that interface; further, it can be displayed on the first control shown in the navigation bar.
  • the first control is used to perform enhancement processing on the first display image, and the enhancement processing may include one or more of beautification, filtering and special effects processing.
  • the live broadcast room interface displays the first control, and in response to the second display image displaying the first special effect image, the first special effect logo is displayed on the first control.
  • the first control includes an entrance to the special effects panel.
  • For example, anchor B uses a special effect in the live broadcast room, so that anchor B's display image includes a special effect image; the special effect logo corresponding to that special effect image is then displayed on the first control (the enhancement control), thereby providing anchor A with a quick entrance for using this special effect image.
  • In addition, the user identification of the second user can also be displayed on the first special effect logo shown on the live broadcast room interface corresponding to the first user.
  • the user identification may be an avatar of the second user, a thumbnail of the second display image and/or a nickname, etc.
  • the avatar of anchor B is displayed on the special effects logo.
  • the display position of the user logo on the first special effect logo can be set according to the actual application situation. For example, as shown in Figure 3b, the avatar of anchor B is displayed in the upper left corner of the first special effect logo.
  • In one implementation, the duration for which the first special effect image has been displayed on the second display image can be counted; if the display duration is greater than or equal to a second preset duration, the first special effect logo is displayed on the live broadcast room interface corresponding to the first user, thereby ensuring the stability of the information displayed on the interface. For example, if the second preset duration is 10 seconds, the first special effect logo is displayed on the live broadcast room interface corresponding to the first user only after the second user has used the first special effect image for more than 10 seconds.
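The second-preset-duration check above can be sketched as a simple predicate. The 10-second value comes from the text's example; the function name is illustrative.

```python
SECOND_PRESET_DURATION = 10.0  # seconds; example value from the text

def should_show_logo(effect_start: float, now: float,
                     threshold: float = SECOND_PRESET_DURATION) -> bool:
    """Show the first special effect logo only once the remote effect has
    been displayed for at least the second preset duration."""
    return (now - effect_start) >= threshold

assert not should_show_logo(effect_start=100.0, now=105.0)  # 5 s: too short
assert should_show_logo(effect_start=100.0, now=112.0)      # 12 s: show the logo
```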
  • S203: In response to the first user's triggering operation on the first special effect logo, display the first special effect image on the first display image.
  • The first special effect image is displayed on the first display image. That is, the first special effect logo provides the first user with quick access to the first special effect image.
  • In summary, the live broadcast room interface displays the first display image corresponding to the first user and the second display image corresponding to the second user; if the second display image displays the first special effect image,
  • the live broadcast room interface corresponding to the first user displays the first special effect logo corresponding to the first special effect image.
  • In response to the first user's triggering operation on the first special effect logo, the first special effect image is displayed on the first display image. That is, displaying the first special effect logo on the live broadcast room interface corresponding to the first user provides the first user with a quick entrance for reusing the first special effect image without cumbersome operations,
  • improving the user experience. Moreover, multiple co-streaming users can quickly use the same special effect images, which enriches the interaction in the live broadcast room and improves the interactive atmosphere of co-streaming scenes, especially PK scenes.
  • displaying the first special effect image on the first display image can be implemented in the following manner:
  • One implementation manner is to directly display the first special effect image on the first display image in response to the first user's click operation on the first special effect logo. That is, when the click operation of the first user on the first special effect logo is detected, indicating that the first user wants to use the first special effect image, the first special effect image is directly added to the first display image.
  • Another implementation is to display a special effects panel in response to the first user's triggering operation on the first special effect logo, where the special effects panel includes the first special effect logo; in response to the first user triggering the first special effect logo on the special effects panel, the first special effect image is displayed on the first display image.
  • Further, the user identification of the second user can also be displayed on the first special effect logo shown on the special effects panel.
  • For example, as shown in Figure 3c, anchor A triggers the first special effect logo displayed on the live broadcast room interface to evoke the special effects panel;
  • the first special effect logo is displayed on the special effects panel, and anchor B's avatar is displayed on the first special effect logo.
  • In some embodiments, when the live broadcast room interface also includes a third display image corresponding to a third user, and the third display image includes the first special effect image, the method further includes: displaying the user identification of the third user on the first special effect logo shown on the special effects panel. That is, when other users in the live broadcast room also use the first special effect image, the user identifications of all users using it are displayed on the first special effect logo shown on the special effects panel.
  • When the user identifications of the second user and the third user are both displayed on the first special effect logo, they can be displayed at different positions of the first special effect logo, so that the display positions of the two user identifications do not overlap.
  • the avatar of anchor B is displayed in the upper left corner of the first special effects logo
  • the avatar of anchor C is displayed in the upper right corner of the first special effects logo.
  • In one implementation, each user's display position can be determined according to the start time at which that user began using the first special effect image, and the user's identification is then displayed at that position. That is, a correspondence between different start times and display positions can be set in advance, and the matching display position is determined from this correspondence. For example, the user who most recently started using the first special effect image is shown in the upper left corner of the first special effect logo, and the user who started using it earliest is shown in the lower right corner.
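The start-time-to-position correspondence can be sketched as follows. The corner names and the sequential assignment are illustrative assumptions; the actual preset correspondence (e.g. placing the earliest user in the lower right, as in the text's example) can be configured differently.

```python
def assign_corner_positions(start_times: dict) -> dict:
    """Map each user's identification to a corner of the special effect logo
    by activation time: the most recent user comes first. Corner names and
    the sequential order are illustrative."""
    corners = ["upper_left", "upper_right", "lower_left", "lower_right"]
    # Most recently started users first.
    ordered = sorted(start_times, key=start_times.get, reverse=True)
    return {user: corners[i] for i, user in enumerate(ordered[:len(corners)])}

# Anchor B started later than anchor C, so anchor B takes the upper left.
positions = assign_corner_positions({"anchor_B": 30.0, "anchor_C": 10.0})
print(positions)  # {'anchor_B': 'upper_left', 'anchor_C': 'upper_right'}
```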
  • A quantity threshold for displayed user identifications can also be set in advance.
  • When the number of users using the first special effect image exceeds this threshold, the threshold number of users is determined based on start time, and the user identification of each of these users is displayed on the first special effect logo.
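Selecting which user identifications to show under the quantity threshold can be sketched as keeping the most recent users; the threshold value of 3 and the function name are assumptions.

```python
def logos_to_display(user_start_times: dict, max_logos: int = 3) -> list:
    """When more users share the effect than the preset quantity threshold,
    keep only the `max_logos` users who started using it most recently."""
    ordered = sorted(user_start_times, key=user_start_times.get, reverse=True)
    return ordered[:max_logos]

users = {"B": 5.0, "C": 9.0, "D": 2.0, "E": 7.0}
print(logos_to_display(users))  # ['C', 'E', 'B']  (D, the earliest, is dropped)
```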
  • Alternatively, the display can be performed as follows: the user identification of the third user and the user identification of the second user are superimposed on the first special effect logo,
  • so that the display area occupied by the second user's identification on the first special effect logo partially overlaps the display area occupied by the third user's identification.
  • Further, the overlay positions of the third user's identification and the second user's identification can be determined based on the time at which the third user activated the first special effect image and the time at which the second user activated the first special effect image.
  • For example, the user identification of the second user is superimposed on the user identification of the third user.
  • As shown in Figure 4b, the first special effect logo displays anchor B's avatar and anchor C's avatar; since anchor C started using the first special effect image earlier than anchor B, anchor B's avatar is displayed, offset, on top of anchor C's avatar.
  • In some embodiments, the method further includes: in response to the first user not triggering the first special effect logo within a first preset duration, cancelling the display of the first special effect logo on the live broadcast room interface. That is, the duration for which the first special effect logo is allowed to remain on the live broadcast room interface can be set in advance; when the display duration reaches the first preset duration, the logo is automatically removed.
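The automatic cancellation after the first preset duration can be sketched as a visibility predicate. The 30-second value is an assumed example; the text does not specify one.

```python
def logo_still_visible(shown_at: float, now: float,
                       first_preset_duration: float = 30.0) -> bool:
    """The logo is cancelled automatically once it has been on screen for the
    first preset duration without being triggered (30 s is an assumed value)."""
    return (now - shown_at) < first_preset_duration

assert logo_still_visible(shown_at=0.0, now=29.0)       # still shown
assert not logo_still_visible(shown_at=0.0, now=30.0)   # expired, removed
```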
  • When a second special effect image is also in use in the live broadcast room, displaying the first special effect logo on the live broadcast room interface includes: displaying the first special effect logo and the second special effect logo corresponding to the second special effect image in rotation (carousel) on the live broadcast room interface. That is, when multiple users other than the first user use different special effect images, the corresponding special effect logos are displayed in rotation on the live broadcast room interface corresponding to the first user, providing the first user with quick entrances for reusing the various special effect images and enhancing the user experience. Alternatively, the first special effect logo and the second special effect logo are displayed superimposed on the live broadcast room interface.
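The carousel display of multiple special effect logos can be sketched as time-sliced rotation; the 3-second rotation interval and function name are assumptions.

```python
def carousel_logo(logos: list, now: float, interval: float = 3.0) -> str:
    """Rotate through the special effect logos, showing one at a time.
    A fixed rotation interval of 3 s is an illustrative assumption."""
    index = int(now // interval) % len(logos)
    return logos[index]

logos = ["effect_1", "effect_2"]
assert carousel_logo(logos, now=0.0) == "effect_1"  # first 3 s window
assert carousel_logo(logos, now=4.0) == "effect_2"  # second window
assert carousel_logo(logos, now=7.0) == "effect_1"  # wraps around
```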
  • In response to the triggering operation, the first special effect image and the second special effect image are displayed on the first display image. That is, through one triggering operation, multiple special effect images can be added to the first display image, improving the convenience of adding multiple special effect images.
  • In some embodiments, in response to the first user's triggering operation on the first special effect logo, a special effects panel may also be displayed.
  • The special effects panel is used to display multiple special effect logos, including the first special effect logo. In response to a selection operation by the first user on a third special effect logo displayed on the special effects panel, the first special effect image displayed on the first display image is replaced with the third special effect image corresponding to the third special effect logo. That is, when the first user's triggering operation on the first special effect logo is detected, the special effects panel is evoked directly, providing quick access to the panel so that the user can obtain special effect images of interest more efficiently, improving the user experience.
  • In some embodiments, the live broadcast room interface also includes a fifth display image corresponding to a fifth user, and a fourth special effect image is displayed on the fifth display image.
  • In this case, the method further includes: obtaining a first activation time at which the second user started using the first special effect image and a second activation time at which the fifth user started using the fourth special effect image; determining display positions of the first special effect logo and a fourth special effect logo on the special effects panel based on the first activation time and the second activation time; and displaying the first special effect logo at the display position corresponding to the first special effect logo and the fourth special effect logo at the display position corresponding to the fourth special effect logo.
  • the display position of the special effect logo on the special effects panel can be determined according to the time when each special effect image is activated, and then displayed at the corresponding display position.
  • For example, the special effect logos corresponding to the special effect images used by each user are displayed in order of activation time from latest to earliest, making it easy to see intuitively the order in which the special effect images were used. For instance, if anchor B started using special effect 1 before anchor C started using special effect 2, then the special effects panel displays special effect 2 before special effect 1.
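Ordering the panel logos from latest to earliest activation can be sketched as a reverse sort by activation time, matching the text's example of special effect 2 appearing before special effect 1.

```python
def panel_order(activation_times: dict) -> list:
    """Order special effect logos on the panel from the most recently
    activated to the earliest activated."""
    return sorted(activation_times, key=activation_times.get, reverse=True)

# Anchor B started effect_1 before anchor C started effect_2,
# so the panel lists effect_2 first.
order = panel_order({"effect_1": 10.0, "effect_2": 20.0})
print(order)  # ['effect_2', 'effect_1']
```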
  • In some embodiments, the fourth special effect logo can also be displayed on the live broadcast room interface, providing the first user with an entrance for quickly reusing the fourth special effect image.
  • For the fourth special effect logo and the related implementation of displaying the fourth special effect image on the first display image, refer to the description of the first special effect logo in S201-S203; details are not repeated in this embodiment.
  • FIG. 5 is a structural diagram of a live video processing device provided by an embodiment of the present application.
  • the device may include: a first display unit 501, a second display unit 502 and a third display unit 503 .
  • the first display unit 501 is used to display the live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to the first user and a second display image corresponding to the second user;
  • the second display unit 502 is configured to display the first special effect logo corresponding to the first special effect image on the live broadcast room interface in response to the second display image displaying the first special effect image;
  • the third display unit 503 is configured to display the first special effect image on the first display image in response to the first user's triggering operation on the first special effect logo.
  • the second display unit 502 is further configured to display the user identification of the second user on the first special effect identification displayed on the live broadcast room interface.
  • the third display unit 503 is specifically configured to directly display the first special effect image on the first display image in response to the first user's click operation on the first special effect logo.
  • the third display unit 503 is specifically configured to display a special effects panel in response to the first user's triggering operation on the first special effect logo, the special effects panel including the first special effect logo; and, in response to the first user's trigger operation on the first special effect logo on the special effects panel, display the first special effect image on the first display image.
  • the device further includes: a fourth display unit;
  • the fourth display unit is configured to display the user identification of the second user on the first special effect identification displayed on the special effects panel.
  • the live broadcast room interface also includes a third display image corresponding to a third user, the third display image includes the first special effects image, and the fourth display unit is also used to The user identification of the third user is displayed on the first special effect identification displayed on the special effects panel.
  • the fourth display unit is specifically configured to superimpose and display the user identification of the third user and the user identification of the second user on the first special effect identification, and the There is a partial overlap between the display area occupied by the second user's user identification on the first special effect identification and the display area occupied by the third user's user identification on the first special effect identification.
  • the live broadcast room interface also includes a fourth display image corresponding to a fourth user, the fourth display image displays a second special effect image, and the second display unit 502 is specifically used to The first special effect logo and the second special effect logo corresponding to the second special effect image are displayed in a carousel on the live broadcast room interface.
  • in response to the first user's triggering operation on the first special effect identification, the device further includes: a fifth display unit;
  • the fifth display unit is used to display a special effects panel, the special effects panel displays a plurality of special effect logos, and the plurality of special effect logos include the first special effect logo;
  • the third display unit 503 is further configured to, in response to the first user's selection operation triggered on the third special effect logo displayed on the special effects panel, replace the first special effect image displayed on the first display image with a third special effect image corresponding to the third special effect identification.
  • the live broadcast room interface also includes a fifth display image corresponding to the fifth user, a fourth special effect image is displayed on the fifth display image, and the device further includes: an acquisition unit, a determination unit and a sixth display unit;
  • the obtaining unit is configured to obtain the first activation time corresponding to the second user's use of the first special effect image and the second activation time corresponding to the fifth user's use of the fourth special effect image;
  • the determining unit is configured to determine the corresponding display positions of the first special effects logo and the fourth special effects logo on the special effects panel according to the first activation time and the second activation time;
  • the sixth display unit is configured to display the first special effects logo at a display position corresponding to the first special effects logo and to display the fourth special effects logo at a display position corresponding to the fourth special effects logo.
  • the live broadcast room interface displays a first control, and the second display unit 502 is configured to, in response to the second display image displaying the first special effect image, display the first special effect logo corresponding to the first special effect image on the first control, where the first control is used to apply enhancement processing to the first display image.
  • the device further includes: a processing unit;
  • the processing unit is configured to cancel the display of the first special effects logo on the live broadcast room interface in response to the first anchor not triggering the first special effects logo within a first preset time period.
  • each unit in this embodiment please refer to the relevant descriptions in the above method embodiments.
  • the division of units in the embodiments of this application is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • Each functional unit in the embodiment of the present application can be integrated into a processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the processing unit and the sending unit may be the same unit or different units.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players) and vehicle-mounted terminals (such as car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (e.g., central processing unit, graphics processor, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, ROM 602 and RAM 603 are connected to each other via a bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604.
  • the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speakers, a vibrator, etc.; and storage devices 608 including, for example, a magnetic tape, a hard disk, etc.
  • Communication device 609 may allow electronic device 600 to communicate wirelessly or wiredly with other devices to exchange data.
  • FIG. 6 illustrates electronic device 600 with various means, it should be understood that implementation or availability of all illustrated means is not required. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 609, or from storage device 608, or from ROM 602.
  • When the computer program is executed by the processing device 601, the above functions defined in the method of the embodiment of the present disclosure are performed.
  • the electronic device provided by the embodiments of the present disclosure belongs to the same inventive concept as the method provided by the above embodiments.
  • Technical details that are not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored. When the program is executed by a processor, the method provided by the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future-developed network protocol such as HTTP (Hypertext Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the above method.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware. Among them, the name of the unit/module does not constitute a limitation on the unit itself under certain circumstances.
  • Exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • At least one (item) refers to one or more, and “plurality” refers to two or more.
  • "And/or" is used to describe the relationship between associated objects, indicating that three relationships can exist. For example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural. The character "/" generally indicates that the related objects are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of a single item or a plurality of items.
  • For example, at least one of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
  • The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This application discloses a live streaming video processing method, apparatus, device and medium. A live broadcast room interface corresponding to a first user displays a first display image corresponding to the first user and a second display image corresponding to a second user. If the second display image displays a first special effect image, a first special effect identifier corresponding to the first special effect image is displayed on the live broadcast room interface. In response to a trigger operation by the first user on the first special effect identifier, the first special effect image is displayed on the first display image. That is, by displaying the first special effect identifier corresponding to the first special effect image on the live broadcast room interface corresponding to the first user, the first user is provided with a shortcut entrance for reusing the first special effect image, without requiring the first user to go through cumbersome operations, thereby improving the user experience.

Description

Live streaming video processing method, apparatus, device and medium
This application claims priority to Chinese Patent Application No. 202211014041.7, filed with the China National Intellectual Property Administration on August 23, 2022 and entitled "Live streaming video processing method, apparatus, device and medium", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a live streaming video processing method, apparatus, device and medium.
Background
With the continuous development of live streaming technology, more and more users socialize through live streaming. During a live stream, a user can add special effects to make the live broadcast room more interesting and attract more viewers. Furthermore, popularity can be increased through co-streaming, connection, or PK. In such a scenario, if a user wants to use a special effect used by a co-streaming user, the same effect can only be applied after cumbersome operations, which degrades the user experience.
Summary
In view of this, embodiments of this application provide a live streaming video processing method, apparatus, device and medium, which shorten the reuse path and improve the user experience.
To achieve the above objective, the technical solutions provided by this application are as follows:
In a first aspect of this application, a live streaming video processing method is provided, the method including:
displaying a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user;
in response to the second display image displaying a first special effect image, displaying a first special effect identifier corresponding to the first special effect image on the live broadcast room interface;
in response to a trigger operation by the first user on the first special effect identifier, displaying the first special effect image on the first display image.
In a second aspect of this application, a live streaming video processing apparatus is provided, the apparatus including:
a first display unit, configured to display a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user;
a second display unit, configured to, in response to the second display image displaying a first special effect image, display a first special effect identifier corresponding to the first special effect image on the live broadcast room interface;
a third display unit, configured to, in response to a trigger operation by the first user on the first special effect identifier, display the first special effect image on the first display image.
In a third aspect of this application, an electronic device is provided, the device including: a processor and a memory;
the memory is configured to store instructions or a computer program;
the processor is configured to execute the instructions or computer program in the memory, so that the electronic device performs the method described in the first aspect.
In a fourth aspect of this application, a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a device, the device is caused to perform the method described in the first aspect.
In a fifth aspect of this application, a computer program product is provided, the computer program product including a computer program/instructions which, when executed by a processor, implement the method described in the first aspect.
It can be seen that the embodiments of this application have the following beneficial effects:
In the embodiments of this application, the live broadcast room interface corresponding to the first user displays the first display image corresponding to the first user and the second display image corresponding to the second user. If the second display image displays the first special effect image, the first special effect identifier corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user. In response to a trigger operation by the first user on the first special effect identifier, the first special effect image is displayed on the first display image. That is, by displaying the first special effect identifier corresponding to the first special effect image on the live broadcast room interface corresponding to the first user, the first user is provided with a shortcut entrance for reusing the first special effect image without cumbersome operations, improving the user experience. Moreover, multiple co-streaming users quickly using the same special effect images enriches the interaction in the live broadcast room, which helps enhance the interactive atmosphere of co-streaming scenarios, especially PK scenarios.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments recorded in this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of displaying a special effects panel provided by an embodiment of this application;
FIG. 2 is a flowchart of a live streaming video processing method provided by an embodiment of this application;
FIG. 3a is a schematic diagram of displaying a first special effect identifier on a live broadcast room interface provided by an embodiment of this application;
FIG. 3b is a schematic diagram of displaying a user identification on the first special effect identifier provided by an embodiment of this application;
FIG. 3c is a schematic diagram of displaying a user identification on a special effects panel provided by an embodiment of this application;
FIG. 4a is a schematic diagram of displaying two user identifications on a special effects panel provided by an embodiment of this application;
FIG. 4b is another schematic diagram of displaying two user identifications on a special effects panel provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a live streaming video processing apparatus provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
At present, in a co-streaming live scenario, when one of the anchors wants to use a special effect image used by a co-streaming user, the special effects panel can only be invoked after multiple operations are triggered on the live broadcast room interface, and the special effect image used by the co-streaming user must then be clicked on the special effects panel to reuse the effect. This reuse path is long and degrades the user experience. Co-streaming may involve different anchors, or an anchor and a viewer of the live broadcast room.
For example, in the live broadcast room interface shown in FIG. 1, a scenario in which multiple anchors co-stream is taken as an example. There are three anchors in this live broadcast room, described from the perspective of anchor A. A navigation bar is displayed at the bottom of the live broadcast room interface corresponding to anchor A, and includes an interaction control 201, a sharing control 202, an enhancement control 203 and a more control 204. The interaction control 201 is used for anchor A to interact with viewers of the live broadcast room, the sharing control 202 is used for anchor A to share the current live broadcast room with other users, the enhancement control 203 is used to apply enhancement processing to the display image corresponding to anchor A, and the more control 204 is used to display more triggerable function controls in response to a trigger operation by anchor A. As can be seen from FIG. 1, when anchor A wants to reuse a special effect image, anchor A first clicks the enhancement control 203 to display an enhancement panel, which includes a beautification control, a special effects control and a sticker control. In response to anchor A's trigger operation on the special effects control, a special effects panel is displayed, which includes multiple special effect identifiers. Anchor A needs to click a certain special effect identifier (for example, the identifier corresponding to the special effect image used by anchor B) to trigger the use of the corresponding special effect image. It can be seen that the path for anchor A to reuse a special effect image is long, which affects the user experience.
On this basis, this application provides a live streaming video processing method, which specifically includes: displaying a live broadcast room interface corresponding to a first user, where the live broadcast room interface includes a first display image corresponding to the first user and a second display image corresponding to a second user. In response to the second display image displaying a first special effect image, a first special effect identifier corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user; the first special effect identifier provides the first user with an entrance for quickly using the first special effect image. In response to a trigger operation by the first user on the first special effect identifier, the first special effect image is displayed on the first display image. That is, by displaying the first special effect identifier on the live broadcast room interface of the first user, this application provides the first user with an entrance for quickly using the first special effect image, so that the first user can display the first special effect image on the first display image by triggering the first special effect identifier, improving the user experience.
To facilitate understanding of the technical solutions provided by the embodiments of this application, a description is given below with reference to the drawings. The display information of the live broadcast room interface is described below from the perspective of the first user in the live broadcast room.
Referring to FIG. 2, which shows a live streaming video processing method provided by an embodiment of this application, the method may be performed by a live streaming client, which is the live streaming client corresponding to the first user and may be installed in an electronic device. The electronic device may include a device with a communication function such as a mobile phone, a tablet computer, a laptop computer, a desktop computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, or a smart home device, or may be a device simulated by a virtual machine or a simulator. As shown in FIG. 2, the method may include the following steps:
S201: Display a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user.
In this embodiment, when the first user and the second user initiate co-streaming in the live broadcast room, the live broadcast room interface including the first display image corresponding to the first user and the second display image corresponding to the second user is displayed.
The live broadcast room interface may be a co-streaming live broadcast room interface, and the first user and the second user are users participating in the co-streaming. The first user may be the anchor or a guest of the live broadcast room, and may be the initiator/inviter of the co-streaming or an invitee; the second user may likewise be the initiator or an invitee of the co-streaming. The following description takes an anchor-to-anchor connection scenario as an example, in which the first user is the inviter of the connection and the second/third/fourth/fifth users are invitees.
S202: In response to the second display image displaying a first special effect image, display a first special effect identifier corresponding to the first special effect image on the live broadcast room interface.
In this embodiment, when the second user triggers the use of the first special effect image through the second user's own live broadcast room interface, the first special effect image is displayed on the second display image. When it is detected that the first special effect image is displayed on the second display image, the first special effect identifier corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user. It should be noted that the first special effect identifier is displayed on the live broadcast room interface corresponding to the first user on the condition that the first user is not using the first special effect image.
It should be noted that the first special effect identifier refers to an identifier associated with the first special effect/first special effect image. In different embodiments, the first special effect identifier may be displayed in different places; for example, it may be displayed in the live broadcast room interface, on the first control, in the navigation bar, or in the special effects panel. In the present disclosure, the display position of the first special effect identifier is not limited; any special effect identifier associated with the first special effect/first special effect image may be referred to as the first special effect identifier.
When the first special effect identifier is displayed on the live broadcast room interface corresponding to the first user, it may be displayed in the navigation bar of that interface. Further, the first special effect identifier may be displayed on the first control shown in the navigation bar, where the first control is used to apply enhancement processing to the first display image, and the enhancement processing may include one or more of beautification, filter and special effect processing. Specifically, the live broadcast room interface displays the first control, and in response to the second display image displaying the first special effect image, the first special effect identifier is displayed on the first control. The first control includes an entrance to the special effects panel.
For example, as shown in FIG. 3a, anchor B in the live broadcast room uses a special effect and the corresponding display image includes a special effect image; the special effect identifier corresponding to that special effect image is then displayed on the first control (the enhancement control), thereby providing anchor A with a shortcut entrance for using that special effect image.
In some implementations, to enable the first user to intuitively and clearly know which user is using the special effect identified on the live broadcast room interface, the user identification of the second user may also be displayed on the first special effect identifier displayed on the live broadcast room interface corresponding to the first user. The user identification may be the second user's avatar, a thumbnail of the second display image and/or a nickname, etc. For example, in the display effect diagram shown in FIG. 3b, anchor B's avatar is displayed on the special effect identifier. The display position of the user identification on the first special effect identifier may be set according to actual applications; for example, as shown in FIG. 3b, anchor B's avatar is displayed in the upper-left corner of the first special effect identifier.
In some implementations, when it is detected that the second display image displays the first special effect image, the duration for which the first special effect image is displayed on the second display image may be counted. If the display duration is greater than or equal to a second preset duration, the first special effect identifier is displayed on the live broadcast room interface corresponding to the first user, thereby ensuring the stability of the information displayed on the live broadcast room interface. For example, if the second preset duration is 10 seconds, the first special effect identifier is displayed on the live broadcast room interface corresponding to the first user when the second user has used the first special effect image for more than 10 seconds.
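The duration check described above can be sketched as a small pure function; `EffectUsage`, the field names and the function name are illustrative, not from the patent, and a monotonically increasing stream clock is assumed:

```python
from dataclasses import dataclass


@dataclass
class EffectUsage:
    # Hypothetical record of a co-streamer's effect use.
    effect_id: str
    started_at: float  # seconds on a monotonic stream clock


def should_show_shortcut(usage: EffectUsage, now: float,
                         second_preset_duration: float = 10.0) -> bool:
    """Show the shortcut identifier on the first user's interface only once the
    effect has stayed on the co-streamer's image for the preset duration."""
    return (now - usage.started_at) >= second_preset_duration
```

With a 10-second threshold, the identifier appears only after the co-streamer has kept the effect on screen for at least 10 seconds, filtering out effects that are toggled off immediately.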
S203: In response to a trigger operation by the first user on the first special effect identifier, display the first special effect image on the first display image.
In this embodiment, when the first user triggers the first special effect identifier displayed on the live broadcast room interface, the first special effect image is displayed on the first display image. That is, the first special effect identifier provides the first user with a shortcut entrance for using the first special effect image.
It can be seen that, when the live broadcast room interface displays the first display image corresponding to the first user and the second display image corresponding to the second user, and the second display image displays the first special effect image, the first special effect identifier corresponding to the first special effect image is displayed on the live broadcast room interface corresponding to the first user. In response to a trigger operation by the first user on the first special effect identifier, the first special effect image is displayed on the first display image. In this way, the first user is provided with a shortcut entrance for reusing the first special effect image without cumbersome operations, improving the user experience. Moreover, multiple co-streaming users quickly using the same special effect images enriches the interaction in the live broadcast room, which helps enhance the interactive atmosphere of co-streaming scenarios, especially PK scenarios.
Displaying the first special effect image on the first display image based on the first user's trigger operation on the first special effect identifier may be implemented in the following ways:
In one implementation, in response to the first user's click operation on the first special effect identifier, the first special effect image is directly displayed on the first display image. That is, when a click operation by the first user on the first special effect identifier is detected, indicating that the first user wants to use the first special effect image, the first special effect image is directly added to the first display image.
In another implementation, in response to the first user's trigger operation on the first special effect identifier, a special effects panel is displayed, the special effects panel including the first special effect identifier; in response to the first user's trigger operation on the first special effect identifier on the special effects panel, the first special effect image is displayed on the first display image. That is, when the first user's trigger operation on the first special effect identifier displayed on the live broadcast room interface is detected, the special effects panel is directly invoked, so that the first user triggers a confirmation operation on the first special effect identifier on the special effects panel, thereby displaying the first special effect image on the first display image. This embodiment thus provides a way to quickly invoke the special effects panel so that the first user can quickly reuse the first special effect image, improving the experience.
To enable the first user to intuitively know which users in the live broadcast room are using the special effect image corresponding to a special effect identifier displayed on the special effects panel, the user identification of the second user may also be displayed on the first special effect identifier displayed on the special effects panel. For example, as shown in FIG. 3c, anchor A invokes the special effects panel by triggering the first special effect identifier displayed on the live broadcast room interface; the special effects panel displays the first special effect identifier, and anchor B's avatar is displayed on that identifier.
In some implementations, when the live broadcast room interface also includes a third display image corresponding to a third user, and the third display image includes the first special effect image, the method further includes: displaying the user identification of the third user on the first special effect identifier displayed on the special effects panel. That is, when other users in the live broadcast room also use the first special effect image, the user identifications of all users using the first special effect image are displayed on the first special effect identifier displayed on the special effects panel.
When the user identification of the second user and the user identification of the third user are displayed on the first special effect identifier, they may be displayed at different positions on the first special effect identifier, with no overlap between their respective display positions. For example, as shown in FIG. 4a, anchor B's avatar is displayed in the upper-left corner of the first special effect identifier and anchor C's avatar in the upper-right corner, with no overlap between the display areas of the two avatars. When the user identifications of different users are displayed at different positions of the first special effect identifier, the display position may be determined according to the time at which each user started using the first special effect image, and the user identification is then displayed at that position. That is, a correspondence between different start times and display positions may be preset, and the matching display position is determined according to this correspondence. For example, the user identification of the user who most recently started using the first special effect image is displayed in the upper-left corner of the first special effect identifier, and the user identification of the earliest user in the lower-right corner.
In addition, a threshold on the number of displayed user identifications may be preset. When the number of users using the first special effect image exceeds the threshold, a threshold number of users is determined according to time, and the user identifications of those users are displayed on the first special effect identifier.
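The badge selection above (cap the number of user identifications and rank them by start time) can be sketched as follows; the function name and the latest-first ordering are illustrative assumptions, since the patent only requires that the threshold number of users be chosen "according to time":

```python
def select_badges(start_times: dict[str, float], limit: int = 3) -> list[str]:
    """Pick at most `limit` user ids whose effect start time is most recent,
    ordered latest-first so that slot 0 can map to e.g. the upper-left corner."""
    ranked = sorted(start_times, key=start_times.get, reverse=True)
    return ranked[:limit]
```

For four users who started the effect at times 10, 5, 20 and 1 with a limit of 2, only the two most recent starters keep a badge on the identifier.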
Alternatively, when the user identification of the second user and the user identification of the third user are displayed on the first special effect identifier, they may be displayed as follows: the user identification of the second user is superimposed on the user identification of the third user on the first special effect identifier, where the display area occupied by the second user's user identification on the first special effect identifier partially overlaps the display area occupied by the third user's user identification. Further, the superimposition positions of the two user identifications may be determined according to the time at which the third user started using the first special effect image and the time at which the second user started using the first special effect image. For example, if the third user started using the first special effect image earlier than the second user, the second user's user identification is superimposed on the third user's user identification. For example, as shown in FIG. 4b, anchor B's avatar and anchor C's avatar are displayed on the first special effect identifier; since anchor C started using the first special effect image earlier than anchor B, anchor B's avatar is displayed offset on top of anchor C's avatar.
In some implementations, the method further includes: in response to the first user not triggering the first special effect identifier within a first preset duration, cancelling the display of the first special effect identifier on the live broadcast room interface. That is, the duration for which the first special effect identifier is allowed to be displayed on the live broadcast room interface may be preset; when the display duration reaches the first preset duration, the display is automatically cancelled.
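The automatic withdrawal rule can be sketched as a single expiry predicate; the names and the 15-second default are hypothetical, since the patent does not fix a value for the first preset duration:

```python
def shortcut_expired(shown_at: float, now: float, triggered: bool,
                     first_preset_duration: float = 15.0) -> bool:
    """The shortcut identifier is withdrawn once it has been on screen for the
    preset duration without the first user having triggered it."""
    return (not triggered) and (now - shown_at) >= first_preset_duration
```

A client could evaluate this predicate on each render tick and remove the identifier from the interface when it returns True.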
In some implementations, when the live broadcast room interface also includes a fourth display image corresponding to a fourth user and the fourth display image displays a second special effect image, displaying the first special effect identifier on the live broadcast room interface includes: displaying the first special effect identifier and a second special effect identifier corresponding to the second special effect image in a carousel on the live broadcast room interface. That is, when multiple users other than the first user are using different special effect images, the special effect identifiers corresponding to the different special effect images are displayed in a carousel on the live broadcast room interface corresponding to the first user, providing the first user with shortcut entrances for reusing each special effect image and improving the user experience. Alternatively, the first special effect identifier and the second special effect identifier may be displayed superimposed on the live broadcast room interface. In the superimposed case, in response to the first user's trigger operation on the first special effect identifier, the first special effect image and the second special effect image are displayed on the first display image. That is, multiple special effect images can be added to the first display image through a single trigger operation, making it more convenient to add multiple special effect images.
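The carousel display can be sketched as a simple rotation over the identifier list; the function name and the per-identifier dwell time are illustrative assumptions:

```python
from typing import Optional


def carousel_identifier(effect_ids: list[str], elapsed: float,
                        dwell: float = 3.0) -> Optional[str]:
    """Rotate through the effect identifiers, showing each one for `dwell`
    seconds before moving on to the next, and wrapping around at the end."""
    if not effect_ids:
        return None  # nothing to show when no co-streamer has an effect on
    return effect_ids[int(elapsed // dwell) % len(effect_ids)]
```

Calling this on each render tick with the elapsed time yields the identifier to draw at that moment.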
In response to the first user's trigger operation on the first special effect identifier, a special effects panel may also be displayed, the special effects panel being used to show multiple special effect identifiers including the first special effect identifier; in response to a selection operation triggered by the first user on a third special effect identifier displayed on the special effects panel, the first special effect image displayed on the first display image is replaced with a third special effect image corresponding to the third special effect identifier. That is, when the first user's trigger operation on the first special effect identifier is detected, the special effects panel is directly invoked, enabling quick entry into the special effects panel so that the user can obtain special effect images of interest more efficiently, improving the user experience.
In some implementations, the live broadcast room interface also includes a fifth display image corresponding to a fifth user, and a fourth special effect image is displayed on the fifth display image. The method further includes: obtaining a first activation time corresponding to the second user's use of the first special effect image and a second activation time corresponding to the fifth user's use of the fourth special effect image; determining, according to the first activation time and the second activation time, the respective display positions of the first special effect identifier and a fourth special effect identifier on the special effects panel; and displaying the first special effect identifier at the display position corresponding to the first special effect identifier and the fourth special effect identifier at the display position corresponding to the fourth special effect identifier. That is, the display position of each special effect identifier on the special effects panel may be determined according to the time at which each special effect image was activated, and the identifier is then displayed at the corresponding position. For example, the special effect identifiers corresponding to the special effect images used by the users may be shown in order of activation time from latest to earliest, making it easy to intuitively understand the order in which the special effect images were used. For example, if anchor B started using effect 1 earlier than anchor C started using effect 2, the identifiers are shown on the special effects panel in the order: effect 2, effect 1.
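The latest-first panel ordering can be sketched as a one-line sort; the mapping from effect identifier to activation time is an assumed input format:

```python
def panel_order(activation_times: dict[str, float]) -> list[str]:
    """Order special effect identifiers on the panel from the latest activation
    time to the earliest, matching the anchor B / anchor C example above."""
    return sorted(activation_times, key=activation_times.get, reverse=True)
```

With effect 1 activated at t=100 and effect 2 at t=200, the panel lists effect 2 before effect 1, as in the example.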
Further, the fourth special effect identifier may also be displayed on the live broadcast room interface to provide the first user with a shortcut entrance for reusing the fourth special effect image. For the fourth special effect identifier and the related implementation of displaying the fourth special effect image on the first display image, refer to the description of the first special effect identifier in S201-S203 above, which is not repeated in this embodiment.
Based on the above method embodiments, the embodiments of this application provide a live streaming video processing apparatus and device, which are described below with reference to the drawings.
Referring to FIG. 5, which is a structural diagram of a live streaming video processing apparatus provided by an embodiment of this application, as shown in FIG. 5, the apparatus may include: a first display unit 501, a second display unit 502 and a third display unit 503.
The first display unit 501 is configured to display a live broadcast room interface, where the live broadcast room interface includes a first display image corresponding to a first user and a second display image corresponding to a second user;
the second display unit 502 is configured to, in response to the second display image displaying a first special effect image, display a first special effect identifier corresponding to the first special effect image on the live broadcast room interface;
the third display unit 503 is configured to, in response to a trigger operation by the first user on the first special effect identifier, display the first special effect image on the first display image.
In an embodiment of the present disclosure, the second display unit 502 is further configured to display the user identification of the second user on the first special effect identifier displayed on the live broadcast room interface.
In an embodiment of the present disclosure, the third display unit 503 is specifically configured to, in response to a click operation by the first user on the first special effect identifier, directly display the first special effect image on the first display image.
In an embodiment of the present disclosure, the third display unit 503 is specifically configured to display a special effects panel in response to a trigger operation by the first user on the first special effect identifier, the special effects panel including the first special effect identifier; and, in response to a trigger operation by the first user on the first special effect identifier on the special effects panel, display the first special effect image on the first display image.
In an embodiment of the present disclosure, the apparatus further includes: a fourth display unit;
the fourth display unit is configured to display the user identification of the second user on the first special effect identifier displayed on the special effects panel.
In an embodiment of the present disclosure, the live broadcast room interface also includes a third display image corresponding to a third user, the third display image includes the first special effect image, and the fourth display unit is further configured to display the user identification of the third user on the first special effect identifier displayed on the special effects panel.
In an embodiment of the present disclosure, the fourth display unit is specifically configured to superimpose the user identification of the third user and the user identification of the second user on the first special effect identifier, where the display area occupied by the second user's user identification on the first special effect identifier partially overlaps the display area occupied by the third user's user identification on the first special effect identifier.
In an embodiment of the present disclosure, the live broadcast room interface also includes a fourth display image corresponding to a fourth user, the fourth display image displays a second special effect image, and the second display unit 502 is specifically configured to display the first special effect identifier and a second special effect identifier corresponding to the second special effect image in a carousel on the live broadcast room interface.
In an embodiment of the present disclosure, in response to a trigger operation by the first user on the first special effect identifier, the apparatus further includes: a fifth display unit;
the fifth display unit is configured to display a special effects panel, the special effects panel displaying multiple special effect identifiers including the first special effect identifier;
the third display unit 503 is further configured to, in response to a selection operation triggered by the first user on a third special effect identifier displayed on the special effects panel, replace the first special effect image displayed on the first display image with a third special effect image corresponding to the third special effect identifier.
In an embodiment of the present disclosure, the live broadcast room interface also includes a fifth display image corresponding to a fifth user, a fourth special effect image is displayed on the fifth display image, and the apparatus further includes: an obtaining unit, a determining unit and a sixth display unit;
the obtaining unit is configured to obtain a first activation time corresponding to the second user's use of the first special effect image and a second activation time corresponding to the fifth user's use of the fourth special effect image;
the determining unit is configured to determine, according to the first activation time and the second activation time, the respective display positions of the first special effect identifier and a fourth special effect identifier on the special effects panel;
the sixth display unit is configured to display the first special effect identifier at the display position corresponding to the first special effect identifier and the fourth special effect identifier at the display position corresponding to the fourth special effect identifier.
In an embodiment of the present disclosure, the live broadcast room interface displays a first control, and the second display unit 502 is configured to, in response to the second display image displaying the first special effect image, display the first special effect identifier corresponding to the first special effect image on the first control, where the first control is used to apply enhancement processing to the first display image.
In an embodiment of the present disclosure, the apparatus further includes: a processing unit;
the processing unit is configured to, in response to the first anchor not triggering the first special effect identifier within a first preset duration, cancel the display of the first special effect identifier on the live broadcast room interface.
It should be noted that for the specific implementation of each unit in this embodiment, refer to the relevant descriptions in the above method embodiments. The division of units in the embodiments of this application is schematic and is only a logical function division; there may be other division methods in actual implementation. The functional units in the embodiments of this application may be integrated in one processing unit, each unit may exist physically alone, or two or more units may be integrated in one unit. For example, in the above embodiments, the processing unit and the sending unit may be the same unit or different units. The above integrated units may be implemented in the form of hardware or in the form of software functional units.
Referring to FIG. 6, which shows a schematic structural diagram of an electronic device 600 suitable for implementing the embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g., car navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; a storage device 608 including, for example, a magnetic tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the method of the embodiments of the present disclosure are performed.
The electronic device provided by the embodiments of the present disclosure belongs to the same inventive concept as the method provided by the above embodiments. For technical details not exhaustively described in this embodiment, refer to the above embodiments; this embodiment has the same beneficial effects as the above embodiments.
The embodiments of the present disclosure provide a computer storage medium on which a computer program is stored; when the program is executed by a processor, the method provided by the above embodiments is implemented.
It should be noted that the above computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some implementations, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (Hypertext Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the above method.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit/module does not, in some cases, constitute a limitation on the unit itself.
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to each other. For the system or apparatus disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively simple; for relevant details, refer to the description of the method part.
It should be understood that in this application, "at least one (item)" refers to one or more, and "plurality" refers to two or more. "And/or" is used to describe the relationship between associated objects, indicating that three relationships can exist; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural. The character "/" generally indicates that the related objects are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of a single item or a plurality of items. For example, at least one of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
It should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application will not be limited to the embodiments shown herein, but shall conform to the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

  1. A live streaming video processing method, the method comprising:
    displaying a live streaming room interface, the live streaming room interface comprising a first display image corresponding to a first user and a second display image corresponding to a second user;
    in response to a first special effect image being displayed on the second display image, displaying, on the live streaming room interface, a first special effect identifier corresponding to the first special effect image;
    in response to a trigger operation by the first user on the first special effect identifier, displaying the first special effect image on the first display image.
  2. The method according to claim 1, further comprising:
    displaying a user identifier of the second user on the first special effect identifier displayed on the live streaming room interface.
  3. The method according to claim 1, wherein displaying the first special effect image on the first display image in response to the trigger operation by the first user on the first special effect identifier comprises:
    in response to a click operation by the first user on the first special effect identifier, directly displaying the first special effect image on the first display image.
  4. The method according to claim 1, wherein displaying the first special effect image on the first display image in response to the trigger operation by the first user on the first special effect identifier comprises:
    in response to the trigger operation by the first user on the first special effect identifier, displaying a special effect panel, the special effect panel comprising the first special effect identifier;
    in response to a trigger operation by the first user on the first special effect identifier on the special effect panel, displaying the first special effect image on the first display image.
  5. The method according to claim 4, further comprising:
    displaying a user identifier of the second user on the first special effect identifier displayed on the special effect panel.
  6. The method according to claim 5, wherein the live streaming room interface further comprises a third display image corresponding to a third user, the third display image comprising the first special effect image, and the method further comprises:
    displaying a user identifier of the third user on the first special effect identifier displayed on the special effect panel.
  7. The method according to claim 6, wherein displaying the user identifier of the third user on the first special effect identifier displayed on the special effect panel comprises:
    displaying, in an overlaid manner on the first special effect identifier, the user identifier of the third user and the user identifier of the second user, wherein a display area occupied by the user identifier of the second user on the first special effect identifier partially overlaps a display area occupied by the user identifier of the third user on the first special effect identifier.
  8. The method according to claim 1, wherein the live streaming room interface further comprises a fourth display image corresponding to a fourth user, a second special effect image being displayed on the fourth display image, and displaying, on the live streaming room interface, the first special effect identifier corresponding to the first special effect image comprises:
    displaying, on the live streaming room interface in a carousel manner, the first special effect identifier and a second special effect identifier corresponding to the second special effect image.
  9. The method according to claim 1, wherein, in response to the trigger operation by the first user on the first special effect identifier, the method further comprises:
    displaying a special effect panel, the special effect panel displaying a plurality of special effect identifiers, the plurality of special effect identifiers comprising the first special effect identifier;
    in response to a selection operation triggered by the first user on a third special effect identifier displayed on the special effect panel, replacing the first special effect image displayed on the first display image with a third special effect image corresponding to the third special effect identifier.
  10. The method according to claim 1, wherein the live streaming room interface further comprises a fifth display image corresponding to a fifth user, a fourth special effect image being displayed on the fifth display image, and the method further comprises:
    obtaining a first enabling time corresponding to use of the first special effect image by the second user and a second enabling time of use of the fourth special effect image by the fifth user;
    determining, according to the first enabling time and the second enabling time, respective display positions of the first special effect identifier and a fourth special effect identifier on a special effect panel;
    displaying the first special effect identifier at the display position corresponding to the first special effect identifier, and displaying the fourth special effect identifier at the display position corresponding to the fourth special effect identifier.
  11. The method according to claim 1, wherein a first control is displayed on the live streaming room interface, and displaying, on the live streaming room interface, the first special effect identifier corresponding to the first special effect image in response to the first special effect image being displayed on the second display image comprises:
    in response to the first special effect image being displayed on the second display image, displaying, on the first control, the first special effect identifier corresponding to the first special effect image, the first control comprising a special effect panel entry.
  12. The method according to any one of claims 1-11, further comprising:
    in response to the first user not triggering the first special effect identifier within a first preset duration, canceling display of the first special effect identifier on the live streaming room interface.
  13. A live streaming video processing apparatus, the apparatus comprising:
    a first display unit, configured to display a live streaming room interface, the live streaming room interface comprising a first display image corresponding to a first user and a second display image corresponding to a second user;
    a second display unit, configured to, in response to a first special effect image being displayed on the second display image, display, on the live streaming room interface, a first special effect identifier corresponding to the first special effect image;
    a third display unit, configured to, in response to a trigger operation by the first user on the first special effect identifier, display the first special effect image on the first display image.
  14. An electronic device, the device comprising: a processor and a memory;
    the memory being configured to store instructions or a computer program;
    the processor being configured to execute the instructions or the computer program in the memory, so as to cause the electronic device to perform the method according to any one of claims 1-12.
  15. A computer-readable storage medium having instructions stored therein, wherein, when the instructions are run on a device, the device is caused to perform the method according to any one of claims 1-12.
  16. A computer program product, comprising computer program instructions, wherein, when the computer program instructions are run on a computer, the computer is caused to perform the method according to any one of claims 1-12.
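The display flow recited in claims 1, 2, and 10 can be sketched as a small state model: one user's display image shows a special effect image, other users then see an identifier for that effect (tagged with the enabling user's identifier and ordered by enabling time), and triggering the identifier applies the same effect to the triggering user's own display image. This is an illustrative sketch only; every class, method, and field name below is a hypothetical assumption, not part of the patent or of any real live-streaming API.

```python
from dataclasses import dataclass, field

@dataclass
class EffectUse:
    user_id: str
    effect_id: str
    enabled_at: float  # "enabling time" used to order identifiers (claim 10)

@dataclass
class LiveRoom:
    # effect image currently shown on each user's display image
    active_effects: dict = field(default_factory=dict)  # user_id -> effect_id
    uses: list = field(default_factory=list)            # history of EffectUse

    def apply_effect(self, user_id: str, effect_id: str, now: float) -> None:
        """A user's display image starts displaying a special effect image."""
        self.active_effects[user_id] = effect_id
        self.uses.append(EffectUse(user_id, effect_id, now))

    def visible_identifiers(self, viewer_id: str) -> list:
        """Identifiers shown to `viewer_id` for effects other users display,
        ordered by enabling time and paired with the user identifier of the
        user who enabled each effect (claims 1, 2, 10)."""
        uses = sorted(
            (u for u in self.uses if u.user_id != viewer_id),
            key=lambda u: u.enabled_at,
        )
        return [(u.effect_id, u.user_id) for u in uses]

    def trigger_identifier(self, viewer_id: str, effect_id: str, now: float) -> None:
        """The viewer triggers an identifier: the same special effect image is
        then displayed on the viewer's own display image (claim 1)."""
        self.apply_effect(viewer_id, effect_id, now)

room = LiveRoom()
room.apply_effect("user_b", "sparkle", now=1.0)   # second user enables an effect
ids = room.visible_identifiers("user_a")          # first user sees its identifier
room.trigger_identifier("user_a", ids[0][0], now=2.0)
```

Under these assumptions, `ids` would contain `("sparkle", "user_b")` and, after the trigger, `"user_a"` would display the same effect; a real implementation would also expire untriggered identifiers after the preset duration of claim 12.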
PCT/CN2023/114440 2022-08-23 2023-08-23 Live streaming video processing method, apparatus, device, and medium WO2024041568A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211014041.7A CN115396716B (zh) 2022-08-23 2022-08-23 Live streaming video processing method, apparatus, device, and medium
CN202211014041.7 2022-08-23

Publications (1)

Publication Number Publication Date
WO2024041568A1 true WO2024041568A1 (zh) 2024-02-29

Family

ID=84120702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114440 WO2024041568A1 (zh) 2023-08-23 2024-02-29 Live streaming video processing method, apparatus, device, and medium

Country Status (3)

Country Link
US (1) US20240073488A1 (zh)
CN (1) CN115396716B (zh)
WO (1) WO2024041568A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396716B (zh) * 2022-08-23 2024-01-26 北京字跳网络技术有限公司 Live streaming video processing method, apparatus, device, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180234708A1 (en) * 2017-02-10 2018-08-16 Seerslab, Inc. Live streaming image generating method and apparatus, live streaming service providing method and apparatus, and live streaming system
CN110124310A (zh) * 2019-05-20 2019-08-16 网易(杭州)网络有限公司 Virtual prop information sharing method, apparatus, and device in a game
CN114025180A (zh) * 2021-09-30 2022-02-08 北京达佳互联信息技术有限公司 Game operation synchronization system, method, apparatus, device, and storage medium
CN114466213A (zh) * 2022-02-07 2022-05-10 腾讯科技(深圳)有限公司 Information synchronization method, apparatus, computer device, storage medium, and program product
CN114650440A (zh) * 2022-03-16 2022-06-21 广州方硅信息技术有限公司 Virtual prop sharing method, apparatus, computer device, and medium for a live streaming room
CN115396716A (zh) * 2022-08-23 2022-11-25 北京字跳网络技术有限公司 Live streaming video processing method, apparatus, device, and medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100681900B1 (ko) * 2005-02-24 2007-02-12 에스케이 텔레콤주식회사 Emotion expression animation service system and method for video calls, and mobile communication terminal therefor
CN106789543A (zh) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 Method and apparatus for sending emoticon images in a conversation
CN106303354B (zh) * 2016-08-18 2020-04-28 北京奇虎科技有限公司 Facial special effect recommendation method and electronic device
CN109391792B (zh) * 2017-08-03 2021-10-29 腾讯科技(深圳)有限公司 Video communication method, apparatus, terminal, and computer-readable storage medium
CN107770596A (zh) * 2017-09-25 2018-03-06 北京达佳互联信息技术有限公司 Special effect synchronization method, apparatus, and mobile terminal
CN109660570B (zh) * 2017-10-10 2021-09-07 武汉斗鱼网络科技有限公司 Prop distribution method, storage medium, device, and system
CN109104586B (zh) * 2018-10-08 2021-05-07 北京小鱼在家科技有限公司 Special effect adding method, apparatus, video call device, and storage medium
CN109729374B (zh) * 2018-12-27 2022-02-18 广州方硅信息技术有限公司 Gift rewarding method, mobile terminal, and computer storage medium
JP7094216B2 (ja) * 2018-12-28 2022-07-01 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live-distributing a video including an animation of a character object generated based on the movements of a distributing user
CN110944235B (zh) * 2019-11-22 2022-09-16 广州方硅信息技术有限公司 Live streaming interaction method, apparatus, system, electronic device, and storage medium
CN113038287B (zh) * 2019-12-09 2022-04-01 上海幻电信息科技有限公司 Method, apparatus, and computer device for implementing a multi-user live video service
CN112087652A (zh) * 2020-08-03 2020-12-15 北京达佳互联信息技术有限公司 Video production method, sharing method, apparatus, electronic device, and storage medium
CN113411624B (zh) * 2021-06-15 2023-03-21 北京达佳互联信息技术有限公司 Game live streaming interaction method, apparatus, electronic device, and storage medium
CN113542902B (zh) * 2021-07-13 2023-02-24 北京字跳网络技术有限公司 Video processing method, apparatus, electronic device, and storage medium
CN113573092B (zh) * 2021-07-30 2022-09-30 北京达佳互联信息技术有限公司 Live streaming data processing method, apparatus, electronic device, and storage medium
CN113727132A (zh) * 2021-08-31 2021-11-30 广州方硅信息技术有限公司 Virtual gift display method, server, storage medium, and computer device


Also Published As

Publication number Publication date
CN115396716B (zh) 2024-01-26
CN115396716A (zh) 2022-11-25
US20240073488A1 (en) 2024-02-29

Similar Documents

Publication Publication Date Title
WO2021004221A1 (zh) Special effect display processing method, apparatus, and electronic device
WO2022095957A1 (zh) Information display method, apparatus, device, and medium
WO2020133373A1 (zh) Video processing method, apparatus, electronic device, and computer-readable storage medium
WO2021196903A1 (zh) Video processing method, apparatus, readable medium, and electronic device
WO2023274354A1 (zh) Video sharing method, apparatus, device, and medium
WO2023051185A1 (zh) Image processing method, apparatus, electronic device, and storage medium
WO2021218518A1 (zh) Video processing method, apparatus, device, and medium
WO2015058623A1 (zh) Multimedia data sharing method and system, and electronic device
KR20140147329A (ko) Electronic device displaying a lock screen and control method thereof
EP4429252A1 (en) Live streaming preview method and apparatus, and device, program product and medium
WO2023072296A1 (zh) Multimedia information processing method, apparatus, electronic device, and storage medium
WO2024041568A1 (zh) Live streaming video processing method, apparatus, device, and medium
WO2023051294A1 (zh) Prop processing method, apparatus, device, and medium
WO2024008184A1 (zh) Information display method, apparatus, electronic device, and computer-readable medium
WO2023202460A1 (zh) Page display method, apparatus, electronic device, storage medium, and program product
CN109600656A (zh) Video ranking list display method, apparatus, terminal device, and storage medium
WO2023116479A1 (zh) Video publishing method, apparatus, electronic device, storage medium, and program product
WO2023169305A1 (zh) Special effect video generation method, apparatus, electronic device, and storage medium
WO2021135684A1 (zh) Live streaming room interaction method, apparatus, readable medium, and electronic device
CN114727146A (zh) Information processing method, apparatus, device, and storage medium
WO2024046386A1 (zh) Live streaming room access method, apparatus, device, and medium
US20230421857A1 (en) Video-based information displaying method and apparatus, device and medium
WO2024140503A1 (zh) Information display method, apparatus, device, and medium
US20230370686A1 (en) Information display method and apparatus, and device and medium
WO2020207083A1 (zh) Information sharing method, apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23856656

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: AU2023327991

Country of ref document: AU