CN116866678A - Virtual background generation method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN116866678A
CN116866678A
Authority
CN
China
Prior art keywords
target
virtual
background
virtual background
audience terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310843369.8A
Other languages
Chinese (zh)
Inventor
辛一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202310843369.8A
Publication of CN116866678A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H04N21/8153 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics comprising still images, e.g. texture, background image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The disclosure belongs to the technical field of computers, and relates to a virtual background generation method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring, through a game server, game scene data corresponding to the target game being streamed in the current live room, and generating an initial virtual background based on the game scene data; acquiring, from the game server, object scene data corresponding to a virtual object according to a virtual object delivery operation sent by the game server; and generating a target virtual background in the live room based on the initial virtual background and the object scene data, the target virtual background comprising a virtual object interaction model corresponding to the virtual object. Because the virtual object interaction model is included in the target virtual background, a user can interact with it through the audience terminal corresponding to the live room, which improves the interaction effect between the user and the virtual live background.

Description

Virtual background generation method and device, electronic equipment and readable storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a virtual background generation method, a virtual background generation device, a computer readable storage medium and an electronic device.
Background
Today, watching webcasts has become a very popular form of entertainment, and with the development of the virtual industry, virtual live broadcasts are enjoyed by many viewers.
In the prior art, when the content of a virtual live broadcast is a game, the background of the virtual live broadcast can be a scene from the game in which the anchor is participating. However, viewers can only watch that game scene and cannot interact with the content in it, which reduces the interaction effect between viewers and the virtual live background and, in turn, degrades the viewers' experience in the live room.
In view of this, there is a need in the art to develop a new virtual background generation method and apparatus.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides a virtual background generation method, a virtual background generation device, a computer readable storage medium and an electronic device, so as to overcome, at least to a certain extent, the problem in the related art that the interaction effect between a viewer and a virtual live background is reduced.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of an embodiment of the present invention, there is provided a virtual background generation method, including: acquiring, through a game server, game scene data corresponding to the target game being streamed in the current live room, and generating an initial virtual background based on the game scene data; acquiring, from the game server, object scene data corresponding to a virtual object according to a virtual object delivery operation sent by the game server; and generating a target virtual background in the current live room based on the initial virtual background and the object scene data; wherein the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
According to a second aspect of an embodiment of the present invention, there is provided a virtual background generating apparatus, the apparatus including: a first acquisition module configured to acquire, through a game server, game scene data corresponding to the target game being streamed in the current live room, and to generate an initial virtual background based on the game scene data; a second acquisition module configured to acquire, from the game server, object scene data corresponding to a virtual object according to a virtual object delivery operation sent by the game server; and a generation module configured to generate a target virtual background in the current live room based on the initial virtual background and the object scene data; wherein the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
According to a third aspect of an embodiment of the present invention, there is provided an electronic device, including: a processor and a memory; wherein the memory stores computer readable instructions which, when executed by the processor, implement the virtual background generation method of any of the above exemplary embodiments.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual background generation method in any of the above-described exemplary embodiments.
As can be seen from the above technical solutions, the virtual background generating method, the virtual background generating device, the computer-readable storage medium, and the electronic device according to the exemplary embodiments of the present invention have at least the following advantages and positive effects:
In the method and device provided by the exemplary embodiments of the present disclosure, on the one hand, object scene data corresponding to the virtual object is obtained from the game server according to the virtual object delivery operation sent by the game server, which facilitates the subsequent generation of a target virtual background that includes a virtual object interaction model; on the other hand, because the target virtual background includes the virtual object interaction model, the audience terminal corresponding to the current live room can interact with that model, improving the interaction effect between the user and the target virtual background; and because the virtual object interaction model is a model that exists in the target game, the interaction effect between the user and the target game is improved as well.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flowchart of a virtual background generation method in an embodiment of the present disclosure;
fig. 2 schematically illustrates a flowchart after an initial virtual background is generated based on game scene data in a virtual background generation method according to an embodiment of the present disclosure;
fig. 3 schematically illustrates a flowchart of adjusting a background view angle of an initial virtual background in which a manipulation object model is placed in a virtual background generation method in an embodiment of the present disclosure;
fig. 4 schematically illustrates a flow chart of moving an initial virtual background in which a manipulation object model is placed in a virtual background generation method in an embodiment of the present disclosure;
Fig. 5 schematically illustrates a flowchart of generating a target virtual background in a current live room in a virtual background generating method in an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of determining a background composition relationship between an interactive virtual background and an initial virtual background in a virtual background generation method in an embodiment of the disclosure;
fig. 7 schematically illustrates a flowchart of combining an initial virtual background and an interactive virtual background in a virtual background generation method in an embodiment of the disclosure;
fig. 8 schematically illustrates a schematic diagram of a target virtual background in which a background combination relationship is a background nesting relationship in a virtual background generation method in an embodiment of the present disclosure;
fig. 9 schematically illustrates a flowchart of a target audience terminal executing an interaction event in a target virtual background in a virtual background generating method in an embodiment of the disclosure;
fig. 10 schematically illustrates a flowchart after a target virtual background in a current live room is generated in a virtual background generation method in an embodiment of the present disclosure;
fig. 11 schematically illustrates a flowchart of generating an interaction control in a target virtual background in a virtual background generating method in an embodiment of the disclosure;
FIG. 12 schematically illustrates a flow chart of placing a virtual prop at a target location of a target virtual background in a virtual background generation method in an embodiment of the disclosure;
FIG. 13 schematically illustrates a flow chart after a virtual prop is placed at a target location of a target virtual background in a virtual background generation method in an embodiment of the disclosure;
fig. 14 schematically illustrates a flowchart after a target virtual background in a current live room is generated in a virtual background generation method in an embodiment of the present disclosure;
fig. 15 schematically illustrates a flowchart after a target virtual background in a current live room is generated in a virtual background generation method in an embodiment of the present disclosure;
FIG. 16 schematically illustrates an apparatus for a virtual background generation method in an embodiment of the disclosure;
fig. 17 schematically illustrates an electronic device for a virtual background generation method in an embodiment of the disclosure;
fig. 18 schematically illustrates a computer-readable storage medium for a virtual background generation method in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
Aiming at the problems in the related art, the present disclosure proposes a virtual background generation method. Fig. 1 shows a flow chart of a virtual background generation method, as shown in fig. 1, the virtual background generation method at least includes the following steps:
s110, obtaining game scene data corresponding to a target game live in a current live broadcasting room through a game server, and generating an initial virtual background based on the game scene data.
And S120, acquiring object scene data corresponding to the virtual object from the game server according to the virtual object throwing operation sent by the game server.
S130, generating a target virtual background in a current live broadcasting room based on the initial virtual background and object scene data; the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
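The three steps S110 to S130 can be sketched as a minimal server-side pipeline. This is an illustrative toy only: all names (`generate_initial_background`, `fetch_object_scene_data`, the dictionary layout of the scene data) are hypothetical and not part of the disclosed implementation.

```python
# Illustrative sketch of steps S110-S130; all names and data shapes are
# hypothetical assumptions, not the disclosed implementation.

def generate_initial_background(game_scene_data: dict) -> dict:
    """S110: build the initial virtual background from the game scene data."""
    return {"scene": game_scene_data, "models": []}

def fetch_object_scene_data(game_server: dict, object_id: str) -> dict:
    """S120: on a virtual-object delivery operation, fetch the delivered
    object's scene data from the game server."""
    return game_server["objects"][object_id]

def generate_target_background(initial_background: dict,
                               object_scene_data: dict) -> dict:
    """S130: combine both into the target virtual background, which now
    contains an interaction model for the delivered virtual object."""
    target = dict(initial_background)
    target["models"] = initial_background["models"] + [
        object_scene_data["interaction_model"]
    ]
    return target

# Example run with toy data
game_server = {"objects": {"npc_1": {"interaction_model": "npc_1_model"}}}
initial = generate_initial_background({"map": "forest"})
obj_data = fetch_object_scene_data(game_server, "npc_1")
target = generate_target_background(initial, obj_data)
```

The target background keeps the original scene data and simply gains the virtual object's interaction model, which is what later lets audience terminals interact with it.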
In the method and device provided by the exemplary embodiments of the present disclosure, on the one hand, object scene data corresponding to the virtual object is obtained from the game server according to the virtual object delivery operation sent by the game server, which facilitates the subsequent generation of a target virtual background that includes a virtual object interaction model; on the other hand, because the target virtual background includes the virtual object interaction model, the audience terminal corresponding to the live room can interact with that model, improving the interaction effect between the user and the target virtual background; and because the virtual object interaction model is a model that exists in the target game, the interaction effect between the user and the target game is improved as well.
The steps of the virtual background generation method are described in detail below.
In step S110, game scene data corresponding to the target game being streamed in the current live room is acquired through the game server, and an initial virtual background is generated based on the game scene data.
In embodiments of the present disclosure, an anchor may use a live-streaming terminal to stream a target game. During the live broadcast of the target game, there are a live terminal, a game terminal, a live server, a game server, and one or more audience terminals. It should be noted that the live terminal and the game terminal are usually the same terminal, but they may also be different terminals.
The anchor can create the current live room by logging in to the anchor terminal. When the anchor chooses to associate the room with the target game, the live server corresponding to the anchor terminal acquires game scene data about the target game from the game server corresponding to the game terminal, and synchronizes the game scene data to the anchor terminal, so as to display an initial virtual background corresponding to the game scene data in the current live room on the anchor terminal. The live view includes a game screen (the game scene data) and a virtual background.
Furthermore, when a user enters the current live room through an audience terminal, the live server synchronizes the game scene data to that audience terminal, so as to display the initial virtual background corresponding to the game scene data in the current live room on the audience terminal.
Live broadcasts can generally be classified into normal live broadcasts and virtual live broadcasts, and there are differences between the two. In a normal live broadcast, the background displayed in the current live room coincides with the scene in which the anchor is located; for example, if the anchor is in a room with pink walls, the background of the current live room may be those pink walls. In a virtual live broadcast, the background displayed in the current live room is independent of the scene in which the anchor is located, and the anchor typically arranges a piece of green cloth behind them. The pictures collected by the anchor terminal then include the anchor's portrait and the green-screen background, and the live server replaces the green-screen background with some virtual scene to realize the virtual live broadcast. For example, when the anchor is participating in the target game, the green-screen background may be replaced with a game scene corresponding to the target game.
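The green-screen replacement described above is, in essence, chroma keying: pixels that are "green enough" are swapped for the virtual background while the anchor's portrait is kept. A toy per-pixel sketch follows; the RGB threshold rule and the flat pixel-list representation are illustrative assumptions, not the disclosed algorithm.

```python
# Toy chroma-key sketch: replace "green enough" pixels of the captured
# frame with the virtual background. Thresholds and data layout are
# illustrative assumptions only.

def is_green(pixel: tuple) -> bool:
    """Crude green-screen test on an (r, g, b) pixel."""
    r, g, b = pixel
    return g > 150 and g > 2 * r and g > 2 * b

def composite(frame: list, virtual_background: list) -> list:
    """Keep portrait pixels, substitute background pixels where the
    frame shows the green screen."""
    return [
        bg if is_green(px) else px
        for px, bg in zip(frame, virtual_background)
    ]

frame = [(200, 30, 40), (10, 220, 15)]   # a portrait pixel, a green pixel
background = [(0, 0, 255), (0, 0, 255)]  # virtual-scene pixels
result = composite(frame, background)
```

A production keyer would work in a perceptual color space with soft edges and spill suppression; the point here is only the substitution step the live server performs.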
It should be noted that the anchor may use the same terminal both to participate in the target game and to run the virtual live broadcast for it; that is, the game terminal and the anchor terminal may be the same terminal. When the anchor enters the target game through a logged-in game account, part of the game scene is displayed on that terminal, and at the same time the game server sends the game scene data corresponding to the complete game scene to the live server, so that the live server constructs the initial virtual background from the game scene data. After the initial virtual background is constructed, it may be transmitted to the anchor terminal as well as to the audience terminals. At this point, the virtual background displayed in the current live room on the anchor terminal coincides with the part of the game scene displayed on the game terminal, as does the virtual background displayed on the audience terminals.
In the present exemplary embodiment, an initial virtual background is generated based on game scene data so as to generate a target virtual background for a subsequent operation from the initial virtual background and object scene data.
In an alternative embodiment, fig. 2 shows a schematic flow chart after generating an initial virtual background based on game scene data in a virtual background generating method, and as shown in fig. 2, the method at least includes the following steps: in step S210, the initial virtual background is transmitted to the viewer terminal to be displayed in the viewer terminal.
The number of audience terminals may be one or more, and this exemplary embodiment is not limited in this respect. After the initial virtual background is generated, it may be transmitted to all audience terminals, at which point the user corresponding to each audience terminal can see the initial virtual background through that terminal. That is, any one or more of the audience terminals may receive the initial virtual background sent by the live server.
In step S220, a manipulation object delivery operation sent by a target audience terminal is received, and the target manipulation object corresponding to the delivery operation and its drop position in the initial virtual background are determined; the target manipulation object is the object to be controlled by the target user corresponding to the target audience terminal.
After seeing the initial virtual background through the audience terminal, the user can choose whether or not to perform a manipulation object delivery operation. The manipulation object delivery operation refers to an operation for placing a target manipulation object into the initial virtual background.
The manipulation object delivery operation may be a touch operation acting on a delivery control, or a touch operation acting on any position in the initial virtual background, which is not particularly limited in this exemplary embodiment. The target manipulation object refers to a virtual object, such as a virtual NPC (non-player character), placed in the initial virtual background. It is worth noting that the user corresponding to the audience terminal may control the placed target manipulation object through the audience terminal.
After the target user chooses to perform the manipulation object delivery operation, the operation is sent to the live server through the target audience terminal. According to the received delivery operation, the live server determines the target manipulation object (that is, the object that the viewer wants to place in the initial virtual background and to control) and the drop position of the target manipulation object in the initial virtual background (that is, some position in the initial virtual background, for example, the position of a "forest" in it).
In step S230, based on the target manipulation object, a manipulation object model corresponding to the target manipulation object is placed at a drop position in the initial virtual background.
For example, if the target manipulation object is object A and the drop position is the position of the "forest" in the initial virtual background, then the live server places the manipulation object model corresponding to object A at that position.
In step S240, the initial virtual background in which the manipulation object model is placed is transmitted to the target audience terminal to be displayed in the target audience terminal.
After the live server places the manipulation object model at the target position in the initial virtual background, it transmits the initial virtual background in which the model is placed to the target audience terminal.
After receiving this initial virtual background, the target audience terminal displays it; at this point, the target user can see, through the target audience terminal, that the manipulation object model has been placed in the initial virtual background.
In this exemplary embodiment, the manipulation object delivery operation sent by the target audience terminal is received, and according to that operation the target manipulation object can be placed in the initial virtual background, so that the target user corresponding to the target audience terminal can subsequently adjust the initial virtual background by controlling the target manipulation object, thereby improving the interaction effect between the user and the initial virtual background.
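The placement flow of steps S220 and S230 can be sketched as a single server-side function that resolves a delivery operation into an object and a drop position, then records the placed model in the background. The function name and dictionary layout are hypothetical illustrations.

```python
# Sketch of steps S220-S230; names and data shapes are hypothetical.

def place_manipulation_object(initial_background: dict,
                              delivery_op: dict) -> dict:
    """Place the model for the delivered target manipulation object at the
    requested drop position inside the initial virtual background, without
    mutating the original background."""
    placed = dict(initial_background)
    placed["placed_models"] = dict(initial_background.get("placed_models", {}))
    placed["placed_models"][delivery_op["object_id"]] = delivery_op["position"]
    return placed

# Toy example: viewer delivers "object_a" at the "forest" position
background = {"scene": "forest"}
op = {"object_id": "object_a", "position": "forest"}
updated = place_manipulation_object(background, op)
```

Returning a copy rather than mutating in place mirrors the fact that the server sends an updated background to the target audience terminal (step S240) while other viewers may still hold the original.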
In an alternative embodiment, fig. 3 is a schematic flow chart of adjusting the background view angle of the initial virtual background in which the manipulation object model is placed in the virtual background generation method; as shown in fig. 3, the method at least includes the following steps: in step S310, an object view angle switching instruction for the manipulation object model, sent by the target audience terminal, is received, and the target view angle corresponding to the instruction is determined.
After the initial virtual background in which the manipulation object model is placed is transmitted to the target audience terminal, the target audience terminal displays it. At this point, the target user can switch the object view angle of the manipulation object model by controlling the model, for example switching it from the view angle toward the "forest" direction to the view angle toward the "pool" direction.
Specifically, the target user corresponding to the target audience terminal may perform the object view angle switching operation on a scene view control, or directly on the manipulation object model, so that the operation is sent from the target audience terminal to the live server; this exemplary embodiment is not particularly limited in this respect.
When the target user performs the view angle switching operation on the manipulation object model, the live server receives the object view angle switching instruction corresponding to that operation. After receiving the instruction, the live server determines that the object view angle of the manipulation object model needs to be switched to a target view angle (for example, the view angle toward the "pool" direction).
In step S320, the target viewing angle is synchronized to the target audience terminal so that the target audience terminal adjusts the background viewing angle of the initial virtual background in which the manipulation object model is placed according to the target viewing angle.
Since the object view angle of the manipulation object model is switched to the target view angle, after the live server determines the target view angle it needs to synchronize that view angle to the target audience terminal. When the target audience terminal receives the target view angle, it adjusts the background view angle of the initial virtual background in which the manipulation object model is placed accordingly, so as to ensure consistency between the object view angle of the manipulation object model and the background view angle of the initial virtual background. The background view angle is the display view angle of the initial virtual background. For example, if the determined target view angle is the view angle toward the "pool" direction, the live server synchronizes that view angle to the target audience terminal, which then adjusts the background view angle of the initial virtual background in which the manipulation object model is placed from the "forest" direction to the "pool" direction.
In the present exemplary embodiment, an object view angle switching instruction is received to adjust a background view angle of an initial virtual background in which a manipulation object model is placed based on a target view angle corresponding to the object view angle switching instruction, and an interactive effect between a viewer and the initial virtual background is achieved by manipulating the manipulation object model.
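The view-angle synchronization of steps S310 and S320 amounts to keeping two pieces of state in lockstep: the object view angle of the manipulation object model and the background view angle of the initial virtual background. A minimal sketch, with all names and the string-valued view angles as illustrative assumptions:

```python
# Sketch of steps S310-S320: the server resolves the target view angle
# and the audience terminal aligns the background view angle with it.
# Names and string-valued view angles are illustrative assumptions.

def handle_view_switch(instruction: dict, terminal_state: dict) -> dict:
    """Apply an object view angle switching instruction, keeping the
    object view angle and background view angle consistent."""
    target_view = instruction["target_view"]   # e.g. "pool"
    state = dict(terminal_state)
    state["object_view"] = target_view
    state["background_view"] = target_view     # keep both consistent
    return state

# Toy example: switch from the "forest" view to the "pool" view
state = {"object_view": "forest", "background_view": "forest"}
new_state = handle_view_switch({"target_view": "pool"}, state)
```

The invariant enforced here (object view angle equals background view angle) is exactly the consistency requirement stated above.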
In an alternative embodiment, fig. 4 is a schematic flow chart of moving the initial virtual background in which the manipulation object model is placed in the virtual background generation method; as shown in fig. 4, the method at least includes the following steps: in step S410, an object movement instruction for the manipulation object model, sent by the target audience terminal, is received, and the movement distance and movement direction corresponding to the instruction are determined.
The object movement instruction refers to an instruction for controlling the movement of the manipulation object model in the initial virtual background. Specifically, the object movement instruction may be generated by the target user corresponding to the target audience terminal acting on a movement control, or by that user acting directly on the manipulation object model, which is not limited in this exemplary embodiment.
After the live broadcast server receives the object movement instruction, a movement distance and a movement direction corresponding to the object movement instruction can be determined, wherein the movement direction refers to a direction for controlling the movement of the control object model in the initial virtual background, and the movement distance refers to a distance for controlling the movement of the control object model in the initial virtual background.
In step S420, the movement distance and the movement direction are synchronized to the target audience terminal, so that the target audience terminal moves the initial virtual background on which the manipulation object model is placed based on the movement distance and the movement direction.
For example, if the moving distance determined by the live broadcast server is 5 cm and the determined moving direction is straight ahead, the moving distance of 5 cm and the moving direction of straight ahead are synchronized to the target audience terminal. After receiving the moving distance of 5 cm and the moving direction of straight ahead, the target audience terminal moves the initial virtual background in which the manipulation object model is placed straight ahead by 5 cm.
In the present exemplary embodiment, an object movement instruction is received, so that a target audience terminal moves an initial virtual background in which a manipulation object model is placed according to a movement distance and a movement direction corresponding to a movement operation, and an interaction effect between an audience and the initial virtual background is achieved by means of the manipulation object model.
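Steps S410 and S420 can be sketched as follows; the direction encoding and function name are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch: the server determines a distance and direction from
# an object movement instruction, and the terminal moves the background
# anchor position accordingly. Direction names are assumed.

DIRECTIONS = {
    "ahead": (0.0, 1.0),
    "back": (0.0, -1.0),
    "left": (-1.0, 0.0),
    "right": (1.0, 0.0),
}

def move_background(position, direction_name, distance):
    """Return the new background anchor position after the move."""
    dx, dy = DIRECTIONS[direction_name]
    x, y = position
    return (x + dx * distance, y + dy * distance)

# Moving distance 5 cm, moving direction straight ahead:
print(move_background((0.0, 0.0), "ahead", 5.0))  # (0.0, 5.0)
```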
In step S120, object scene data corresponding to the virtual object is acquired from the game server according to the virtual object delivery operation transmitted by the game server.
In the process of participating in the target game, the host can interact with virtual objects existing in the target game through the selected game character, for example, controlling the game character to capture virtual creature objects existing in the target game, to gather plant objects existing in the target game, or to defeat other game characters existing in the target game (the other game characters belonging to a different game camp from the game character controlled by the host), and the like.
After interacting with the virtual object by manipulating the game character, the host may choose whether to deliver the virtual object into the virtual background of the current live room. If the host chooses to deliver the virtual object into the virtual background of the current live room, the game server receives the virtual object delivery operation sent by the game terminal and forwards the received virtual object delivery operation to the live broadcast server. When the live broadcast server receives the virtual object delivery operation, object scene data corresponding to the virtual object can be acquired from the game server.
The object scene data refers to data related to a virtual object, for example, object appearance data describing the appearance of the virtual object, and scene environment data describing the scene environment in which the virtual object is located.
In step S130, generating a target virtual background in the current live room based on the initial virtual background and the object scene data; the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
Wherein the target virtual background is generated from the initial virtual background and the object scene data, and refers to the virtual background finally displayed in the current live room. The target virtual background includes the virtual object interaction model (i.e., the model corresponding to the virtual object), the initial virtual background corresponding to the game scene data, and the game screen of the target game currently being played by the host. Viewers can interact with the virtual object interaction model through the audience terminals corresponding to the current live room.
In an alternative embodiment, fig. 5 shows a schematic flow chart of generating a target virtual background in a current live broadcast room in a virtual background generating method, where object scene data includes object appearance data corresponding to a virtual object and scene environment data, and as shown in fig. 5, the method at least includes the following steps: in step S510, generating an interactive virtual background based on the object appearance data and the scene environment data; the interactive virtual background comprises a virtual object interactive model corresponding to the object appearance data and an environment object corresponding to the scene environment data.
The object appearance data refers to data describing the appearance of the virtual object, and may be, for example, skin of the virtual object or model image of the virtual object. Scene environment data refers to data describing a game environment in which a virtual object is located, and may be, for example, weather data in the game environment in which the virtual object is located.
The virtual object interaction model corresponding to the virtual object can be generated based on the object appearance data, the game environment where the virtual object interaction model is located can be generated based on the scene environment data, and the interactive virtual background can be obtained according to the virtual object interaction model and the game environment.
For example, the object appearance data is data describing the virtual object "sprite", and the scene environment data describes the "flowers" beside the "sprite" and the virtual weather "light rain" in the target game. The interactive virtual background generated by the live broadcast server at this time thus includes a virtual object interaction model corresponding to "sprite", an environment object corresponding to "flowers", and an environment object corresponding to "light rain".
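Step S510 can be illustrated with the following sketch; the data shapes and the function name are assumptions for exposition only:

```python
# Illustrative sketch: build an interactive virtual background from object
# appearance data and scene environment data. The dictionary layout is an
# assumed representation, not the disclosed data format.

def generate_interactive_background(appearance_data, environment_data):
    # The interaction model is generated from the appearance data ...
    interaction_model = {"type": "virtual_object", "appearance": appearance_data}
    # ... and one environment object is generated per scene environment entry.
    environment_objects = [{"type": "environment", "name": name}
                           for name in environment_data]
    return {"interaction_model": interaction_model,
            "environment_objects": environment_objects}

bg = generate_interactive_background(
    appearance_data={"name": "sprite", "skin": "default"},
    environment_data=["flowers", "light rain"],
)
print(len(bg["environment_objects"]))  # 2
```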
In step S520, according to the virtual object throwing operation, a background combination relationship between the interactive virtual background and the initial virtual background is determined, and the initial virtual background and the interactive virtual background are combined based on the background combination relationship, so as to generate a target virtual background in the current live broadcasting room.
When the host delivers the virtual object in the target game, the target delivery position may or may not be determined at the same time (the target delivery position refers to a position in the initial virtual background). On this basis, the live broadcast server can determine the background combination relationship according to whether the target delivery position has been determined, and combine the interactive virtual background with the initial virtual background to generate the target virtual background to be displayed in the current live room.
In this exemplary embodiment, on the one hand, according to the object appearance data and the scene environment data, an interactive virtual background including a virtual object interaction model and an environment object may be generated, so that a viewer interacts with the virtual object interaction model through a viewer terminal; on the other hand, based on the background combination relation, the initial virtual background and the interactive virtual background are combined to generate a target virtual background in the current live broadcasting room, so that the anchor demand corresponding to different virtual object throwing operations is met.
In an alternative embodiment, fig. 6 is a schematic flow chart of determining a background combination relationship between an interactive virtual background and an initial virtual background in a virtual background generating method, and as shown in fig. 6, the method at least includes the following steps: in step S610, in response to the presence of the target delivery position corresponding to the virtual object delivery operation, determining a background combination relationship between the interactive virtual background and the initial virtual background as a background nesting relationship; the target delivery location is within the initial virtual background.
If a target delivery position corresponding to the virtual object delivery operation exists, this indicates that the anchor wants to nest the interactive virtual background at the target delivery position of the initial virtual background; therefore, the live broadcast server determines that the background combination relationship between the interactive virtual background and the initial virtual background is a background nesting relationship.
In step S620, in response to the absence of the target delivery position corresponding to the virtual object delivery operation, the background combination relationship between the interactive virtual background and the initial virtual background is determined as the background stitching relationship.
If the target delivery position does not exist, this indicates that the anchor does not need to nest the interactive virtual background at a particular position of the initial virtual background; therefore, the live broadcast server determines that the background combination relationship between the interactive virtual background and the initial virtual background is a background stitching relationship.
In this exemplary embodiment, the background combination relationship between the interactive virtual background and the initial virtual background is determined according to whether there is a target delivery position corresponding to the virtual object delivery operation, so that the anchor may adjust the background combination relationship according to the virtual object delivery operation, so as to control the finally generated target virtual background, and improve the interaction effect between the anchor and the target virtual background.
In an alternative embodiment, fig. 7 is a schematic flow chart of combining an initial virtual background and an interactive virtual background in a virtual background generating method, and as shown in fig. 7, the method at least includes the following steps: in step S710, in response to the context composition relationship being a context nesting relationship, the interactive virtual context is nested at the target placement location of the initial virtual context.
The background nesting relationship refers to that the interactive virtual background is nested in the initial virtual background, and the nested position is at the target throwing position.
Fig. 8 schematically illustrates a target virtual background in which the background combination relationship is a background nesting relationship in an embodiment of the disclosure. As shown in fig. 8, a terminal 810 is the target audience terminal, an object 820 is the portrait of the current anchor, a background 830 is the initial virtual background, a background 840 is the interactive virtual background, and a virtual object interaction model 841 is included in the interactive virtual background 840, where the interactive virtual background 840 is nested at a target delivery position 850 of the initial virtual background 830.
In step S720, in response to the background combination relationship being the stitching relationship, the interactive virtual background and the initial virtual background are stitched.
If the background combination relationship is a stitching relationship, the interactive virtual background and the initial virtual background are stitched, specifically, the interactive virtual background may be stitched on the left side of the initial virtual background, the interactive virtual background may be stitched on the right side of the initial virtual background, the interactive virtual background may be stitched on any side of the initial virtual background, and the interactive virtual background may be stitched on the periphery of the initial virtual background, which is not limited in this exemplary embodiment.
In the present exemplary embodiment, different processing is performed on the interactive virtual background and the initial virtual background according to different background combination relationships, so as to generate a target virtual background, thereby meeting different virtual object delivery requirements of the anchor.
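The decision in steps S610-S620 and the combination in steps S710-S720 can be sketched together as follows; all names and the stitching-side default are illustrative assumptions:

```python
# Illustrative sketch: the combination relationship is chosen by whether a
# target delivery position exists, and the backgrounds are then nested or
# stitched accordingly. Data shapes are assumed.

def determine_combination(target_position):
    # Presence of a target delivery position implies nesting (S610);
    # absence implies stitching (S620).
    return "nesting" if target_position is not None else "stitching"

def combine(initial_bg, interactive_bg, target_position=None):
    relation = determine_combination(target_position)
    if relation == "nesting":
        # Embed the interactive background at the target delivery position (S710).
        return {"base": initial_bg,
                "nested": {"background": interactive_bg,
                           "position": target_position}}
    # Otherwise stitch the interactive background onto one side (S720);
    # the side chosen here is an arbitrary assumption.
    return {"base": initial_bg, "stitched": interactive_bg, "side": "right"}

print(determine_combination((120, 80)))  # nesting
print(determine_combination(None))       # stitching
```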
In an alternative embodiment, fig. 9 is a schematic flow chart of a target audience terminal executing an interaction event in a target virtual background in a virtual background generating method, and as shown in fig. 9, the method at least includes the following steps: in step S910, an interaction instruction for the target virtual background sent by the target audience terminal is received, and an interaction event and an interaction object corresponding to the interaction instruction are determined.
When a user holding the target audience terminal interacts with the target virtual background, the target audience terminal sends an interaction instruction to the live broadcast server. After the live broadcast server receives the interaction instruction, it determines the interaction event carried in the interaction instruction. For example, the interaction instruction is issued when the user holding the target audience terminal clicks the umbrella control; when the live broadcast server receives this interaction instruction, the determined interaction event is an umbrella opening event.
In step S920, the interaction event is synchronized to the target audience terminal such that the target audience terminal performs the interaction event with respect to the virtual object interaction model in the target virtual background.
After the live broadcast server determines the interaction event, the interaction event is synchronized to the target audience terminal, and at the moment, the target audience terminal carries out the interaction event on the virtual object interaction model included in the target virtual background.
For example, after the umbrella opening event is synchronized to the target audience terminal, the target audience terminal opens the virtual prop "umbrella" for the virtual object interaction model.
In this exemplary embodiment, for the interaction instruction sent by the target audience terminal, an interaction event corresponding to the interaction instruction is determined and sent to the target audience terminal, so that the interaction event is performed in the target audience terminal with respect to the virtual object interaction model, achieving an interaction effect between the user and the virtual object interaction model and improving the user's experience.
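Steps S910-S920 can be sketched as follows, using the umbrella example above; the event mapping and class names are illustrative assumptions:

```python
# Illustrative sketch: the server maps an interaction instruction to an
# interaction event and synchronizes it to the terminal, which performs the
# event on the virtual object interaction model. Names are assumed.

EVENT_BY_CONTROL = {"umbrella_control": "open_umbrella"}

class TargetAudienceTerminal:
    def __init__(self):
        self.performed_events = []

    def perform(self, event):
        # Execute the event against the virtual object interaction model.
        self.performed_events.append(event)

def handle_interaction_instruction(terminal, control_id):
    event = EVENT_BY_CONTROL[control_id]  # e.g. clicking the umbrella control
    terminal.perform(event)               # synchronize the event to the terminal
    return event

t = TargetAudienceTerminal()
print(handle_interaction_instruction(t, "umbrella_control"))  # open_umbrella
```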
In an alternative embodiment, fig. 10 shows a schematic flow chart after generating the target virtual background in the current live room in the virtual background generating method, and as shown in fig. 10, the method at least includes the following steps: in step S1010, the target virtual background is transmitted to the target audience terminal to be displayed in the target audience terminal.
Wherein, after the target virtual background is generated, the target virtual background may be transmitted to the target audience terminal, at which time the target virtual background is displayed in the target audience terminal.
It should be noted that, after the target virtual background is sent to the target audience terminal, only a part of the target virtual background is displayed on the target audience terminal, depending on the size of the target audience terminal screen. Moreover, since the initial virtual background and the interactive virtual background may have different background combination relationships, the part of the target virtual background seen by the target audience through the target audience terminal may or may not include the virtual object interaction model.
In step S1020, generating an interaction control in the target virtual background according to the object moving operation for the target manipulation object sent by the target audience terminal, and synchronizing the target virtual background containing the interaction control to the target audience terminal; the target control object is controlled by the target user corresponding to the target audience terminal; the interaction control is used for interacting with the virtual object interaction model.
If the part of the target virtual background seen by the target user through the target audience terminal does not include the virtual object interaction model, the target user can perform a moving operation on the target control object (i.e., the object displayed in the target virtual background and controlled by the target user through the target audience terminal), controlling the target control object to move so as to search for the virtual object interaction model in the complete target virtual background.
When the target control object approaches the virtual object interaction model, the live broadcast server may generate an interaction control in the target virtual background. The interaction control is provided for the target user corresponding to the target audience terminal to touch. After the live broadcast server generates the interaction control in the target virtual background, it can synchronize the target virtual background containing the interaction control to the target audience terminal, and at this point the target user can see the interaction control on the target audience terminal.
In step S1030, according to the object interaction instruction corresponding to the interaction control sent by the target audience terminal, interaction behavior data corresponding to the object interaction instruction is determined.
After the target user sees the interactive control displayed on the target audience terminal, the interactive control can be touched. And after the target user touches the interaction control, the live broadcast server receives an object interaction instruction sent by the target audience terminal.
The interaction behavior data refers to data describing the interaction behavior. For example, if the virtual object interaction model is the virtual creature "sprite", the interaction behavior data may be data corresponding to a feeding behavior, a fighting behavior, or a capturing behavior, which is not particularly limited in this exemplary embodiment.
In step S1040, the interactive behavior data is synchronized to the target audience terminal, so that the target audience terminal controls the target control object to execute the interactive behavior corresponding to the interactive behavior data in the target virtual background according to the interactive behavior data, with respect to the virtual object interaction model.
For example, if the interaction behavior data is data corresponding to the feeding behavior, after the interaction behavior data is synchronized to the target audience terminal, the target control object is controlled to feed the virtual creature "sprite" in the target virtual background.
In the present exemplary embodiment, on the one hand, according to the moving operation of the target audience terminal with respect to the target manipulation object, an interaction control is generated in the target virtual background, so that the target control object is controlled through the interaction control to execute interaction behavior on the virtual object interaction model; on the other hand, the interaction effect between the audience and the target virtual background is improved.
In an alternative embodiment, fig. 11 shows a schematic flow chart of generating an interactive control in a target virtual background in a virtual background generating method, and as shown in fig. 11, the method at least includes the following steps: in step S1110, a target movement position corresponding to the target manipulation object is determined according to the object movement operation for the target manipulation object transmitted from the target audience terminal.
The object moving operation is an operation of controlling the target control object to move in the target virtual background. The target moving position is the position of the target control object in the target virtual background when the moving behavior terminates.
In step S1120, an object position of the virtual object interaction model in the target virtual background is determined, and a position distance between the target movement position and the object position is determined.
The object position is the position of the virtual object interaction model in the target virtual background. The position distance is the distance between the object position and the target movement position.
In step S1130, in response to the location distance being less than the preset distance threshold, an interaction control is generated in the target virtual background.
The preset distance threshold is a critical value for measuring the position distance. When the live broadcast server detects that the position distance is smaller than the preset distance threshold, this indicates that the target control object is very close to the virtual object interaction model; so that the target user can control the target control object to interact with the virtual object interaction model, an interaction control is generated in the target virtual background.
Correspondingly, if the position distance is greater than the preset distance threshold, no interaction control is generated in the target virtual background at this time, and the target user can continue to perform the object moving operation on the target control object so as to search for the virtual object interaction model in the complete target virtual background.
In the present exemplary embodiment, in response to the location distance being less than the preset distance threshold, an interaction control is generated in the target virtual background to perfect logic for generating the interaction control.
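Steps S1110-S1130 can be sketched as follows; the Euclidean distance metric and the threshold value are illustrative assumptions, since the disclosure does not fix either:

```python
# Illustrative sketch: compute the distance between the target moving
# position and the object position, and generate an interaction control
# only when it falls below a preset threshold. Metric and threshold assumed.

import math

def position_distance(target_position, object_position):
    (x1, y1), (x2, y2) = target_position, object_position
    return math.hypot(x2 - x1, y2 - y1)

def should_generate_control(target_position, object_position, threshold=10.0):
    # S1130: generate the control only below the preset distance threshold.
    return position_distance(target_position, object_position) < threshold

print(should_generate_control((0, 0), (3, 4)))    # True  (distance 5.0)
print(should_generate_control((0, 0), (30, 40)))  # False (distance 50.0)
```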
In an alternative embodiment, fig. 12 shows a schematic flow chart of placing a virtual prop at a target position of a target virtual background in a virtual background generating method, and as shown in fig. 12, the method at least includes the following steps: in step S1210, in response to the location distance being less than the preset distance threshold, a virtual prop corresponding to the environmental object is generated in the target virtual background.
When the live broadcast server detects that the position distance is smaller than the preset distance threshold, a virtual prop can be generated in the target virtual background. The virtual prop corresponds to the environmental object. For example, when the environmental object is "flower", a virtual prop such as a water kettle or a virtual prop such as a fertilizer may be generated, when the environmental object is "light rain", a virtual prop such as an umbrella or a virtual prop such as a hat may be generated, and when the environmental object is "sun", a virtual prop such as a sunglasses may be generated.
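The environment-object-to-prop correspondence described above can be sketched as a simple lookup; the mapping merely restates the examples given, and its table form is otherwise an assumption:

```python
# Illustrative sketch: generate candidate virtual props for an environment
# object. The entries restate the examples in the text; the lookup-table
# structure itself is an assumed representation.

PROPS_BY_ENVIRONMENT = {
    "flower": ["water kettle", "fertilizer"],
    "light rain": ["umbrella", "hat"],
    "sun": ["sunglasses", "hat", "umbrella"],
}

def generate_props(environment_object):
    # Unknown environment objects yield no props in this sketch.
    return PROPS_BY_ENVIRONMENT.get(environment_object, [])

print(generate_props("light rain"))  # ['umbrella', 'hat']
```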
In step S1220, the virtual prop is placed at the target position of the target virtual background according to the selection operation for the virtual prop sent by the target audience terminal; there is a positional mapping relationship between the target position and the object position.
Wherein the selecting operation is an operation of selecting the virtual prop, and after the selecting operation is performed on the virtual prop, the virtual prop may be placed at a certain position of the target virtual background (i.e., placed at the target position). It should be noted that there is a positional mapping relationship between the target position and the object position. For example, the target position may be a left side position of the virtual object interaction model, a right side position of the virtual object interaction model, and a position on any side of the virtual object interaction model, which is not particularly limited in the present exemplary embodiment.
In the present exemplary embodiment, when the position distance is smaller than the preset distance threshold, a virtual prop corresponding to the environment object can be generated in the target virtual background, and the target control object can then be controlled to interact with the virtual object interaction model through the virtual prop.
In an alternative embodiment, fig. 13 shows a schematic flow chart after the virtual prop is placed at the target position of the target virtual background in the virtual background generating method, and as shown in fig. 13, the method at least includes the following steps: in step S1310, the target virtual background with the virtual prop placed is transmitted to the target audience terminal to be displayed in the target audience terminal.
After the virtual prop is placed on the target virtual background, the target virtual background with the virtual prop placed is sent to the target audience terminal to be displayed on the target audience terminal, and at this time, a target user corresponding to the target audience terminal can see the virtual prop in the target audience terminal.
In step S1320, prop interaction data corresponding to the touch operation is determined according to the touch operation for the virtual prop sent by the target audience terminal.
If a touch operation for the virtual prop is received, prop interaction data corresponding to the touch operation can be determined; the prop interaction data is used to describe the prop interaction behavior to be executed with respect to the virtual object interaction model. For example, when the environment object is "light rain", an "umbrella" virtual prop may be generated; if a click operation for the "umbrella" virtual prop is received, the prop interaction data corresponding to the click operation may be determined to be the opening behavior corresponding to the "umbrella" virtual prop.
For example, when the environmental object is "sun", a virtual prop such as "sunglasses" may be generated, a virtual prop such as "hat" may be generated, and a virtual prop such as "umbrella" may be generated. If the long-press operation aiming at the virtual prop of the sunglasses is received, the virtual prop interaction data corresponding to the long-press operation can be determined to be the wearing behavior corresponding to the virtual prop of the sunglasses; if the long-press operation aiming at the virtual prop of the umbrella is received, the virtual prop interaction data corresponding to the long-press operation can be determined to be the opening behavior corresponding to the umbrella of the virtual prop; if the long-press operation aiming at the virtual prop of the hat is received, the virtual prop interaction data corresponding to the long-press operation can be determined to be the wearing behavior corresponding to the virtual prop of the hat.
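Step S1320 can be sketched as follows; the behavior mapping restates the examples above (open the umbrella, wear the sunglasses or hat), while the function name and handling of unrecognized operations are assumptions:

```python
# Illustrative sketch: resolve a touch operation on a virtual prop into
# prop interaction data. The behavior table restates the text's examples.

BEHAVIOR_BY_PROP = {
    "umbrella": "open",
    "sunglasses": "wear",
    "hat": "wear",
}

def prop_interaction_data(prop, operation):
    # In the examples, both click and long-press resolve to the prop's
    # interaction behavior; other operations are ignored in this sketch.
    if operation in ("click", "long-press"):
        return {"prop": prop, "behavior": BEHAVIOR_BY_PROP[prop]}
    return None

print(prop_interaction_data("umbrella", "long-press"))
# {'prop': 'umbrella', 'behavior': 'open'}
```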
In step S1330, prop interaction data is synchronized to the target audience terminal, so that the target audience terminal controls the target control object to execute prop interaction behavior corresponding to the prop interaction data in the target virtual background with respect to the virtual object interaction model.
The prop interaction behavior refers to the specific interaction behavior corresponding to the prop interaction data; for example, in the target virtual background, the target control object is controlled to execute the prop interaction behavior of opening the virtual prop "umbrella" for the virtual creature "sprite".
In this exemplary embodiment, according to the touch operation for the virtual prop, the live broadcast server may determine prop interaction data corresponding to the touch operation, and further synchronize the prop interaction data to the target audience terminal, so that the target audience terminal controls the target control object in the target virtual background to execute the prop interaction behavior corresponding to the prop interaction data, thereby improving the interaction effect between the target user and the target virtual background.
In an alternative embodiment, fig. 14 is a schematic flow chart of a virtual background generation method after generating a target virtual background in a current live room, and as shown in fig. 14, the method at least includes the following steps: in step S1410, the target virtual background is transmitted to the viewer terminal corresponding to the current live room to be displayed in the viewer terminal.
And sending the target virtual background to all audience terminals corresponding to the current live broadcasting room so as to display the target virtual background in all audience terminals.
In step S1420, an enlargement ratio corresponding to the enlargement operation is determined according to the enlargement operation for the target virtual background transmitted by the target audience terminal.
At this time, the target user may perform an amplifying operation on the target virtual background through the target audience terminal, where the amplifying operation refers to an operation of amplifying a display scale of the target virtual background. By the enlargement operation, an enlargement ratio can be determined, for example, the enlargement ratio is determined to be 120%.
In step S1430, the display scale of the target virtual background is enlarged based on the enlargement scale, and the target virtual background enlarged in the display scale is synchronized to the target audience terminal to display the target virtual background enlarged in the display scale in the target audience terminal.
The display proportion of the target virtual background is enlarged to an enlarged proportion, and the target virtual background with the enlarged display proportion is synchronized to the target audience terminal, so that the target virtual background with the enlarged display proportion is displayed in the target audience terminal, and the enlarging requirement of a target user on the target virtual background is met.
For example, the display scale of the target virtual background is enlarged to 120%, and the target virtual background having the display scale of 120% is synchronized to the target audience terminal to display the target virtual background having the display scale of 120% in the target audience terminal.
In the present exemplary embodiment, the display scale of the target virtual background is enlarged to the enlargement scale according to the enlargement operation transmitted from the target audience terminal, thereby satisfying the enlargement requirement of the target user for the target virtual background.
In an alternative embodiment, fig. 15 shows a schematic flow chart after generating the target virtual background in the current live room in the virtual background generating method, as shown in fig. 15, and the method at least includes the following steps: in step S1510, a reduction scale corresponding to the reduction operation is determined according to the reduction operation for the target virtual background transmitted by the target audience terminal.
The target user may also reduce the display scale of the target virtual background. The reduction operation is an operation that reduces the display scale of the target virtual background, and the reduction ratio corresponds to the target user's reduction requirement for the target virtual background. For example, the reduction ratio corresponding to the reduction operation is 80%.
In step S1520, the display scale of the target virtual background is reduced based on the reduction ratio, and an object identifier corresponding to the virtual object interaction model is generated.
The object identifier is used to identify the virtual object interaction model; it may be a picture corresponding to the virtual object interaction model or a text corresponding to the virtual object interaction model, which is not particularly limited in this exemplary embodiment.
For example, if the reduction ratio is 80%, the display scale of the target virtual background is reduced to 80%.
In step S1530, the target virtual background with the reduced display scale and the object identifier are synchronized to the target audience terminal, so that the target virtual background with the reduced display scale is displayed in the target audience terminal, and the object identifier corresponding to the virtual object interaction model is displayed in the target virtual background with the reduced display scale.
The display scale of the target virtual background is reduced to the reduction ratio, and the target virtual background with the reduced display scale is synchronized to the target audience terminal so that it is displayed there, thereby satisfying the target user's reduction requirement for the target virtual background. The object identifier is also synchronized to the target audience terminal, so that the object identifier corresponding to the virtual object interaction model is displayed in the target virtual background with the reduced display scale.
In the present exemplary embodiment, on the one hand, the display scale of the target virtual background is reduced to the reduction ratio according to the reduction operation sent by the target audience terminal, satisfying the target user's reduction requirement for the target virtual background; on the other hand, the object identifier is placed in the target virtual background with the reduced display scale, so that even after the target virtual background is reduced, the position of the virtual object interaction model in the target virtual background can still be known, and the target control object can still be controlled to interact with the virtual object interaction model.
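The reduce flow of steps S1510 to S1530 can be sketched in the same style. The object identifier attached here is a hypothetical text label, one of the two forms (picture or text) the embodiment allows; the data shapes are illustrative assumptions, not the disclosed implementation.

```python
def reduce_background(background, reduce_ratio, model_name, model_position):
    """Reduce the display scale of the target virtual background and generate an
    object identifier marking the virtual object interaction model, so the model
    stays locatable in the shrunken background."""
    reduced = dict(background, display_scale=reduce_ratio)
    # The identifier may be a picture or a text; a text label is used in this sketch.
    reduced["object_identifier"] = {"label": model_name, "position": model_position}
    return reduced


reduced = reduce_background(
    {"name": "target", "display_scale": 1.0}, 0.8, "sprite", (120, 45)
)
# the reduced background carries a label showing where the interaction model sits
```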
In the method and apparatus provided by the exemplary embodiments of the present disclosure, on the one hand, object scene data corresponding to the virtual object is acquired from the game server according to the virtual object delivery operation sent by the game server, facilitating the subsequent generation of a target virtual background that includes the virtual object interaction model from the object scene data; on the other hand, because the target virtual background includes the virtual object interaction model and the audience terminals corresponding to the live room can interact with that model, the interaction effect between the user and the target virtual background is improved, and because the virtual object interaction model is a model that exists in the target game, the interaction effect between the user and the target game is also improved.
The virtual background generation method in the embodiment of the present disclosure is described in detail below in connection with an application scenario.
When the current live room conducts a virtual live broadcast of the target game, the live server acquires, from the game server, game scene data corresponding to the complete scene of the target game, and generates an initial virtual background from the game scene data.
According to the virtual object delivery operation for the virtual creature "sprite" sent by the game server, object scene data corresponding to the virtual creature "sprite" is acquired from the game server.
A target virtual background is generated in the current live room according to the object scene data and the initial virtual background. At this time, the target virtual background includes a virtual object interaction model corresponding to the virtual creature "sprite", through which users can interact with the virtual creature "sprite" via their audience terminals.
In this application scenario, on the one hand, object scene data corresponding to the virtual object is acquired from the game server according to the virtual object delivery operation sent by the game server, so that the target virtual background containing the virtual object interaction model is generated from the object scene data; on the other hand, because the target virtual background includes the virtual object interaction model and the audience terminals corresponding to the live room can interact with that model, the interaction effect between the user and the target virtual background is improved, and because the virtual object interaction model is a model that exists in the target game, the interaction effect between the user and the target game is also improved.
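Under the same hypothetical data shapes, the application scenario above can be sketched end to end. `GameServer` and its methods are illustrative stand-ins for the real game server interface, which the disclosure does not specify.

```python
class GameServer:
    """Hypothetical game server facade returning scene and object scene data."""
    def get_scene_data(self):
        # Full-scene data for the target game (illustrative fields).
        return {"terrain": "forest", "lighting": "day"}

    def get_object_scene_data(self, virtual_object):
        # Object scene data for a delivered virtual object, e.g. the creature "sprite".
        return {"object": virtual_object, "model": f"{virtual_object}_interaction_model"}


def generate_target_background(game_server, virtual_object):
    """Build the initial virtual background from full-scene data, then merge in
    the delivered object's scene data to produce the target virtual background."""
    initial = {"scene": game_server.get_scene_data()}  # initial virtual background
    object_data = game_server.get_object_scene_data(virtual_object)
    # The target background embeds the interaction model so audience terminals
    # can interact with the virtual creature through it.
    return dict(initial, interaction_model=object_data["model"])


target = generate_target_background(GameServer(), "sprite")
```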
In addition, in an exemplary embodiment of the present disclosure, a virtual background generation apparatus is also provided. Fig. 16 shows a schematic structural diagram of the virtual background generation apparatus. As shown in fig. 16, a virtual background generation apparatus 1600 may include: a first acquisition module 1610, a second acquisition module 1620, and a first generation module 1630. Wherein:
the first acquisition module 1610 is configured to acquire, through a game server, game scene data corresponding to a target game live in a current live room, and to generate an initial virtual background based on the game scene data; the second acquisition module 1620 is configured to acquire object scene data corresponding to the virtual object from the game server according to the virtual object delivery operation sent by the game server; and the first generation module 1630 is configured to generate a target virtual background in the current live room based on the initial virtual background and the object scene data, where the target virtual background includes a virtual object interaction model corresponding to the virtual object.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, after generating the initial virtual background based on the game scene data, the apparatus further includes: a first transmission module configured to transmit the initial virtual background to the viewer terminal for display in the viewer terminal; a first receiving module configured to receive a control object delivery operation sent by the target audience terminal, and to determine the target control object corresponding to the control object delivery operation and a delivery position in the initial virtual background, where the target control object is used by the target user corresponding to the target audience terminal for control; a first placement module configured to place a control object model corresponding to the target control object at the delivery position in the initial virtual background; and a second transmission module configured to transmit the initial virtual background in which the control object model is placed to the target audience terminal to be displayed in the target audience terminal.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the apparatus further includes: the second receiving module is configured to receive an object view angle switching instruction which is sent by the target audience terminal and aims at controlling the object model, and determine a target view angle corresponding to the object view angle switching instruction; and the adjusting module synchronizes the target visual angle to the target audience terminal so that the target audience terminal adjusts the background visual angle of the initial virtual background placed with the control object model according to the target visual angle.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the apparatus further includes: the third receiving module is configured to receive an object moving instruction which is sent by the target audience terminal and is aimed at controlling the object model, and determine a moving distance and a moving direction corresponding to the object moving instruction; and the moving module is configured to synchronize the moving distance and the moving direction to the target audience terminal so that the target audience terminal moves the initial virtual background on which the control object model is placed based on the moving distance and the moving direction.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the object scene data includes object appearance data and scene environment data corresponding to the virtual object. For generating the target virtual background in the current live room based on the initial virtual background and the object scene data, the apparatus includes: a second generation module configured to generate an interactive virtual background based on the object appearance data and the scene environment data, where the interactive virtual background includes a virtual object interaction model corresponding to the object appearance data and an environment object corresponding to the scene environment data; and a third generation module configured to determine a background combination relationship between the interactive virtual background and the initial virtual background according to the virtual object delivery operation, and to combine the initial virtual background and the interactive virtual background based on the background combination relationship to generate the target virtual background in the current live room.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, for determining the background combination relationship between the interactive virtual background and the initial virtual background according to the virtual object delivery operation, the apparatus includes: a background nesting module configured to determine the background combination relationship between the interactive virtual background and the initial virtual background to be a background nesting relationship in response to the existence of a target delivery position corresponding to the virtual object delivery operation, where the target delivery position is within the initial virtual background; and a background splicing module configured to determine the background combination relationship between the interactive virtual background and the initial virtual background to be a background splicing relationship in response to the absence of a target delivery position corresponding to the virtual object delivery operation.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, for combining the initial virtual background and the interactive virtual background based on the background combination relationship, the apparatus includes: a nesting module configured to nest the interactive virtual background at the target delivery position of the initial virtual background in response to the background combination relationship being a background nesting relationship; and a splicing module configured to splice the interactive virtual background and the initial virtual background in response to the background combination relationship being a background splicing relationship.
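The nesting-versus-splicing decision and the corresponding combination can be sketched as follows. The data shapes are hypothetical: presence of a target delivery position selects the nesting relationship, absence selects the splicing relationship, matching the two modules above.

```python
def combine_backgrounds(initial, interactive, target_position=None):
    """Combine the initial and interactive virtual backgrounds according to the
    background combination relationship implied by the delivery operation."""
    if target_position is not None:
        # Background nesting relationship: embed the interactive background
        # at the target delivery position inside the initial background.
        combined = dict(initial)
        combined.setdefault("nested", []).append(
            {"background": interactive, "position": target_position}
        )
        combined["relationship"] = "nesting"
    else:
        # Background splicing relationship: join the two backgrounds side by side.
        combined = {"spliced": [initial, interactive], "relationship": "splicing"}
    return combined


nested = combine_backgrounds({"id": "init"}, {"id": "inter"}, target_position=(10, 20))
spliced = combine_backgrounds({"id": "init"}, {"id": "inter"})
```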
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the apparatus further includes: the interactive instruction module is configured to receive an interactive instruction aiming at a target virtual background and sent by a target audience terminal, and determine an interactive event corresponding to the interactive instruction; and an execution interaction module configured to synchronize the interaction event to the target audience terminal such that the target audience terminal executes the interaction event against the virtual object interaction model in the target virtual context.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, after generating the target virtual background in the current live room, the apparatus further includes: a third transmission module configured to transmit the target virtual background to the target audience terminal to be displayed in the target audience terminal; a fourth generation module configured to generate an interaction control in the target virtual background according to the object movement operation for the target control object sent by the target audience terminal, and to synchronize the target virtual background containing the interaction control to the target audience terminal, where the target control object is used by the target user corresponding to the target audience terminal for control, and the interaction control is used for interacting with the virtual object interaction model; a first determining module configured to determine interaction behavior data corresponding to the object interaction operation according to the object interaction instruction corresponding to the interaction control sent by the target audience terminal; and a first interaction module configured to synchronize the interaction behavior data to the target audience terminal, so that the target audience terminal controls the target control object to execute, for the virtual object interaction model in the target virtual background, the interaction behavior corresponding to the interaction behavior data.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, for generating the interaction control in the target virtual background according to the object movement operation for the target control object sent by the target audience terminal, the apparatus includes: a second determining module configured to determine a target movement position corresponding to the object movement operation according to the object movement operation for the target control object sent by the target audience terminal; a position distance module configured to determine the object position of the virtual object interaction model in the target virtual background, and to determine the position distance between the target movement position and the object position; and a fifth generation module configured to generate the interaction control in the target virtual background in response to the position distance being less than a preset distance threshold.
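The distance-gated generation of the interaction control can be sketched as follows. The Euclidean metric and the threshold value are illustrative assumptions, since the disclosure fixes neither the distance measure nor the preset threshold.

```python
import math


def maybe_generate_interaction_control(move_pos, object_pos, threshold=3.0):
    """Return an interaction control if the target movement position is within
    the preset distance threshold of the virtual object interaction model's
    object position; otherwise return None."""
    distance = math.dist(move_pos, object_pos)
    if distance < threshold:
        return {"type": "interaction_control", "target": object_pos}
    return None


near = maybe_generate_interaction_control((1.0, 1.0), (2.0, 2.0))   # distance ~1.41, control generated
far = maybe_generate_interaction_control((0.0, 0.0), (10.0, 0.0))   # distance 10, no control
```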
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the apparatus further includes: a sixth generation module configured to generate a virtual prop corresponding to the environmental object in the target virtual background in response to the location distance being less than the preset distance threshold; the second placement module is configured to place the virtual prop at a target position of the target virtual background according to the selection operation for the virtual prop sent by the target audience terminal; there is a positional mapping relationship between the target position and the object position.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, after placing the virtual prop at the target position of the target virtual background, the apparatus further includes: a fourth transmission module configured to transmit the target virtual background in which the virtual prop is placed to the target audience terminal to be displayed in the target audience terminal; a third determining module configured to determine prop interaction data corresponding to the touch operation according to the touch operation for the virtual prop sent by the target audience terminal; and a second interaction module configured to synchronize the prop interaction data to the target audience terminal, so that the target audience terminal controls the target control object to execute, for the virtual object interaction model in the target virtual background, the prop interaction behavior corresponding to the prop interaction data.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, after generating the target virtual background in the current live room, the apparatus further includes: a fifth transmission module configured to transmit the target virtual background to the viewer terminal corresponding to the current live room to be displayed in the viewer terminal; a fourth determining module configured to determine an enlargement ratio corresponding to the enlargement operation according to the enlargement operation for the target virtual background sent by the target audience terminal; and an enlargement module configured to enlarge the display scale of the target virtual background based on the enlargement ratio, and to synchronize the target virtual background with the enlarged display scale to the target audience terminal so that it is displayed in the target audience terminal.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the apparatus further includes: a fifth determining module configured to determine a reduction ratio corresponding to the reduction operation according to the reduction operation for the target virtual background transmitted by the target audience terminal; the reduction module is configured to reduce the display proportion of the target virtual background based on the reduction proportion and generate an object identifier corresponding to the virtual object interaction model; and the display background module is configured to synchronize the target virtual background with the reduced display scale and the object identification to the target audience terminal so as to display the target virtual background with the reduced display scale in the target audience terminal and display the object identification corresponding to the virtual object interaction model in the target virtual background with the reduced display scale.
The details of the virtual background generation apparatus 1600 have been described in detail in the corresponding virtual background generation method, and are therefore not repeated here.
It should be noted that although several modules or units of the virtual background generating apparatus 1600 are mentioned in the above detailed description, such partitioning is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1700 according to such an embodiment of the invention is described below with reference to fig. 17. The electronic device 1700 shown in fig. 17 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 17, the electronic device 1700 is in the form of a general purpose computing device. The components of electronic device 1700 may include, but are not limited to: the at least one processing unit 1710, the at least one storage unit 1720, a bus 1730 connecting different system components (including the storage unit 1720 and the processing unit 1710), and a display unit 1740.
Wherein the storage unit stores program code that is executable by the processing unit 1710, such that the processing unit 1710 performs the steps according to various exemplary embodiments of the present invention described in the above section of the "exemplary method" of the present specification.
The storage unit 1720 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 1721 and/or a cache memory unit 1722, and may further include a read-only memory (ROM) unit 1723.
The storage unit 1720 may also include a program/utility 1724 having a set (at least one) of program modules 1725, such program modules 1725 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a networking environment.
The bus 1730 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1700 may also communicate with one or more external devices 1770 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1700, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1750. Also, electronic device 1700 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, for example, the Internet, through network adapter 1760. As shown, network adapter 1760 communicates with other modules of electronic device 1700 via bus 1730. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1700, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Arrays of Independent Disks, disk array) systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
The processor in the electronic device may implement the following operations in the virtual background generation method by executing machine executable instructions:
acquiring game scene data corresponding to a target game live in a current live broadcasting room through a game server, and generating an initial virtual background based on the game scene data; according to the virtual object throwing operation sent by the game server, obtaining object scene data corresponding to the virtual object from the game server; generating a target virtual background in the current live broadcasting room based on the initial virtual background and the object scene data; the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
Transmitting the initial virtual background to the viewer terminal to be displayed in the viewer terminal; receiving a control object delivery operation sent by the target audience terminal, and determining the target control object corresponding to the control object delivery operation and a delivery position in the initial virtual background, where the target control object is used by the target user corresponding to the target audience terminal for control; placing a control object model corresponding to the target control object at the delivery position in the initial virtual background; and transmitting the initial virtual background in which the control object model is placed to the target audience terminal to be displayed in the target audience terminal.
Receiving an object view angle switching instruction which is sent by a target audience terminal and aims at controlling an object model, and determining a target view angle corresponding to the object view angle switching instruction; and synchronizing the target viewing angle to the target audience terminal so that the target audience terminal adjusts the background viewing angle of the initial virtual background placed with the control object model according to the target viewing angle.
Receiving an object moving instruction which is sent by a target audience terminal and aims at controlling an object model, and determining a moving distance and a moving direction corresponding to the object moving instruction; and synchronizing the moving distance and the moving direction to the target audience terminal so that the target audience terminal moves the initial virtual background on which the control object model is placed based on the moving distance and the moving direction.
Generating an interactive virtual background based on the object appearance data and the scene environment data; the interactive virtual background comprises a virtual object interactive model corresponding to the object appearance data and an environment object corresponding to the scene environment data; and determining a background combination relation between the interactive virtual background and the initial virtual background according to the virtual object throwing operation, and combining the initial virtual background and the interactive virtual background based on the background combination relation to generate a target virtual background in the current live broadcasting room.
Determining a background combination relation between the interactive virtual background and the initial virtual background as a background nesting relation in response to the existence of a target throwing position corresponding to the virtual object throwing operation; the target delivery position is within the initial virtual background; and determining a background combination relation between the interactive virtual background and the initial virtual background as a background splicing relation in response to the absence of a target throwing position corresponding to the virtual object throwing operation.
Responding to the background combination relationship as a background nesting relationship, and nesting the interactive virtual background at a target throwing position of the initial virtual background; and responding to the background combination relation as a splicing relation, and splicing the interactive virtual background and the initial virtual background.
Receiving an interaction instruction aiming at a target virtual background and sent by a target audience terminal, and determining an interaction event corresponding to the interaction instruction; the interaction event is synchronized to the target audience terminal such that the target audience terminal performs the interaction event against the virtual object interaction model in the target virtual background.
Transmitting the target virtual background to the target audience terminal to be displayed in the target audience terminal; generating an interaction control in the target virtual background according to the object movement operation for the target control object sent by the target audience terminal, and synchronizing the target virtual background containing the interaction control to the target audience terminal, where the target control object is used by the target user corresponding to the target audience terminal for control, and the interaction control is used for interacting with the virtual object interaction model; determining interaction behavior data corresponding to the object interaction operation according to the object interaction instruction corresponding to the interaction control sent by the target audience terminal; and synchronizing the interaction behavior data to the target audience terminal, so that the target audience terminal controls the target control object to execute, for the virtual object interaction model in the target virtual background, the interaction behavior corresponding to the interaction behavior data.
Determining a target movement position corresponding to the object movement operation according to the object movement operation for the target control object sent by the target audience terminal; determining the object position of the virtual object interaction model in the target virtual background, and determining the position distance between the target movement position and the object position; and generating the interaction control in the target virtual background in response to the position distance being less than the preset distance threshold.
Generating a virtual prop corresponding to the environment object in the target virtual background in response to the position distance being smaller than a preset distance threshold; according to the selection operation aiming at the virtual prop sent by the target audience terminal, placing the virtual prop at a target position of a target virtual background; there is a positional mapping relationship between the target position and the object position.
Transmitting the target virtual background with the virtual prop placed to a target audience terminal to be displayed in the target audience terminal; according to touch operation aiming at the virtual prop and sent by a target audience terminal, prop interaction data corresponding to the touch operation are determined; and synchronizing the prop interaction data to the target audience terminal so that the target audience terminal controls the target control object to execute prop interaction behaviors corresponding to the prop interaction data in the target virtual background aiming at the virtual object interaction model.
Transmitting the target virtual background to the viewer terminal corresponding to the current live room to be displayed in the viewer terminal; determining an enlargement ratio corresponding to the enlargement operation according to the enlargement operation for the target virtual background sent by the target audience terminal; and enlarging the display scale of the target virtual background based on the enlargement ratio, and synchronizing the target virtual background with the enlarged display scale to the target audience terminal so that it is displayed in the target audience terminal.
Determining a reduction ratio corresponding to the reduction operation according to the reduction operation aiming at the target virtual background sent by the target audience terminal; based on the reduction proportion, reducing the display proportion of the target virtual background, and generating an object identifier corresponding to the virtual object interaction model; synchronizing the target virtual background with reduced display scale and the object identification to the target audience terminal so as to display the target virtual background with reduced display scale in the target audience terminal, and displaying the object identification corresponding to the virtual object interaction model in the target virtual background with reduced display scale.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in this specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the invention described in the "exemplary methods" section of this specification.
Referring to fig. 18, a program product 1800 for implementing the above-described method according to an embodiment of the invention is described. The program product may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF (Radio Frequency) and the like, or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (17)

1. A virtual background generation method, the method comprising:
acquiring, through a game server, game scene data corresponding to a target game being live-streamed in a current live broadcasting room, and generating an initial virtual background based on the game scene data;
according to a virtual object throwing operation sent by the game server, obtaining object scene data corresponding to a virtual object from the game server;
generating a target virtual background in the current live broadcasting room based on the initial virtual background and the object scene data; the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
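As a non-limiting illustration, the three steps of claim 1 can be sketched as follows. The `GameServer` stub and all field names are hypothetical; the claim does not prescribe any concrete API or data representation:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualBackground:
    scene_data: dict                              # game scene data the background is built from
    interaction_models: list = field(default_factory=list)

class GameServer:
    """Minimal stand-in for the real game server (hypothetical API)."""
    def get_scene_data(self, live_room_id):
        return {"room": live_room_id, "map": "arena"}
    def get_object_scene_data(self, object_id):
        return {"object_id": object_id, "appearance": "dragon_model"}

def generate_target_background(server, live_room_id, delivery_op):
    scene = server.get_scene_data(live_room_id)       # step 1: acquire game scene data
    initial = VirtualBackground(scene)                # ... and build the initial virtual background
    obj = server.get_object_scene_data(delivery_op["object_id"])  # step 2: object scene data
    # step 3: the target virtual background embeds an interaction model for the delivered object
    return VirtualBackground(initial.scene_data, interaction_models=[obj["appearance"]])

bg = generate_target_background(GameServer(), "room-1", {"object_id": 7})
```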
2. The virtual background generation method according to claim 1, wherein after the initial virtual background is generated based on the game scene data, the method further comprises:
transmitting the initial virtual background to a viewer terminal for display in the viewer terminal;
receiving a control object throwing operation sent by a target audience terminal, and determining a target control object corresponding to the control object throwing operation and a delivery position in the initial virtual background; the target control object is configured to be controlled by a target user corresponding to the target audience terminal;
based on the target control object, placing a control object model corresponding to the target control object at the delivery position in the initial virtual background;
transmitting the initial virtual background on which the control object model is placed to the target audience terminal to be displayed in the target audience terminal.
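A minimal sketch of the placement step of claim 2, assuming a dictionary representation of the background; the names and shapes are illustrative only:

```python
def place_control_object(initial_bg, control_object_id, delivery_position):
    """Place a control object model at the viewer-chosen delivery position."""
    bg = dict(initial_bg)                                   # leave the original background untouched
    bg["placed_models"] = list(bg.get("placed_models", []))
    bg["placed_models"].append({
        "model": f"model-of-{control_object_id}",           # hypothetical model lookup
        "position": delivery_position,
    })
    return bg                                               # transmitted to the target audience terminal

placed = place_control_object({"map": "arena"}, "viewer-42", (3, 4))
```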
3. The virtual background generation method of claim 2, wherein the method further comprises:
receiving an object view angle switching instruction which is sent by the target audience terminal and is aimed at the control object model, and determining a target view angle corresponding to the object view angle switching instruction;
and synchronizing the target viewing angle to the target audience terminal, so that the target audience terminal adjusts, according to the target viewing angle, the background viewing angle of the initial virtual background on which the control object model is placed.
4. The virtual background generation method of claim 2, wherein the method further comprises:
receiving an object moving instruction which is sent by the target audience terminal and is aimed at the control object model, and determining a moving distance and a moving direction corresponding to the object moving instruction;
and synchronizing the moving distance and the moving direction to the target audience terminal so that the target audience terminal moves the initial virtual background on which the control object model is placed based on the moving distance and the moving direction.
5. The virtual background generation method according to claim 1, wherein the object scene data includes object appearance data corresponding to the virtual object and scene environment data;
the generating the target virtual background in the current live room based on the initial virtual background and the object scene data comprises the following steps:
generating an interactive virtual background based on the object appearance data and the scene environment data; the interactive virtual background comprises the virtual object interactive model corresponding to the object appearance data and an environment object corresponding to the scene environment data;
and determining a background combination relationship between the interactive virtual background and the initial virtual background according to the virtual object throwing operation, and combining the initial virtual background and the interactive virtual background based on the background combination relationship to generate the target virtual background in the current live broadcasting room.
6. The virtual background generation method according to claim 5, wherein the determining a background combination relationship between the interactive virtual background and the initial virtual background according to the virtual object throwing operation comprises:
determining the background combination relationship between the interactive virtual background and the initial virtual background as a background nesting relationship in response to the existence of a target delivery position corresponding to the virtual object throwing operation; the target delivery position is within the initial virtual background;
and determining that the background combination relationship between the interactive virtual background and the initial virtual background is a background splicing relationship in response to the absence of a target delivery position corresponding to the virtual object throwing operation.
7. The virtual background generation method according to claim 6, wherein the combining the initial virtual background and the interactive virtual background based on the background combination relationship comprises:
nesting the interactive virtual background at the target delivery position of the initial virtual background in response to the background combination relationship being the background nesting relationship;
and splicing the interactive virtual background and the initial virtual background in response to the background combination relationship being the background splicing relationship.
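Claims 6 and 7 together describe a two-branch combination rule. A hedged sketch, in which the dictionary representations of the backgrounds are purely illustrative:

```python
def combine_backgrounds(initial_bg, interactive_bg, delivery_position=None):
    """Combine the two backgrounds according to the relation implied by the
    presence or absence of a target delivery position."""
    if delivery_position is not None:
        # a delivery position inside the initial background -> nesting relation
        combined = dict(initial_bg)
        combined.setdefault("nested", []).append(
            {"background": interactive_bg, "position": delivery_position})
        return "nesting", combined
    # no delivery position -> splicing relation: place the backgrounds side by side
    return "splicing", {"spliced": [initial_bg, interactive_bg]}
```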
8. The virtual background generation method of claim 1, wherein the method further comprises:
receiving an interaction instruction for the target virtual background sent by a target audience terminal, and determining an interaction event corresponding to the interaction instruction;
synchronizing the interaction event to the target audience terminal so that the target audience terminal executes the interaction event in the target virtual background for the virtual object interaction model.
9. The virtual background generation method of claim 5, wherein after the generating the target virtual background in the current live room, the method further comprises:
transmitting the target virtual background to a target audience terminal to be displayed in the target audience terminal;
generating an interaction control in the target virtual background according to an object moving operation which is sent by the target audience terminal and aims at the target control object, and synchronizing the target virtual background containing the interaction control to the target audience terminal; the target control object is configured to be controlled by a target user corresponding to the target audience terminal; the interaction control is used for interacting with the virtual object interaction model;
according to the object interaction instruction corresponding to the interaction control sent by the target audience terminal, determining interaction behavior data corresponding to the object interaction instruction;
and synchronizing the interaction behavior data to the target audience terminal, so that the target audience terminal controls, according to the interaction behavior data, the target control object to execute, for the virtual object interaction model, the interaction behavior corresponding to the interaction behavior data in the target virtual background.
10. The virtual background generation method according to claim 9, wherein the generating an interaction control in the target virtual background according to the object moving operation for the target control object sent by the target audience terminal comprises:
determining a target moving position corresponding to the object moving operation according to the object moving operation which is sent by the target audience terminal and is aimed at the target control object;
determining an object position of the virtual object interaction model in the target virtual background, and determining a position distance between the target moving position and the object position;
and generating an interaction control in the target virtual background in response to the position distance being smaller than a preset distance threshold.
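The distance test of claim 10 can be pictured as follows. The threshold value is an arbitrary example; the claim only requires that some preset threshold exists, and the coordinate representation is an assumption:

```python
import math

PRESET_DISTANCE_THRESHOLD = 5.0   # arbitrary example value

def should_generate_interaction_control(target_move_pos, object_pos,
                                        threshold=PRESET_DISTANCE_THRESHOLD):
    """Return True when the moved control object is close enough to the
    virtual object interaction model for an interaction control to appear."""
    distance = math.dist(target_move_pos, object_pos)   # Euclidean position distance
    return distance < threshold
```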
11. The virtual background generation method of claim 10, wherein the method further comprises:
generating a virtual prop corresponding to the environment object in the target virtual background in response to the position distance being less than the preset distance threshold;
according to the selection operation of the virtual prop sent by the target audience terminal, placing the virtual prop at a target position of the target virtual background; and a position mapping relation exists between the target position and the object position.
12. The virtual background generation method of claim 11, wherein after the placing the virtual prop at the target location of the target virtual background, the method further comprises:
transmitting the target virtual background with the virtual prop placed to the target audience terminal to be displayed in the target audience terminal;
determining, according to the touch operation which is sent by the target audience terminal and is aimed at the virtual prop, prop interaction data corresponding to the touch operation;
and synchronizing the prop interaction data to the target audience terminal, so that the target audience terminal controls the target control object to perform, for the virtual object interaction model, prop interaction behaviors corresponding to the prop interaction data in the target virtual background.
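One way to picture claims 11 and 12: the prop's target position is derived from the interaction model's object position through a fixed mapping, and a viewer touch on the prop is resolved into prop interaction data for the terminal to replay. The offset mapping and message shapes below are assumptions, not part of the claims:

```python
def place_prop(target_bg, env_object, object_pos, offset=(1.0, 0.0)):
    """Place a virtual prop at a target position mapped from the object position."""
    target_pos = (object_pos[0] + offset[0], object_pos[1] + offset[1])
    bg = dict(target_bg)
    bg["prop"] = {"kind": env_object, "position": target_pos}
    return bg

def touch_to_interaction_data(touch_op):
    """Resolve a touch operation on the prop into prop interaction data that the
    audience terminal replays against the virtual object interaction model."""
    return {"behavior": f"prop_{touch_op['gesture']}", "target": "interaction_model"}
```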
13. The virtual background generation method of claim 1, wherein after the generating the target virtual background in the current live room, the method further comprises:
transmitting the target virtual background to a viewer terminal corresponding to the current live broadcasting room so as to be displayed in the viewer terminal;
determining an enlargement ratio corresponding to an enlargement operation according to the enlargement operation, sent by a target audience terminal, for the target virtual background;
and enlarging a display scale of the target virtual background based on the enlargement ratio, and synchronizing the target virtual background with the enlarged display scale to the target audience terminal, so as to display the target virtual background with the enlarged display scale in the target audience terminal.
14. The virtual background generation method of claim 13, wherein the method further comprises:
determining a reduction ratio corresponding to the reduction operation according to the reduction operation, sent by the target audience terminal, for the target virtual background;
reducing the display scale of the target virtual background based on the reduction ratio, and generating an object identifier corresponding to the virtual object interaction model;
and synchronizing the target virtual background with the reduced display scale and the object identifier to the target audience terminal, so as to display the target virtual background with the reduced display scale in the target audience terminal and display the object identifier corresponding to the virtual object interaction model in the target virtual background with the reduced display scale.
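Claims 13 and 14 describe symmetric zoom handling; only the zoom-out path additionally generates an object identifier, so the interaction model stays locatable at a small display scale. A sketch with illustrative data shapes:

```python
def apply_zoom(background, ratio, zoom_in):
    """Scale the background's display scale up (zoom in) or down (zoom out)."""
    bg = dict(background)
    if zoom_in:
        bg["display_scale"] = background["display_scale"] * ratio
        bg.pop("object_identifier", None)       # an enlarged view needs no identifier
    else:
        bg["display_scale"] = background["display_scale"] / ratio
        # at reduced scale, mark the interaction model with an identifier
        bg["object_identifier"] = f"obj-{background['model_id']}"
    return bg
```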
15. A virtual background generation apparatus, comprising:
a first acquisition module configured to acquire, through a game server, game scene data corresponding to a target game being live-streamed in a current live broadcasting room, and to generate an initial virtual background based on the game scene data;
a second acquisition module configured to acquire object scene data corresponding to a virtual object from the game server according to a virtual object delivery operation transmitted by the game server;
a generation module configured to generate a target virtual background in the current live broadcasting room based on the initial virtual background and the object scene data; the target virtual background comprises a virtual object interaction model corresponding to the virtual object.
16. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the virtual background generation method of any of claims 1-14 via execution of the executable instructions.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the virtual background generation method of any of claims 1-14.
CN202310843369.8A 2023-07-10 2023-07-10 Virtual background generation method and device, electronic equipment and readable storage medium Pending CN116866678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310843369.8A CN116866678A (en) 2023-07-10 2023-07-10 Virtual background generation method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN116866678A true CN116866678A (en) 2023-10-10

Family

ID=88229974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310843369.8A Pending CN116866678A (en) 2023-07-10 2023-07-10 Virtual background generation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116866678A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination