WO2023071443A1 - Virtual object control method and apparatus, electronic device, and readable storage medium - Google Patents


Publication number
WO2023071443A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
virtual object
virtual
scene
virtual character
Prior art date
Application number
PCT/CN2022/113276
Other languages
French (fr)
Chinese (zh)
Inventor
王骁玮
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023071443A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N21/488: Data services, e.g. news ticker
    • H04N21/4884: Data services for displaying subtitles
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81: Monomedia components thereof
    • H04N21/8146: Monomedia components involving graphical data, e.g. 3D object, 2D graphics

Definitions

  • The present application belongs to the field of computer technology, and in particular relates to a virtual object control method and apparatus, an electronic device, and a storage medium.
  • Live video broadcasting has become a popular form of interaction. More and more users choose to watch live video broadcasts through live broadcast platforms, such as game live broadcasts and news live broadcasts. To improve the effect of live broadcasting, virtual anchors have emerged to replace real anchors in live video broadcasting.
  • Embodiments of the present disclosure at least provide a virtual object control method, device, electronic equipment, and storage medium.
  • the embodiment of the present disclosure provides a virtual object control method applied to a game platform, including:
  • the live video stream is generated based on 3D scene information
  • the 3D scene information is used to generate a 3D scene after rendering
  • the 3D scene information includes at least one virtual character information and at least one virtual object information
  • The virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • At least one virtual object is generated based on the bullet chat information and the at least one virtual object information, and the at least one virtual object is controlled to enter the 3D scene to interact with the at least one virtual character. That is, the bullet chat information sent by the user can stand in for the user, entering the 3D scene in the form of a virtual object to interact with the virtual character. In this way, not only is the user's participation in the live broadcast process improved, but the user's interactive experience is also improved.
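The barrage-to-virtual-object flow described above can be sketched as follows. This is a minimal Python illustration: the class names, fields, and the concrete first preset condition are assumptions for demonstration, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Barrage:
    user_id: str
    content: str

@dataclass
class VirtualObject:
    carried_text: str       # the bullet chat content carried by the object
    in_scene: bool = False

def meets_first_preset_condition(barrage: Barrage, followed_users: set) -> bool:
    # Hypothetical first preset condition: the barrage has non-empty content
    # and its sender follows the virtual anchor (the patent leaves the
    # concrete condition open).
    return bool(barrage.content.strip()) and barrage.user_id in followed_users

def spawn_virtual_objects(barrages, followed_users):
    objects = []
    for b in barrages:
        if meets_first_preset_condition(b, followed_users):
            obj = VirtualObject(carried_text=b.content)
            obj.in_scene = True  # "control the virtual object to enter the 3D scene"
            objects.append(obj)
    return objects
```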
  • Acquiring the barrage information sent by the user terminal includes:
  • The bullet chat information sent by the user terminal can be obtained in real time through the live broadcast platform, and at least one virtual object can be generated based on the bullet chat information.
  • controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
  • the virtual object is controlled to interact with the target virtual character.
  • The target interaction behavior is determined according to the type of the contact part between the virtual object and the target virtual character, so that the target interaction behavior is matched with the contact position, which improves the viewing experience of the interactive behavior.
  • the controlling the at least one virtual object to enter the 3D scene and move in a direction close to the target virtual character until it contacts the target virtual character includes:
  • Each virtual object is controlled to enter the 3D scene, and move toward a target virtual character corresponding to each virtual object until it contacts the target virtual character.
  • The target virtual character corresponding to each virtual object is determined from the at least one virtual character, so that the target virtual character touched by each virtual object is associated with the user, which enhances the user's sense of participation in the live broadcast process and improves the user's live broadcast experience.
  • the contact part is the foot of the target virtual character
  • In a possible implementation manner, controlling the virtual object to interact with the target virtual character includes:
  • the moving state of the virtual object away from the target virtual character is controlled according to the motion information of the target virtual character's feet.
  • The movement state of the virtual object away from the target virtual character is controlled, which can make the interaction between the virtual object and the target virtual character more realistic and improves the user's live broadcast experience.
  • the controlling the moving state of the virtual object away from the target virtual character according to the movement information of the target virtual character's feet includes:
  • the movement state of the virtual object is controlled.
  • The movement state of the virtual object is controlled so that it is consistent with the movement information of the feet, which improves the realism of the interaction between the virtual object and the virtual character.
  • the 3D scene information further includes a virtual camera
  • the moving state includes a moving direction
  • the method further includes:
  • The preset special effect of colliding with the virtual camera is obtained and displayed, which not only enhances the user's viewing experience during the live broadcast, but also enhances the fun of the live broadcast.
  • controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
  • the at least one virtual object is controlled to move relative to the at least one virtual character.
  • the movement of the at least one virtual object relative to the at least one virtual character is controlled according to the real-time position of the virtual character in the 3D scene, which can improve the accuracy of the movement of the virtual object relative to the virtual character.
  • the number of the at least one virtual object is multiple, and the method further includes:
  • The interaction between the multiple virtual objects is controlled. That is, interactive behaviors also exist between the virtual objects, which improves the richness of the interactive behaviors of the virtual objects.
  • controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
  • each virtual object is controlled to enter the 3D scene to interact with the at least one virtual character.
  • The movement state of the virtual object is combined with the user's resource information; that is, the virtual object is associated with the user, which improves the user's sense of participation and interest in the live broadcast process.
  • the embodiment of the present disclosure provides a virtual object control method applied to a live broadcast platform, including:
  • the live video stream is generated based on 3D scene information
  • the 3D scene information is used to generate a 3D scene after rendering
  • the 3D scene information includes at least one virtual character information and at least one virtual object information
  • the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • the method further includes:
  • If the bullet chat information is successfully processed, the bullet chat information is deleted and not displayed in the live screen; wherein successful processing of the bullet chat information means that the bullet chat information has been combined with the at least one virtual object information to generate the at least one virtual object.
  • When the bullet chat information is successfully processed, deleting it from the live screen avoids duplication between the displayed bullet chat information and the bullet chat information carried by the generated virtual objects, which improves the user's live broadcast experience.
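The deduplication step can be sketched as follows, assuming a hypothetical `id` field on each barrage and a set of identifiers for barrages already combined into virtual objects:

```python
def filter_displayed_barrages(barrages, processed_ids):
    # Barrages that were successfully combined with virtual object
    # information (i.e. already spawned as virtual objects) are dropped
    # from the on-screen list so the same text is not shown twice.
    return [b for b in barrages if b["id"] not in processed_ids]
```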
  • an embodiment of the present disclosure provides a virtual object control device, including:
  • the first obtaining module is used to obtain a live video stream, the live video stream is generated based on 3D scene information, and the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one Virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • the first sending module is configured to send the live video stream, so as to display a live picture corresponding to the live video stream on the user terminal;
  • the second obtaining module is used to obtain the barrage information sent by the user terminal;
  • a first generation module configured to generate at least one virtual object based on the bullet chat information and the at least one virtual object information when the bullet chat information meets a first preset condition
  • An interaction module configured to control the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  • the second acquiring module is specifically configured to:
  • the interaction module is specifically configured to:
  • the virtual object is controlled to interact with the target virtual character.
  • the interaction module is specifically configured to:
  • Each virtual object is controlled to enter the 3D scene, and move toward a target virtual character corresponding to each virtual object until it contacts the target virtual character.
  • the contact part is the foot of the target virtual character
  • the interaction module is specifically used for:
  • the moving state of the virtual object away from the target virtual character is controlled according to the motion information of the target virtual character's feet.
  • the interaction module is specifically configured to:
  • the movement state of the virtual object is controlled.
  • the 3D scene information further includes a virtual camera
  • the interaction module is specifically configured to:
  • the interaction module is specifically configured to:
  • the at least one virtual object is controlled to move relative to the at least one virtual character.
  • the interaction module is specifically configured to:
  • the interaction between the at least one virtual object is controlled.
  • the interaction module is specifically configured to:
  • each virtual object is controlled to enter the 3D scene to interact with the at least one virtual character.
  • an embodiment of the present disclosure provides a virtual object control device, including:
  • the third obtaining module is used to obtain a live video stream through a game platform, the live video stream is generated based on 3D scene information, and the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information And at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • the second sending module is configured to send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
  • a fourth obtaining module configured to obtain the barrage information sent by the user terminal
  • the third sending module is configured to send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one virtual object information, and controls the at least one A virtual object enters the 3D scene to interact with the at least one virtual character.
  • the virtual object control device further includes:
  • An information receiving module configured to receive the barrage processing result information sent by the game platform
  • A barrage processing module configured to, if the barrage information is successfully processed, delete the barrage information and not display it in the live screen; wherein successful processing of the barrage information means that the barrage information is combined with the at least one virtual object information to generate the at least one virtual object.
  • An embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the virtual object control method described in the first aspect or the second aspect is executed.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the virtual object control method described in the first aspect or the second aspect is executed.
  • FIG. 1 shows a schematic diagram of an execution subject of a virtual object control method provided by an embodiment of the present disclosure
  • FIG. 2 shows a flow chart of the first virtual object control method provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of sending a live video stream provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of a generated virtual object provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of at least one virtual object entering a 3D scene provided by an embodiment of the present disclosure
  • Fig. 6 shows a flow chart of the first method for controlling the interaction between a virtual object and a virtual character provided by an embodiment of the present disclosure
  • Fig. 7 shows a flowchart of a method for controlling the moving state of a virtual object provided by an embodiment of the present disclosure
  • Fig. 8 shows a schematic diagram of a virtual character kicking away a virtual object provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of the effect of a virtual object colliding with a virtual mirror provided by an embodiment of the present disclosure
  • Fig. 10 shows a flow chart of the second method for controlling the interaction between a virtual object and a virtual character provided by an embodiment of the present disclosure
  • FIG. 11 shows a schematic diagram of interaction among multiple virtual objects provided by an embodiment of the present disclosure
  • Fig. 12 shows a flowchart of a third method for controlling the interaction between a virtual object and a virtual character provided by an embodiment of the present disclosure
  • Fig. 13 shows a schematic diagram of the motion state of the first type of virtual object provided by the embodiment of the present disclosure
  • Fig. 14 shows a schematic diagram of the motion state of the second virtual object provided by the embodiment of the present disclosure
  • Fig. 15 shows a schematic diagram of the motion state of a third virtual object provided by an embodiment of the present disclosure
  • Fig. 16 shows a flow chart of another virtual object control method provided by an embodiment of the present disclosure
  • Fig. 17 shows a schematic structural diagram of a virtual object control device provided by an embodiment of the present disclosure
  • Fig. 18 shows a schematic structural diagram of another virtual object control device provided by an embodiment of the present disclosure.
  • Fig. 19 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • a virtual object control method provided in the present disclosure includes: acquiring a live video stream, where the live video stream is generated based on 3D scene information.
  • the 3D scene information is used to generate a 3D scene after rendering.
  • the 3D scene information includes at least one virtual character information and at least one virtual object information, and the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device.
  • the live video stream is sent to display a live picture corresponding to the live video stream on the user terminal. Obtain the barrage information sent by the user terminal. If the bullet chat information meets the first preset condition, at least one virtual object is generated based on the bullet chat information and the at least one virtual object information. Controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  • the barrage information meets the first preset condition, at least one virtual object is generated based on the barrage information and the at least one virtual object information, and the at least one virtual object is controlled to enter the interact with the at least one virtual character in the 3D scene. That is, the barrage information sent by the user can replace the user in the form of a virtual object to enter the 3D scene to interact with the virtual character. In this way, not only the user's participation in the live broadcast process is improved, but also the user's interactive experience is improved.
  • FIG. 1 is a schematic diagram of an execution body of a virtual object control method provided by an embodiment of the present disclosure.
  • the method is executed by the electronic device 100, where the electronic device 100 may include a terminal and a server.
  • This method can be applied to a terminal, which can be the smart phone 10, desktop computer 20, or notebook computer 30 shown in FIG. 1, or a smart speaker, smart watch, tablet computer, etc. not shown in FIG. 1; this is not limited here.
  • the method can also be applied to the server 40 , or can be applied to an implementation environment composed of the terminal and the server 40 .
  • The server 40 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
  • the electronic device 100 may also include an AR (Augmented Reality, augmented reality) device, a VR (Virtual Reality, virtual reality) device, an MR (Mixed Reality, mixed reality) device, and the like.
  • AR: Augmented Reality
  • VR: Virtual Reality
  • MR: Mixed Reality
  • the AR device may be a mobile phone or a tablet computer with an AR function, or may be AR glasses, which is not limited here.
  • the server 40 can communicate with the smart phone 10 , the desktop computer 20 and the notebook computer 30 respectively through the network 50 .
  • Network 50 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • the first virtual object control method can be applied to a server of a game platform.
  • the first method for controlling a virtual object may be implemented in a manner in which a processor invokes computer-readable instructions stored in a memory.
  • the virtual object control method includes the following steps S101-S105:
  • the live video stream is generated based on 3D scene information
  • the 3D scene information is used to generate a 3D scene after rendering
  • the 3D scene information includes at least one virtual character information and at least one virtual object information
  • the avatar information is used to generate the avatar after rendering, and the avatar is driven by the control information captured by the motion capture device.
  • One form of the virtual character is driven by capturing the motion of an actor (the real person behind the virtual character) to obtain control signals that drive the action of the virtual character in the game engine; the actor's voice is captured at the same time and fused with the picture of the virtual character to generate video data.
  • The motion capture device includes at least one of: a body motion capture device worn on the actor's body (such as a motion-capture suit), a hand motion capture device worn on the actor's hands (such as gloves), a facial motion capture device (such as a camera), and a sound capture device (such as a microphone or throat microphone).
  • The live video stream is the data stream required for continuous live video broadcasting. It can be understood that a video is usually composed of pictures and/or sounds; pictures belong to video frames, and sounds belong to audio frames.
  • The process of obtaining the live video stream may be a process of directly obtaining an already generated live video stream, or a process of generating the live video stream based on the 3D scene information; this is not limited, as long as the live video stream can finally be obtained.
  • the 3D rendering environment is a 3D game engine running in the electronic device, which can generate image information based on one or more perspectives based on the data to be rendered.
  • Avatar information is a character model that exists in the game engine and can generate the corresponding avatar after rendering.
  • virtual object information can also generate corresponding virtual objects after rendering. The difference is that the virtual character is driven by the control information captured by the motion capture device, while the virtual object does not need it and can be controlled by the system.
  • the virtual character may include a virtual anchor or a digital human.
  • the virtual object can be a virtual animal image, a virtual cartoon image, and the like.
  • The 3D scene information can run on the computer's CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and includes gridded model information and texture information.
  • the virtual character information and the virtual object information include, but are not limited to, meshed model data, voxel data, and map texture data, or a combination thereof.
  • the grid includes, but is not limited to, a triangular grid, a quadrilateral grid, other polygonal grids or a combination thereof. In the embodiment of the present disclosure, the grid is a triangular grid.
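As a rough illustration of the gridded (triangular-mesh) model data mentioned above, a mesh can be stored as vertex positions plus index triples; the field names here are illustrative, not the patent's data format:

```python
# Positions are (x, y, z) floats; each triangle is a triple of vertex indices.
mesh = {
    "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],
    "triangles": [(0, 1, 2), (1, 3, 2)],  # two triangles tiling a unit quad
}

def triangle_count(m):
    # Number of triangular faces in the mesh.
    return len(m["triangles"])
```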
  • After obtaining the live video stream, the server 40 of the game platform can send it to the live broadcast platform 200 in real time, and the live broadcast platform 200 sends the live video stream to a plurality of user terminals 300 to play the live video.
  • Barrage (bullet chat) refers to the commentary subtitles that pop up while watching a live video. It can be understood that during the live broadcast, the user of the user terminal (such as a viewer) can send barrage through the user terminal to interact with the virtual character.
  • the bullet chat information may be the specific content of the bullet chat, or may be the user identification (such as user account number or user nickname) that sends the bullet chat.
  • The live broadcast platform 200 can obtain the bullet chat information sent by the user terminal 300 in real time and send the bullet chat information to the game platform. In this way, the game platform can obtain the barrage information sent by the user terminal 300 through the live broadcast platform 200.
  • At least one virtual object B may be generated by combining the bullet chat information with at least one virtual object information.
  • the object B carries the content of the barrage information.
  • The content of the bullet chat information carried by one of the virtual objects B is "I am Captain 2", and the content carried by the other virtual object B is "I don't want to do laundry".
  • the first preset condition can be set according to actual needs.
  • The first preset condition can be that the content of the bullet chat information conforms to preset content, or that the user ID carried by the bullet chat information meets preset requirements, for example, that the user corresponding to the user ID has followed the virtual character; this is not specifically limited here.
  • At least one virtual object B can be controlled to enter the 3D scene to interact with at least one virtual character A.
  • the interaction includes but is not limited to action interaction, behavior interaction and language interaction.
  • For step S105, when controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character, the following steps S1051-S1053 may be included:
  • After at least one virtual object B is generated, the at least one virtual object B is controlled to move in a direction close to the target virtual character A until it contacts the target virtual character A.
  • The target virtual character can be determined from the multiple virtual characters; specifically, the target virtual character can be determined in the following ways (1)-(2).
  • (1) Based on the user identification carried by the bullet chat, it is possible to identify which virtual character the user who sent the bullet chat has a preset association relationship with, and to determine the virtual character associated with the user as the target virtual character. For example, if the user who sends the barrage follows only one of the avatars, or his user name contains the name of one of the avatars, then that avatar can be determined as the target avatar. In this way, the user's interest in interaction can be enhanced.
  • (2) Semantic recognition can also be performed on the barrage, and the target virtual character can be determined from the multiple virtual characters according to the result of the semantic recognition. For example, multiple virtual characters can be classified according to their personalities or the skills they are good at to obtain a classification label for each virtual character; the category label to which the bullet chat belongs is then determined according to the identified semantic content, and the virtual character corresponding to that category label is determined as the target virtual character. For example, if the category label of one avatar is "Entertainment King" and that of another is "model worker", and the identified semantic content of the bullet chat belongs to the "model worker" category, the virtual character labeled "model worker" is determined as the target virtual character. In this way, the user can send different bullet screens during the interaction process and thereby switch between different target virtual characters, which improves the fun of the interaction.
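Ways (1)-(2) for picking the target virtual character can be sketched as follows; the data shapes are assumptions, and the substring match is only a crude stand-in for real semantic recognition:

```python
def select_target_character(barrage, characters, followed_map):
    """characters: character name -> category label (e.g. "model worker").
    followed_map: user_id -> name of the character that user follows.
    Both mappings and the matching rules are illustrative assumptions."""
    # Way (1): a preset association via the user identification.
    followed = followed_map.get(barrage["user_id"])
    if followed in characters:
        return followed
    # Way (2): match the barrage content against each character's
    # category label (a toy substitute for semantic recognition).
    for name, label in characters.items():
        if label in barrage["content"]:
            return name
    return None
```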
  • Different interaction behaviors may be preset according to different contact types. For example, if the contact part is the leg of the target avatar, the interaction behavior of "hugging" the target avatar can be set; if the contact part is the foot, the interaction behavior of "kicking away" can be set; if the contact part is the arm, the interaction behavior of "rotating around the arm" can be set. Therefore, after the type of the contact part is determined, the target interaction behavior corresponding to the contact part can be determined.
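The contact-part-to-behavior lookup can be sketched as a simple table; the keys and behavior names merely echo the examples above:

```python
# Preset mapping from the type of contact part to the target interaction
# behavior (values follow the examples in the text).
INTERACTION_BY_CONTACT_PART = {
    "leg": "hug",
    "foot": "kick away",
    "arm": "rotate around the arm",
}

def target_interaction(contact_part):
    # Unknown contact parts yield no interaction.
    return INTERACTION_BY_CONTACT_PART.get(contact_part)
```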
  • the virtual object can be controlled to interact with the target virtual character based on the target interaction behavior.
  • the contact part between the virtual object and the target virtual character is the foot
  • The target interaction behavior corresponding to the foot is "kicking away"; that is, the virtual object is controlled to be kicked away by the foot of the virtual character. Therefore, referring to FIG. 7, step S1053 of controlling the virtual object to interact with the target virtual character based on the target interaction behavior may include the following steps S10531-S10533:
  • The control information of the feet of the target avatar A can be obtained through a motion capture device worn on the actor's feet, and the feet of the target avatar A can be driven to move based on the control information; then, according to the movement information of the target avatar A's feet, the moving state of the virtual object B away from the target virtual character can be controlled. In this way, the interaction between the virtual object and the target virtual character is more realistic, and the user's live broadcast experience is improved.
  • the movement information of the feet of the target virtual character A may be acquired, and based on the movement information of the feet of the target virtual character A, the moving state of the virtual object B may be controlled.
  • the motion information of the foot of the target virtual character A is generated under the driving of the control object (the actor).
  • the movement information of the foot of the target virtual character A includes information such as the movement direction, movement speed and movement acceleration of the foot of the target virtual character A. Therefore, information such as the moving direction, moving speed, and moving acceleration of the virtual object can be determined based on the motion information, and then the moving state of the virtual object B can be controlled.
  • the moving states of different virtual objects B differ; that is, different virtual objects B are "kicked away" to different degrees and in different directions.
  • the moving state of the virtual object may also match the texture of the clothing at the contact part of the target virtual character. For example, if the feet of the target avatar wear furry shoes, the virtual object may be controlled to leave the target avatar in a slower first moving state; if the feet of the target avatar wear smooth leather boots, the virtual object may be controlled to leave the target avatar in a faster second moving state. In this way, the moving state of the virtual object matches the user's visual expectations, which enhances the user's sense of immersion and further improves the user's live broadcast experience.
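The two preceding points, deriving the object's movement from the foot's motion information (direction, speed, acceleration) and scaling it by the clothing texture at the contact part, can be sketched as follows. The `FootMotion` fields, texture names, and numeric factors are illustrative assumptions, not values specified by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FootMotion:
    direction: tuple     # unit vector of the kick, e.g. (1.0, 0.0, 0.0)
    speed: float         # movement speed of the foot
    acceleration: float  # movement acceleration of the foot

# Assumed scaling factors per shoe texture: furry shoes send the object
# away slowly, smooth leather boots send it away quickly.
TEXTURE_FACTOR = {"furry": 0.4, "smooth_leather": 1.0}

def object_velocity(motion: FootMotion, texture: str) -> tuple:
    """Derive the kicked object's initial velocity from the foot's motion
    information and the clothing texture at the contact part."""
    factor = TEXTURE_FACTOR.get(texture, 0.7)  # default for unknown textures
    return tuple(d * motion.speed * factor for d in motion.direction)
```

With this sketch, the same foot motion produces a slower departure for "furry" contact than for "smooth_leather", matching the behavior described above.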
  • the 3D scene information further includes a virtual camera.
  • when the moving direction of the virtual object is toward the virtual camera, if the moving state of the virtual object satisfies a second preset condition, a preset special effect of colliding with the virtual mirror surface is acquired and displayed. In this way, not only is the user's viewing experience during the live broadcast enhanced, but the live broadcast also becomes more entertaining.
  • the second preset condition can be set according to actual needs.
  • the second preset condition may be that the moving speed of the virtual object is greater than the preset speed, or that the moving acceleration of the virtual object is greater than the preset acceleration, which is not limited here.
  • the preset special effect of colliding with the virtual mirror can be determined according to the attributes of the virtual object.
  • the attributes include but are not limited to the material, type, etc. of the virtual object. For example, if the virtual object is a fragile cup, the special effect of colliding with the virtual mirror surface may be a glass-shattering special effect.
  • the special effect of colliding with the virtual mirror may be a special effect of deformation of the virtual object after colliding with the virtual mirror.
  • for example, after the virtual object B collides with the virtual mirror surface, it is flattened and attached to the mirror surface, and can slide down along it, presenting a special effect of being flattened by the impact and then falling.
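The mirror-collision logic above can be sketched as a small selection function. Here the second preset condition is assumed to be a speed threshold, and the effect names and the `glass` material check are illustrative stand-ins for the attribute-based selection the text describes.

```python
from typing import Optional

def mirror_effect(speed: float, material: str,
                  speed_threshold: float = 3.0) -> Optional[str]:
    """Select the special effect shown when a virtual object moving toward
    the virtual camera hits the virtual mirror surface.

    Returns None when the second preset condition is not met.
    """
    if speed <= speed_threshold:       # second preset condition (assumed speed-based)
        return None
    if material == "glass":            # e.g. a fragile cup
        return "shatter"
    return "flatten_and_slide_down"    # flattened on impact, then slides down
```

An acceleration-based condition would work the same way, with `speed` replaced by the object's acceleration.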
  • in step S105, controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character may include the following steps S105a-S105b:
  • S105a Acquire first real-time position information of the at least one virtual character in the 3D scene.
  • first real-time position information of the at least one virtual character in the 3D scene needs to be acquired, and the movement of the at least one virtual object B relative to the at least one virtual character A is controlled based on the first real-time position information. In this way, the movement of the virtual object B relative to the virtual character A can be made more precise.
  • the second real-time position information of multiple virtual objects B in the 3D scene may also be acquired, and the interaction between the multiple virtual objects B is controlled based on the second real-time position information. For example, multiple virtual objects B can be controlled to approach and interact with one another. In this way, the richness of the virtual objects' interactive behavior is improved.
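The position-driven movement in steps S105a-S105b can be illustrated with a per-tick step toward the character's first real-time position. The 2D coordinates and the step size are simplifying assumptions for the sketch.

```python
import math

def step_toward(obj_pos, char_pos, step=0.5):
    """Advance the virtual object one tick toward the virtual character's
    real-time position; returns the object's new (x, y) position."""
    dx, dy = char_pos[0] - obj_pos[0], char_pos[1] - obj_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:                   # close enough: contact is reached
        return char_pos
    return (obj_pos[0] + dx / dist * step,
            obj_pos[1] + dy / dist * step)
```

Calling this each frame with the character's latest position keeps the object's trajectory accurate even while the character moves.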
  • in step S105, controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character may also include the following steps S105m-S105k:
  • S105m Determine user resource information of the barrage information carried by each virtual object.
  • each virtual object B is generated from different bullet chat information, and different bullet chat information may also be sent by the same user. Therefore, in order to make participating in the live broadcast more enjoyable for the user, the user resource information of the barrage information carried by each virtual object B can be determined, and the motion state of each virtual object can be determined based on that user resource information.
  • the user resource information may be the user's "like" information, the number of interactions with the avatar, and the like. The movement state of each virtual object is determined based on the user resource information. Specifically, the virtual object corresponding to a user who has "liked" can move faster, while the virtual object corresponding to a user who has not will fall after a few steps.
  • if the user's resource information is abundant, the virtual object B may enter the 3D scene in a light and brisk motion state as shown in FIG. 13 to interact with the at least one virtual character. If the user's resource information is medium, the virtual object B can enter the 3D scene in the normal, steady motion state shown in FIG. 14 to interact with the at least one virtual character. If the user's resource information is low, the virtual object B may enter the 3D scene in a clumsy motion state as shown in FIG. 15 to interact with the at least one virtual character.
  • the motion states in FIG. 13 to FIG. 15 are only schematic; in other embodiments, other motion states may also be used, or a combined state formed by combining multiple different states. In this way, tying the motion state of the virtual object to the user's resource information, that is, associating the virtual object with the user, improves the user's sense of participation and enjoyment during the live broadcast.
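A minimal sketch of mapping user resource information to the motion states of FIG. 13 to FIG. 15. The score thresholds and state labels are invented for illustration; the disclosure does not specify concrete values.

```python
def motion_state(resource_score: int) -> str:
    """Map a user's resource information (likes, interaction counts, etc.,
    aggregated here into a single assumed score) to the motion state the
    spawned virtual object enters the scene with."""
    if resource_score >= 100:
        return "light_and_brisk"       # cf. FIG. 13
    if resource_score >= 10:
        return "normal_and_steady"     # cf. FIG. 14
    return "clumsy"                    # cf. FIG. 15
```

Combined states could be produced by returning a list of labels instead of a single one.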
  • referring to FIG. 16, it is a flow chart of a second virtual object control method provided by an embodiment of the present disclosure.
  • the second virtual object control method can be applied to a live broadcast platform.
  • the second method for controlling a virtual object may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the virtual object control method includes the following S201-S205:
  • S201: Obtain a live video stream through a game platform, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device.
  • step S201 is similar to the above-mentioned step S101, and will not be repeated here.
  • S202 Send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal.
  • step S202 is similar to the aforementioned step S102, and will not be repeated here.
  • step S203 is similar to the aforementioned step S103, and will not be repeated here.
  • S204: Send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one piece of virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  • step S204 is similar to the above-mentioned step S104 and step S105, and will not be repeated here.
  • the live broadcast platform also receives the barrage processing result information sent by the game platform. If the barrage information is successfully processed, the barrage information is deleted and not displayed on the live picture. Here, successful processing of the barrage information means that the barrage information has been combined with the at least one piece of virtual object information to generate the at least one virtual object. In this way, duplication between the displayed barrage information and the barrage information carried by the generated virtual object is avoided, improving the user's live broadcast experience.
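The barrage-result handling on the live broadcast platform side can be sketched as follows; the list-of-ids representation and the function signature are assumptions made for the example.

```python
def handle_barrage_result(displayed, barrage_id, processed_ok):
    """Live-platform side: when the game platform reports that a barrage was
    successfully combined into a virtual object, drop it from the on-screen
    barrage list so the same text is not shown twice."""
    if processed_ok:
        return [b for b in displayed if b != barrage_id]
    return displayed
```

Unprocessed barrages are left untouched and continue to be displayed as ordinary text.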
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • the embodiments of the present disclosure also provide a virtual object control device corresponding to the virtual object control method. Since the problem-solving principle of the device in the embodiments of the present disclosure is similar to that of the above-mentioned virtual object control method, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
  • referring to FIG. 17, which is a schematic diagram of a virtual object control device 500 provided by an embodiment of the present disclosure, the device includes:
  • the first acquisition module 501 is configured to acquire a live video stream, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • the first sending module 502 is configured to send the live video stream, so as to display a live picture corresponding to the live video stream on the user terminal;
  • the second acquiring module 503 is configured to acquire the barrage information sent by the user terminal;
  • a first generation module 504 configured to generate at least one virtual object based on the bullet chat information and the at least one virtual object information when the bullet chat information meets a first preset condition
  • An interaction module 505, configured to control the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  • the second obtaining module 503 is specifically configured to:
  • the interaction module 505 is specifically configured to:
  • the virtual object is controlled to interact with the target virtual character.
  • the interaction module 505 is specifically configured to:
  • Each virtual object is controlled to enter the 3D scene, and move toward a target virtual character corresponding to each virtual object until it contacts the target virtual character.
  • the contact part is the foot of the target virtual character
  • the interaction module 505 is specifically configured to:
  • the moving state of the virtual object away from the target virtual character is controlled according to the motion information of the target virtual character's feet.
  • the interaction module 505 is specifically configured to:
  • the movement state of the virtual object is controlled.
  • the 3D scene information further includes a virtual lens
  • the interaction module 505 is specifically configured to:
  • the interaction module 505 is specifically configured to:
  • the at least one virtual object is controlled to move relative to the at least one virtual character.
  • the interaction module 505 is specifically configured to:
  • the interaction between the at least one virtual object is controlled.
  • the interaction module 505 is specifically configured to:
  • each virtual object is controlled to enter the 3D scene to interact with the at least one virtual character.
  • referring to FIG. 18, which is a schematic diagram of a virtual object control device 600 provided by an embodiment of the present disclosure, the device includes:
  • the third acquiring module 601 is configured to acquire a live video stream through a game platform, the live video stream is generated based on 3D scene information, and the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
  • the second sending module 602 is configured to send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
  • a fourth obtaining module 603, configured to obtain the barrage information sent by the user terminal
  • the third sending module 604 is configured to send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one piece of virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  • the virtual object control device 600 further includes:
  • An information receiving module 605, configured to receive the barrage processing result information sent by the game platform
  • the barrage processing module 606 is configured to, if the barrage information is successfully processed, delete the barrage information so that it is not displayed on the live picture; here, successful processing of the barrage information means that the barrage information has been combined with the at least one piece of virtual object information to generate the at least one virtual object.
  • an embodiment of the present disclosure also provides an electronic device.
  • referring to FIG. 19, it is a schematic structural diagram of an electronic device 700 provided by an embodiment of the present disclosure, including a processor 701, a memory 702, and a bus 703.
  • the memory 702 is used to store execution instructions, including a memory 7021 and an external memory 7022; the memory 7021 here is also called an internal memory, and is used to temporarily store calculation data in the processor 701 and exchange data with an external memory 7022 such as a hard disk.
  • the processor 701 exchanges data with the external memory 7022 through the memory 7021 .
  • the memory 702 is specifically used to store the application program code for executing the solution of the present application, and the execution is controlled by the processor 701 . That is, when the electronic device 700 is running, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the application program code stored in the memory 702, and then executes the method described in any of the foregoing embodiments.
  • the memory 702 may be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
  • the processor 701 may be an integrated circuit chip with signal processing capabilities.
  • the above-mentioned processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 700 .
  • the electronic device 700 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the virtual object control method in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the embodiments of the present disclosure also provide a computer program product, where the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the virtual object control method in the above method embodiments; for details, refer to the above method embodiments, which will not be repeated here.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), etc.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the functions are realized in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of the present disclosure is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a virtual object control method and apparatus, an electronic device, and a storage medium. The virtual object control method comprises: acquiring a live broadcast video stream, the live broadcast video stream being generated on the basis of 3D scene information, the 3D scene information being used to generate a 3D scene after rendering, and the 3D scene information including at least one item of virtual character information and at least one item of virtual object information; the virtual character information being used to generate a virtual character after rendering, and the virtual character being driven by control information captured by a motion capture device; sending the live broadcast video stream so as to display a live broadcast picture corresponding to the live broadcast video stream at a user terminal; acquiring bullet screen information sent by the user terminal; when the bullet screen information meets a first preset condition, generating at least one virtual object on the basis of the bullet screen information and the at least one item of virtual object information; and controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character. Embodiments of the present application can improve a user's experience of participating in a live broadcast.

Description

虚拟对象控制方法、装置、电子设备及可读存储介质Virtual object control method, device, electronic device and readable storage medium
本申请要求于2021年10月26日提交中国国家知识产权局、申请号为202111250745.X、发明名称为“虚拟对象控制方法、装置、电子设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application filed with the State Intellectual Property Office of China on October 26, 2021, with the application number 202111250745.X, and the title of the invention is "virtual object control method, device, electronic device, and readable storage medium" , the entire contents of which are incorporated in this application by reference.
技术领域technical field
本申请属于计算机技术领域,具体涉及一种虚拟对象控制方法、装置、电子设备和存储介质。The present application belongs to the field of computer technology, and in particular relates to a virtual object control method, device, electronic equipment and storage medium.
背景技术Background technique
随着计算机技术与网络技术的发展,视频直播成为一种热门的交互方式。越来越多的用户选择通过直播平台来观看视频直播,比如游戏直播、新闻直播等。为了提升直播效果,出现了虚拟主播代替真实主播进行视频直播的方式。With the development of computer technology and network technology, live video broadcasting has become a popular interactive method. More and more users choose to watch live video broadcasts through live broadcast platforms, such as game live broadcasts and news live broadcasts. In order to improve the effect of live broadcasting, virtual anchors have emerged to replace real anchors for live video broadcasting.
在虚拟主播进行直播时,观众用户的参与度在不断提高。观众用户通过弹幕的方式与虚拟主播进行互动,已经越来越频繁。然而,现有的弹幕大都以文字的形式展示,观众用户与虚拟主播之间互动的体验欠佳。When virtual anchors live broadcast, the participation of viewers and users continues to increase. Audience users interact with virtual anchors through barrage, which has become more and more frequent. However, most of the existing bullet screens are displayed in the form of text, and the interaction experience between audience users and virtual anchors is not good.
发明内容Contents of the invention
本公开实施例至少提供一种虚拟对象控制方法、装置、电子设备及存储介质。Embodiments of the present disclosure at least provide a virtual object control method, device, electronic equipment, and storage medium.
第一方面,本公开实施例提供了一种虚拟对象控制方法,应用于游戏平台,包括:In the first aspect, the embodiment of the present disclosure provides a virtual object control method applied to a game platform, including:
获取直播视频流,所述直播视频流基于3D场景信息生成,所述3D场景信息用于渲染后生成3D场景,所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息,所述虚拟角色信息用于渲染后生成虚拟角色,所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动;Obtain a live video stream, the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one virtual character information and at least one virtual object information, and the virtual The character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
发送所述直播视频流,以在用户终端展示与所述直播视频流相应的直播画面;Sending the live video stream to display a live picture corresponding to the live video stream on the user terminal;
获取所述用户终端发送的弹幕信息;Obtaining the barrage information sent by the user terminal;
在所述弹幕信息符合第一预设条件的情况下,基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象;When the bullet chat information meets the first preset condition, at least one virtual object is generated based on the bullet chat information and the at least one virtual object information;
控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。Controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
本公开实施例中,在弹幕信息符合第一预设条件的情况下,基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象。并控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。也即,用户所发的弹幕信息可以代替用户以虚拟对象的形式进入到3D场景中与虚拟角色进行交互。如此,不仅提高了用户在直播过程中的参与度,还提升了用户的交互体验。In the embodiment of the present disclosure, if the bullet chat information meets the first preset condition, at least one virtual object is generated based on the bullet chat information and the at least one virtual object information. And control the at least one virtual object to enter the 3D scene to interact with the at least one virtual character. That is, the barrage information sent by the user can replace the user and enter the 3D scene in the form of a virtual object to interact with the virtual character. In this way, not only the user's participation in the live broadcast process is improved, but also the user's interactive experience is improved.
根据第一方面,在一种可能的实施方式中,所述获取所述用户终端发送的弹幕信息,包括:According to the first aspect, in a possible implementation manner, the acquiring the barrage information sent by the user terminal includes:
通过直播平台获取所述用户终端发送的弹幕信息。Obtain the barrage information sent by the user terminal through the live broadcast platform.
本公开实施例中,通过直播拨平台可以实时获取到用户终端发送的弹幕信息,进而可以基于弹幕信息生成至少一个虚拟对象。In the embodiment of the present disclosure, the bullet chat information sent by the user terminal can be obtained in real time through the live dial platform, and at least one virtual object can be generated based on the bullet chat information.
根据第一方面,在一种可能的实施方式中,所述控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互,包括:According to the first aspect, in a possible implementation manner, the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
控制所述至少一个虚拟对象进入所述3D场景中,并向靠近目标虚拟角色的方向移动直至与所述目标虚拟角色接触;controlling the at least one virtual object to enter the 3D scene, and move in a direction close to the target virtual character until it comes into contact with the target virtual character;
识别所述虚拟对象与所述目标虚拟角色的接触部位,并根据所述接触部位的类型,确定与所述接触部位的类型对应的目标交互行为;identifying a contact part between the virtual object and the target virtual character, and determining a target interaction behavior corresponding to the type of the contact part according to the type of the contact part;
基于所述目标交互行为,控制所述虚拟对象与所述目标虚拟角色进行交互。Based on the target interaction behavior, the virtual object is controlled to interact with the target virtual character.
本公开实施例中,根据虚拟对象与目标虚拟觉色的接触部位的类型,确定与目标交互行为,可以使得目标交互行为与接触部位相匹配,提升了交互行为的观赏体验。In the embodiment of the present disclosure, the interactive behavior with the target is determined according to the type of the contact part between the virtual object and the target virtual color perception, so that the target interactive behavior can be matched with the contact position, and the viewing experience of the interactive behavior is improved.
根据第一方面,在一种可能的实施方式中,所述控制所述至少一个虚拟对象进入所述3D场景中,并向靠近目标虚拟角色的方向移动直至与所述目标虚拟角色接触,包括:According to the first aspect, in a possible implementation manner, the controlling the at least one virtual object to enter the 3D scene and move in a direction close to the target virtual character until it contacts the target virtual character includes:
基于每个虚拟对象所携带的弹幕信息,从所述至少一个虚拟角色中确定与所述每个虚拟对象对应的目标虚拟角色;Determining a target virtual character corresponding to each virtual object from the at least one virtual character based on the barrage information carried by each virtual object;
控制所述每个虚拟对象进入所述3D场景中,并向靠近与所述每个虚拟对象对应的目标虚拟角色的方向移动直至与所述目标虚拟角色接触。Each virtual object is controlled to enter the 3D scene, and move toward a target virtual character corresponding to each virtual object until it contacts the target virtual character.
本公开实施例中,基于每个虚拟对象所携带的弹幕信息,从所述至少一个虚拟角色中确定与所述每个虚拟对象对应的目标虚拟角色,使得每个虚拟对象所接触的目标虚拟角色与用户产生关联,增强了用户在直播过程中的参与感,提升了用户的直播体验。In the embodiment of the present disclosure, based on the barrage information carried by each virtual object, the target virtual character corresponding to each virtual object is determined from the at least one virtual character, so that the target virtual character touched by each virtual object The role is associated with the user, which enhances the user's sense of participation in the live broadcast process and improves the user's live broadcast experience.
根据第一方面,在一种可能的实施方式中,所述接触部位为所述目标虚拟角色的脚部,所述基于所述目标交互行为,控制所述虚拟对象与所述目标虚拟角色进行交互,包括:According to the first aspect, in a possible implementation manner, the contact part is the foot of the target virtual character, and based on the target interaction behavior, controlling the virtual object to interact with the target virtual character ,include:
获取所述目标虚拟角色的脚部的控制信息;Acquiring control information of the feet of the target virtual character;
基于所述控制信息驱动所述目标虚拟角色的脚部运动;driving a foot movement of the target virtual character based on the control information;
根据所述目标虚拟角色的脚部的运动信息,控制所述虚拟对象远离所述目标虚拟角色的移动状态。The moving state of the virtual object away from the target virtual character is controlled according to the motion information of the target virtual character's feet.
本公开实施例中,根据所述目标虚拟角色的脚部的运动信息,控制所述虚拟对象远离所述目标虚拟角色的移动状态,可以使得虚拟对象与目标虚拟角色之间的交互更加逼真,提升用户的直播体验。In the embodiment of the present disclosure, according to the motion information of the target virtual character's feet, the movement state of the virtual object away from the target virtual character is controlled, which can make the interaction between the virtual object and the target virtual character more realistic and improve The user's live broadcast experience.
根据第一方面,在一种可能的实施方式中,所述根据所述目标虚拟角色的脚部的运动信息,控制所述虚拟对象远离所述目标虚拟角色的移动状态,包括:According to the first aspect, in a possible implementation manner, the controlling the moving state of the virtual object away from the target virtual character according to the movement information of the target virtual character's feet includes:
获取所述目标虚拟角色的脚部的运动信息,所述运动信息由控制对象驱动生成;Acquiring motion information of the feet of the target virtual character, the motion information being driven and generated by a control object;
基于所述运动信息,控制所述虚拟对象的移动状态。Based on the motion information, the movement state of the virtual object is controlled.
本公开实施例中,基于目标虚拟角色的脚部的运动信息,控制所述虚拟对象的移动状态,可以使得虚拟对象的移动状态与脚部的运动信息一致,提升了虚拟对象与虚拟角色交互的真实感。In the embodiment of the present disclosure, based on the movement information of the feet of the target virtual character, the movement state of the virtual object is controlled, so that the movement state of the virtual object is consistent with the movement information of the feet, and the interaction between the virtual object and the virtual character is improved. realism.
According to the first aspect, in a possible implementation, the 3D scene information further includes a virtual lens, the moving state includes a moving direction, and the method further includes:
in a case where the moving direction of the virtual object is toward the virtual lens, if the moving state of the virtual object satisfies a second preset condition, acquiring and displaying a preset special effect of colliding with a virtual mirror surface.
In the embodiments of the present disclosure, because the preset special effect of colliding with the virtual mirror surface is acquired and displayed when the moving state of the virtual object satisfies the second preset condition, both the user's viewing experience and the entertainment value of the live broadcast are enhanced.
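The disclosure leaves the "second preset condition" open; one plausible reading, sketched below, is that the object must be heading within a narrow cone of the virtual lens at sufficient speed before the mirror-collision effect plays. The cone half-angle and speed threshold are assumptions.

```python
def should_play_lens_hit(heading_deg, speed, cone_deg=30.0, min_speed=4.0):
    """Decide whether to trigger the virtual-mirror collision effect.
    heading_deg is the object's direction relative to the lens axis
    (0 = straight at the lens); both thresholds are hypothetical."""
    moving_toward_lens = abs(heading_deg) <= cone_deg
    return moving_toward_lens and speed >= min_speed
```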
According to the first aspect, in a possible implementation, the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
acquiring first real-time position information of the at least one virtual character in the 3D scene; and
controlling, based on the first real-time position information, the at least one virtual object to move relative to the at least one virtual character.
In the embodiments of the present disclosure, controlling the at least one virtual object to move relative to the at least one virtual character according to the real-time position of the virtual character in the 3D scene can improve the accuracy of the virtual object's movement relative to the virtual character.
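A minimal sketch of moving relative to the character's first real-time position information: each tick, the object re-reads the character's latest position and advances one step toward it, so the pursuit stays accurate even as the character moves. The step size is an assumption.

```python
import math

def step_toward(obj_pos, char_pos, step=0.5):
    """Advance the virtual object one tick toward the character's latest
    real-time position; snaps when within one step (illustrative)."""
    dx, dy = char_pos[0] - obj_pos[0], char_pos[1] - obj_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:  # close enough: the object reaches the character
        return char_pos
    return (obj_pos[0] + dx / dist * step, obj_pos[1] + dy / dist * step)
```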
According to the first aspect, in a possible implementation, the at least one virtual object comprises a plurality of virtual objects, and the method further includes:
acquiring second real-time position information of the at least one virtual object in the 3D scene; and
controlling, based on the second real-time position information, interaction among the at least one virtual object.
In the embodiments of the present disclosure, interaction among the virtual objects is controlled based on the real-time position information of the virtual objects. In this way, interactive behavior also exists between the virtual objects themselves, enriching the interactive behavior of the virtual objects.
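Using the second real-time position information, one simple way to decide which virtual objects should interact with each other is a pairwise proximity check, sketched below; the interaction radius is an assumption.

```python
import math

def interacting_pairs(positions, radius=1.0):
    """Return index pairs of virtual objects close enough to interact,
    given their real-time (x, y) positions (illustrative)."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            if math.hypot(x1 - x2, y1 - y2) <= radius:
                pairs.append((i, j))
    return pairs
```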
According to the first aspect, in a possible implementation, the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character includes:
determining user resource information of the bullet-screen information carried by each virtual object;
determining a motion state of each virtual object based on the user resource information; and
controlling, based on the motion state, each virtual object to enter the 3D scene to interact with the at least one virtual character.
In the embodiments of the present disclosure, the motion state of a virtual object is tied to the user's resource information, that is, the virtual object is associated with the user, which enhances the user's sense of participation and enjoyment during the live broadcast.
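One way to derive the motion state from the user resource information, as this implementation describes, is a tiered mapping from the resource amount (e.g., the gift value attached to the user's bullet-screen comment) to an entrance motion; the tiers and thresholds below are purely illustrative, not prescribed by the disclosure.

```python
def motion_state_for(gift_value):
    """Map a user's resource amount to a motion-state tier for the
    virtual object carrying that user's comment (hypothetical tiers)."""
    if gift_value >= 100:
        return "fly"   # most eye-catching entrance
    if gift_value >= 10:
        return "jump"
    return "walk"
```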
In a second aspect, an embodiment of the present disclosure provides a virtual object control method applied to a live-streaming platform, including:
acquiring a live video stream from a game platform, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
sending the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
acquiring bullet-screen information sent by the user terminal; and
sending the bullet-screen information to the game platform, so that the game platform generates at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
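The steps above can be sketched as a small relay on the live-streaming-platform side: it fans the stream out to registered terminals and forwards bullet-screen messages back to the game platform. All names, data shapes, and transports here are assumptions for illustration only.

```python
class LivePlatform:
    """Minimal sketch of the second-aspect flow (illustrative)."""

    def __init__(self, game_platform):
        self.game_platform = game_platform  # callable accepting a danmaku
        self.terminals = []

    def register(self, terminal):
        self.terminals.append(terminal)

    def push_frame(self, frame):
        # Distribute each live-stream frame to every user terminal.
        for t in self.terminals:
            t.append(frame)

    def on_danmaku(self, danmaku):
        # Collect a bullet-screen comment and hand it to the game
        # platform, which turns it into a virtual object.
        return self.game_platform(danmaku)
```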
According to the second aspect, in a possible implementation, the method further includes:
receiving bullet-screen processing result information sent by the game platform; and
in a case where the bullet-screen information is processed successfully, deleting the bullet-screen information so that it is not displayed in the live picture, where successful processing of the bullet-screen information means that the bullet-screen information has been combined with the at least one piece of virtual object information to generate the at least one virtual object.
In the embodiments of the present disclosure, when the bullet-screen information is processed successfully, it is deleted and not displayed in the live picture. This prevents the displayed bullet-screen text from duplicating the bullet-screen information carried by the generated virtual object, improving the user's live-streaming experience.
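The deduplication rule in this implementation, dropping a comment from the on-screen overlay once it has been turned into a virtual object, can be expressed as a simple filter; the boolean flags below stand in for the processing-result information and are an assumption about its shape.

```python
def filter_danmaku_for_display(danmaku_list, results):
    """Keep only comments whose processing failed; successfully
    processed comments already appear as virtual objects, so showing
    their text too would duplicate them (illustrative)."""
    return [d for d, ok in zip(danmaku_list, results) if not ok]
```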
In a third aspect, an embodiment of the present disclosure provides a virtual object control apparatus, including:
a first acquiring module, configured to acquire a live video stream, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
a first sending module, configured to send the live video stream, so as to display a live picture corresponding to the live video stream on a user terminal;
a second acquiring module, configured to acquire bullet-screen information sent by the user terminal;
a first generating module, configured to generate, in a case where the bullet-screen information meets a first preset condition, at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information; and
an interaction module, configured to control the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
According to the third aspect, in a possible implementation, the second acquiring module is specifically configured to:
acquire the bullet-screen information sent by the user terminal through a live-streaming platform.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
control the at least one virtual object to enter the 3D scene and move in a direction approaching a target virtual character until it comes into contact with the target virtual character;
identify a contact part between the virtual object and the target virtual character, and determine, according to a type of the contact part, a target interaction behavior corresponding to the type of the contact part; and
control, based on the target interaction behavior, the virtual object to interact with the target virtual character.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
determine, based on the bullet-screen information carried by each virtual object, a target virtual character corresponding to each virtual object from the at least one virtual character; and
control each virtual object to enter the 3D scene and move toward the target virtual character corresponding to that virtual object until it comes into contact with the target virtual character.
According to the third aspect, in a possible implementation, the contact part is a foot of the target virtual character, and the interaction module is specifically configured to:
acquire control information of the feet of the target virtual character;
drive foot motion of the target virtual character based on the control information; and
control, according to motion information of the feet of the target virtual character, the moving state of the virtual object away from the target virtual character.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
acquire the motion information of the feet of the target virtual character, the motion information being generated under the drive of a control object; and
control the moving state of the virtual object based on the motion information.
According to the third aspect, in a possible implementation, the 3D scene information further includes a virtual lens, and the interaction module is specifically configured to:
in a case where the moving direction of the virtual object is toward the virtual lens, if the moving state of the virtual object satisfies a second preset condition, acquire and display a preset special effect of colliding with a virtual mirror surface.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
acquire first real-time position information of the at least one virtual character in the 3D scene; and
control, based on the first real-time position information, the at least one virtual object to move relative to the at least one virtual character.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
acquire second real-time position information of the at least one virtual object in the 3D scene; and
control, based on the second real-time position information, interaction among the at least one virtual object.
According to the third aspect, in a possible implementation, the interaction module is specifically configured to:
determine user resource information of the bullet-screen information carried by each virtual object;
determine a motion state of each virtual object based on the user resource information; and
control, based on the motion state, each virtual object to enter the 3D scene to interact with the at least one virtual character.
In a fourth aspect, an embodiment of the present disclosure provides a virtual object control apparatus, including:
a third acquiring module, configured to acquire a live video stream from a game platform, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
a second sending module, configured to send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
a fourth acquiring module, configured to acquire bullet-screen information sent by the user terminal; and
a third sending module, configured to send the bullet-screen information to the game platform, so that the game platform generates at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
According to the fourth aspect, in a possible implementation, the virtual object control apparatus further includes:
an information receiving module, configured to receive bullet-screen processing result information sent by the game platform; and
a bullet-screen processing module, configured to delete, in a case where the bullet-screen information is processed successfully, the bullet-screen information so that it is not displayed in the live picture, where successful processing of the bullet-screen information means that the bullet-screen information has been combined with the at least one piece of virtual object information to generate the at least one virtual object.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the virtual object control method according to the first aspect or the second aspect is performed.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored. When the computer program is run by a processor, the virtual object control method according to the first aspect or the second aspect is performed.
To make the above objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the embodiments of the present invention, they serve to explain the present invention and do not limit it. In the drawings:
FIG. 1 is a schematic diagram of an execution body of a virtual object control method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a first virtual object control method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of sending a live video stream provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a generated virtual object provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of at least one virtual object entering a 3D scene provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart of a first method for controlling a virtual object to interact with a virtual character provided by an embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for controlling the moving state of a virtual object provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a virtual character kicking away a virtual object provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the effect of a virtual object colliding with a virtual mirror surface provided by an embodiment of the present disclosure;
FIG. 10 is a flowchart of a second method for controlling a virtual object to interact with a virtual character provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of interaction among multiple virtual objects provided by an embodiment of the present disclosure;
FIG. 12 is a flowchart of a third method for controlling a virtual object to interact with a virtual character provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a first motion state of a virtual object provided by an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of a second motion state of a virtual object provided by an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of a third motion state of a virtual object provided by an embodiment of the present disclosure;
FIG. 16 is a flowchart of another virtual object control method provided by an embodiment of the present disclosure;
FIG. 17 is a schematic structural diagram of a virtual object control apparatus provided by an embodiment of the present disclosure;
FIG. 18 is a schematic structural diagram of another virtual object control apparatus provided by an embodiment of the present disclosure;
FIG. 19 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not require further definition or explanation in subsequent figures.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
With the development of computer and network technology, live video streaming has become a popular form of interaction. More and more users choose to watch live video, such as game live streams and news live streams, through live-streaming platforms. To improve the live-streaming effect, virtual streamers have emerged to conduct live video broadcasts in place of real streamers.
Research has found that viewer participation keeps rising when a virtual streamer is live, and viewers interact with the virtual streamer through bullet-screen comments ever more frequently. However, most existing bullet-screen comments are displayed as plain text, so the interactive experience between viewers and the virtual streamer is poor.
A virtual object control method provided by the present disclosure includes: acquiring a live video stream, where the live video stream is generated based on 3D scene information; the 3D scene information is used to generate a 3D scene after rendering; the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device; sending the live video stream, so as to display a live picture corresponding to the live video stream on a user terminal; acquiring bullet-screen information sent by the user terminal; in a case where the bullet-screen information meets a first preset condition, generating at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information; and controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
In the embodiments of the present disclosure, when the bullet-screen information meets the first preset condition, at least one virtual object is generated based on the bullet-screen information and the at least one piece of virtual object information, and the at least one virtual object is controlled to enter the 3D scene to interact with the at least one virtual character. In other words, the bullet-screen information sent by a user can enter the 3D scene in the form of a virtual object, on the user's behalf, to interact with the virtual character. This not only increases the user's participation during the live broadcast but also improves the user's interactive experience.
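End to end, the disclosed flow can be sketched as: check each bullet-screen comment against the first preset condition and, when it passes, combine it with a virtual-object template to produce an object that will enter the 3D scene. The data shapes and round-robin template selection below are assumptions for illustration.

```python
def spawn_objects(danmaku_list, templates, condition):
    """Combine qualifying comments with virtual-object templates
    (illustrative sketch of the disclosed generation step)."""
    objects = []
    for i, d in enumerate(danmaku_list):
        if condition(d):                            # first preset condition
            template = templates[i % len(templates)]  # pick a template
            objects.append({"model": template, "danmaku": d})
    return objects
```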
Referring to FIG. 1, which is a schematic diagram of an execution body of the virtual object control method provided by an embodiment of the present disclosure, the method is executed by an electronic device 100, where the electronic device 100 may include a terminal and a server. For example, the method may be applied to a terminal, which may be the smartphone 10, desktop computer 20, or notebook computer 30 shown in FIG. 1, or a smart speaker, smart watch, tablet computer, or the like not shown in FIG. 1, without limitation. The method may also be applied to the server 40, or to an implementation environment composed of the terminal and the server 40. The server 40 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In other implementations, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, or the like. For example, the AR device may be a mobile phone or tablet computer with an AR function, or AR glasses, which is not limited here.
It should be noted that, in some implementations, the server 40 may communicate with the smartphone 10, the desktop computer 20, and the notebook computer 30 through a network 50. The network 50 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
Referring to FIG. 2, which is a flowchart of the first virtual object control method provided by an embodiment of the present disclosure, the first virtual object control method may be applied to a server of a game platform. In some possible implementations, the first virtual object control method may be implemented by a processor invoking computer-readable instructions stored in a memory. The virtual object control method includes the following steps S101 to S105:
S101: Acquire a live video stream, where the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information and at least one piece of virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device.
Exemplarily, one form of the virtual character works as follows: motion capture of a performer (the person behind the character) yields control signals that drive the virtual character's movements in the game engine, while the performer's voice is captured and merged with the virtual character's picture to generate video data.
The motion capture device includes at least one of a body motion capture device worn on the performer's body (e.g., a suit), a hand motion capture device worn on the performer's hands (e.g., gloves), a facial motion capture device (e.g., a camera), and a sound capture device (e.g., a microphone or throat microphone).
The live video stream is the data stream required for continuous live video broadcasting. It can be understood that a video usually consists of pictures and/or sound; pictures belong to video frames, and sound belongs to audio frames. In the embodiments of the present disclosure, acquiring the live video stream may be a process of directly acquiring an already generated live video stream, or a process of generating the live video stream based on the 3D scene information, without limitation, as long as the live video stream is ultimately obtained.
The 3D rendering environment is a 3D game engine running on the electronic device, capable of generating image information from one or more viewpoints based on the data to be rendered. The virtual character information is a character model that exists in the game engine and can generate a corresponding virtual character after rendering. Likewise, the virtual object information can generate a corresponding virtual object after rendering. The difference is that the virtual character is driven by control information captured by the motion capture device, whereas the virtual object needs no such input and can be controlled by the system. In the embodiments of the present disclosure, the virtual character may include a virtual streamer or a digital human, and the virtual object may be a virtual animal figure, a virtual cartoon figure, or the like.
The 3D scene information may run on a computer CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and includes meshed model information and map texture information. Correspondingly, as an example, the virtual character information and the virtual object information include, but are not limited to, meshed model data, voxel data, and map texture data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or a combination thereof. In the embodiments of the present disclosure, the mesh is a triangular mesh.
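For illustration, the "meshed model data" can be pictured as vertex positions plus triangles indexing into them (texture data omitted); the helper below validates such a record. This is a generic triangle-mesh layout, not a format prescribed by the disclosure.

```python
def make_triangle_mesh(vertices, triangles):
    """Package triangle-mesh model data: vertices are (x, y, z) tuples,
    triangles are index triples into the vertex list (illustrative)."""
    for tri in triangles:
        if len(tri) != 3 or not all(0 <= i < len(vertices) for i in tri):
            raise ValueError("invalid triangle: %r" % (tri,))
    return {"vertices": list(vertices), "triangles": list(triangles)}
```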
S102: Send the live video stream, so that a live picture corresponding to the live video stream is displayed on the user terminal.

Exemplarily, as shown in FIG. 3, after obtaining the live video stream, the server 40 of the game platform can send the live video stream to the live streaming platform 200 in real time, and the live streaming platform 200 then sends the live video stream to a plurality of user terminals 300 for live video broadcasting.

S103: Obtain the bullet-screen information sent by the user terminal.

Bullet-screen comments (danmaku) are commentary subtitles that pop up while a live video is being watched. It can be understood that during the live broadcast, a user of the user terminal (for example, a viewer) can send bullet-screen comments through the user terminal to interact with the virtual character. In the embodiments of the present disclosure, the bullet-screen information may be the specific content of a comment, or may be the identifier of the user who sent it (for example, a user account or a user nickname).

Referring again to FIG. 3, during the live broadcast, the live streaming platform 200 can obtain the bullet-screen information sent by the user terminal 100 in real time and forward it to the game platform. In this way, the game platform can obtain the bullet-screen information sent by the user terminal 100 through the live streaming platform 200.
S104: If the bullet-screen information meets a first preset condition, generate at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information.

As shown in FIG. 4, when the bullet-screen information meets the first preset condition, the bullet-screen information can be combined with at least one piece of virtual object information to generate at least one virtual object B, so that the object B carries the content of the bullet-screen information. In FIG. 4, one virtual object B carries the bullet-screen content "I am Captain 2", and another virtual object B carries the content "I don't want to do laundry".

The first preset condition can be set according to actual requirements. For example, the first preset condition may be that the content of the bullet-screen information matches preset content, or that the user identifier carried by the bullet-screen information meets a preset requirement, for example that the user corresponding to the identifier follows the virtual character. No specific limitation is made here.
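A minimal sketch of how the first preset condition might be evaluated; both example checks (content matches preset content, or the sender follows the character) come from the text above, but the function signature and data shapes are illustrative assumptions:

```python
def meets_first_condition(danmaku, preset_contents, followers):
    """Hypothetical check of the first preset condition: the comment's content
    matches preset content, or the sending user follows the virtual character."""
    return danmaku["content"] in preset_contents or danmaku["user_id"] in followers

preset_contents = {"I am Captain 2"}
followers = {"user_42"}  # users with the preset "follows the character" relation
```

Only comments that pass this gate would be combined with virtual object information to spawn an object.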
S105: Control the at least one virtual object to enter the 3D scene and interact with the at least one virtual character.

As shown in FIG. 5, after the at least one virtual object B is generated, the at least one virtual object B can be controlled to enter the 3D scene and interact with at least one virtual character A. The interaction includes, but is not limited to, action interaction, behavior interaction, and language interaction.

As shown in FIG. 6, for the above step S105, controlling the at least one virtual object to enter the 3D scene and interact with the at least one virtual character may include the following steps S1051 to S1053:
S1051: Control the at least one virtual object to enter the 3D scene and move toward a target virtual character until it comes into contact with the target virtual character.

Exemplarily, referring again to FIG. 5, after the at least one virtual object B is generated, the at least one virtual object B is controlled to move toward the target virtual character A until it contacts the target virtual character A. Of course, in other embodiments, the virtual object B may instead be controlled, after entering the 3D scene, to interact with the target virtual character A within a preset distance from the target virtual character A, for example by interacting with the target virtual character A without contact.

It should be noted that if there is only one virtual character in the 3D scene, that virtual character is the target virtual character. If there are multiple virtual characters in the 3D scene, the target virtual character can be determined from the multiple virtual characters, specifically in the following manners (1) and (2):

(1) Based on the bullet-screen information carried by each virtual object, determine, from the at least one virtual character, a target virtual character corresponding to that virtual object.

(2) Control each virtual object to enter the 3D scene and move toward the target virtual character corresponding to that virtual object until it comes into contact with the target virtual character.

Exemplarily, according to the user identifier carried by a bullet-screen comment, it is possible to identify which virtual character has a preset association with the user who sent the comment, and to determine the virtual character associated with that user as the target virtual character. For example, if the user who sent the comment follows only one of the virtual characters, or the user's name contains the name of one of the virtual characters, that virtual character is determined as the target virtual character. In this way, the user's interest in the interaction can be enhanced.

In addition, semantic recognition can be performed on the bullet-screen comment, and the target virtual character can be determined from the multiple virtual characters according to the recognized semantics. For example, the multiple virtual characters can be classified according to their personalities or the skills they are good at, so that each virtual character obtains a category label; the category label to which the comment belongs is then determined according to the recognized semantic content, and the virtual character corresponding to that category label is determined as the target virtual character. For example, suppose the category label of one virtual character is "entertainment king" and that of another is "model worker". If semantic recognition finds that the comment content is "I don't want to do laundry" (as shown in FIG. 4), the comment is determined to be related to labor, and the virtual character labeled "model worker" is therefore determined as the target virtual character. In this way, the user can send different bullet-screen comments during the interaction and thereby switch between different target virtual characters, which makes the interaction more engaging.
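The label-based selection described above can be sketched with a simple keyword lookup standing in for the semantic-recognition step; the character names, labels, and keyword table are all hypothetical:

```python
def pick_target_character(comment, labels, keyword_map):
    """Hypothetical stand-in for semantic recognition: map the comment text to
    a category label via keywords, then return the character carrying that
    label; fall back to the first character if nothing matches."""
    for keyword, category in keyword_map.items():
        if keyword in comment:
            for character, label in labels.items():
                if label == category:
                    return character
    return next(iter(labels))

labels = {"A1": "entertainment king", "A2": "model worker"}
keyword_map = {"laundry": "model worker", "sing": "entertainment king"}
```

A production system would replace the keyword table with an actual semantic classifier, but the label-to-character mapping would work the same way.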
S1052: Identify the contact part between the virtual object and the target virtual character, and determine, according to the type of the contact part, a target interaction behavior corresponding to that type.

Exemplarily, different interaction behaviors can be set in advance for different contact types. For example, if the contact part is the leg of the target virtual character, a "hug" interaction with the target virtual character can be set; if the contact part is the foot, a "kick away" interaction can be set; and if the contact part is the arm, a "spin around the arm" interaction can be set. Therefore, once the type of the contact part is determined, the target interaction behavior corresponding to that contact part can be determined.

S1053: Based on the target interaction behavior, control the virtual object to interact with the target virtual character.

It can be understood that once the target interaction behavior is determined, the virtual object can be controlled to interact with the target virtual character based on it. In some implementations, the contact part between the virtual object and the target virtual character is the foot, and the target interaction behavior corresponding to the foot is "kick away", that is, a behavior that moves the virtual object away from the virtual character's foot. Accordingly, as shown in FIG. 7, for step S1053, controlling the virtual object to interact with the target virtual character based on the target interaction behavior may include the following steps S10531 to S10533:
S10531: Obtain control information for the feet of the target virtual character.

S10532: Drive the foot movement of the target virtual character based on the control information.

S10533: Control the moving state of the virtual object away from the target virtual character according to the motion information of the target virtual character's feet.

Exemplarily, as shown in FIG. 8, the control information for the feet of the target virtual character A can be obtained through a motion capture device worn on the actor's feet, and the feet of the target virtual character A are driven based on this control information. The moving state of the virtual object B away from the target virtual character can then be controlled according to the motion information of the target virtual character A's feet. In this way, the interaction between the virtual object and the target virtual character becomes more realistic, improving the user's live-viewing experience.

Specifically, the motion information of the target virtual character A's feet can be obtained, and the moving state of the virtual object B can be controlled based on it. The motion information of the virtual character A's feet is generated under the drive of a control object (the actor) and includes information such as the direction, speed, and acceleration of the foot's movement. Based on this motion information, the moving direction, speed, and acceleration of the virtual object can be determined, and the moving state of the virtual object B can thus be controlled. As shown in FIG. 8, because the motion information of the target virtual character A's feet differs for different virtual objects B, the moving states of the different virtual objects B also differ; that is, different virtual objects B are "kicked away" to different degrees and in different directions.

In some other implementations, the moving state of the virtual object can also be matched to the texture of the clothing at the contact part of the target virtual character. For example, if the target virtual character's feet are wearing furry shoes, the virtual object can be controlled to leave the target virtual character in a slower first moving state; if the feet are wearing smooth leather boots, the virtual object can be controlled to leave in a faster second moving state. In this way, the moving state of the virtual object is consistent with what the user perceives, which helps enhance the user's sense of immersion and further improves the live-viewing experience.
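A minimal sketch of the "kick away" mapping described above, turning the foot's motion information (direction, speed, acceleration) into the object's moving state; the exact formula and the texture scale factor (slower for furry shoes, faster for smooth leather boots) are illustrative assumptions, not the patent's method:

```python
def kicked_state(foot_direction, foot_speed, foot_acceleration, texture_scale=1.0):
    """Hypothetical mapping: the object flies in the foot's direction, at a
    speed derived from the foot's speed and acceleration, scaled by an assumed
    texture factor (< 1.0 for furry shoes, > 1.0 for smooth leather boots)."""
    speed = (foot_speed + 0.5 * foot_acceleration) * texture_scale
    return {"direction": foot_direction, "speed": speed}

# Two kicks with different foot motion give different "kicked away" states.
soft_kick = kicked_state((1.0, 0.0, 0.0), 2.0, 1.0, texture_scale=0.5)
hard_kick = kicked_state((0.0, 1.0, 0.0), 4.0, 2.0, texture_scale=1.5)
```

This reproduces the behavior in FIG. 8: different foot motion information yields different degrees and directions of "kicking away".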
In some implementations, the 3D scene information further includes a virtual lens. When the virtual object moves toward the virtual lens, if its moving state meets a second preset condition, a preset special effect of colliding with the virtual lens surface is obtained and displayed. This not only enhances the user's viewing experience during the live broadcast but also makes the broadcast more entertaining.

The second preset condition can be set according to actual requirements. For example, in some implementations it may be that the moving speed of the virtual object is greater than a preset speed, or that its moving acceleration is greater than a preset acceleration; no limitation is made here. In addition, the preset collision special effect can be determined according to the attributes of the virtual object, where the attributes include, but are not limited to, the material and type of the virtual object. For example, if the virtual object is a fragile cup, the collision effect may be one of shattering glass; if the virtual object is a non-fragile small animal, the effect may be one in which the virtual object deforms after hitting the virtual lens surface. As shown in FIG. 9, after colliding with the virtual lens surface, the virtual object B is flattened against it and can slide down along it, that is, it presents the special effect of being flattened by the impact and then falling.
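The condition check and the attribute-based effect choice above can be sketched as follows; the threshold form (speed above a preset speed) is one of the two examples in the text, and the material names and effect names are assumptions:

```python
def lens_collision_effect(moving_toward_lens, speed, preset_speed, material):
    """Hypothetical check of the second preset condition plus attribute-based
    effect choice: fragile objects shatter, others flatten and slide down."""
    if not (moving_toward_lens and speed > preset_speed):
        return None  # no collision effect is shown
    return "shatter" if material == "fragile" else "flatten_and_slide"
```

The acceleration-based variant of the condition would simply swap the threshold comparison.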
As shown in FIG. 10, in some implementations, for the above step S105, controlling the at least one virtual object to enter the 3D scene and interact with the at least one virtual character may include the following steps S105a and S105b:

S105a: Obtain first real-time position information of the at least one virtual character in the 3D scene.

S105b: Control the at least one virtual object to move relative to the at least one virtual character based on the first real-time position information.

Exemplarily, referring again to FIG. 5, after the at least one virtual object B enters the 3D scene, the first real-time position information of the at least one virtual character in the 3D scene needs to be obtained, and the at least one virtual object B is controlled to move relative to the at least one virtual character A based on it. In this way, the movement of the virtual object B relative to the virtual character A can be made more precise.

In some implementations, when there are multiple virtual objects B, as shown in FIG. 11, second real-time position information of the multiple virtual objects B in the 3D scene can also be obtained, and the interaction among the multiple virtual objects B is controlled based on it. For example, the multiple virtual objects B can be controlled to approach one another and interact as a group. This enriches the interactive behaviors of the virtual objects.
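The position-based movement in steps S105a and S105b can be sketched as a per-frame update that moves an object toward the character's current position; the step size and 3-tuple position representation are illustrative assumptions:

```python
import math

def step_toward(obj_pos, char_pos, step=0.1):
    """Hypothetical per-frame update: move the object toward the character's
    first real-time position by at most `step` units, stopping on arrival."""
    delta = [c - o for c, o in zip(char_pos, obj_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist <= step:
        return tuple(char_pos)
    return tuple(o + step * d / dist for o, d in zip(obj_pos, delta))

pos = step_toward((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), step=0.1)
```

Because the character's position is re-read every frame, the object keeps tracking the character even while the character is being driven by motion capture.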
In some other implementations, as shown in FIG. 12, for the above step S105, controlling the at least one virtual object to enter the 3D scene and interact with the at least one virtual character may further include the following steps S105m to S105k:

S105m: Determine the user resource information of the bullet-screen information carried by each virtual object.

S105n: Determine the motion state of each virtual object based on the user resource information.

S105k: Based on the motion state, control each virtual object to enter the 3D scene and interact with the at least one virtual character.

Exemplarily, referring to FIG. 13, FIG. 14 and FIG. 15, each virtual object B is generated from different bullet-screen information, but different comments may well have been sent by the same user. Therefore, to make participating in the live broadcast more engaging, the user resource information of the bullet-screen information carried by each virtual object B can be determined, and the motion state of each virtual object can be determined based on it. The user resource information may be, for example, the user's "like" information or the number of interactions with the virtual character. Concretely, the virtual object corresponding to a user who has "liked" the stream may move faster, while the virtual object corresponding to a user who has not may stumble after running a few steps.

For example, if the user has abundant resource information, the virtual object B may enter the 3D scene in the light and cheerful motion state of FIG. 13 to interact with the at least one virtual character; if the user's resource information is moderate, the virtual object B may enter in the normal, steady motion state of FIG. 14; and if the user's resource information is sparse, the virtual object B may enter in the rather clumsy motion state of FIG. 15. Of course, the motion states in FIG. 13 to FIG. 15 are merely illustrative; in other implementations, other motion states, or combinations of several different states, may be used. In this way, the motion state of the virtual object is tied to the user's resource information, that is, the virtual object is associated with the user, which improves the user's sense of participation and enjoyment during the live broadcast.
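The three-tier mapping from user resource information to motion state described above can be sketched as a threshold function; the score scale and thresholds are illustrative assumptions:

```python
def motion_state(resource_score, high=10, low=3):
    """Hypothetical tiering of the user resource information (likes,
    interaction counts) into the three example motion states from the text."""
    if resource_score >= high:
        return "light_and_cheerful"  # cf. FIG. 13
    if resource_score >= low:
        return "steady"              # cf. FIG. 14
    return "clumsy"                  # cf. FIG. 15
```

Combined states would simply return a set or sequence of these labels instead of a single one.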
Referring to FIG. 16, which is a flowchart of a second virtual object control method provided by an embodiment of the present disclosure, the second virtual object control method can be applied to a live streaming platform. In some possible implementations, the second virtual object control method can be implemented by a processor invoking computer-readable instructions stored in a memory. The method includes the following steps S201 to S205:

S201: Obtain a live video stream through a game platform, where the live video stream is generated based on 3D scene information; the 3D scene information is used to generate a 3D scene after rendering and contains at least one piece of virtual character information and at least one piece of virtual object information; the virtual character information is used to generate a virtual character after rendering; and the virtual character is driven by control information captured by a motion capture device.

Step S201 is similar to step S101 described above and is not repeated here.
S202: Send the live video stream to at least one user terminal, so that a live picture corresponding to the live video stream is displayed on the user terminal.

Step S202 is similar to step S102 described above and is not repeated here.

S203: Obtain the bullet-screen information sent by the user terminal.

Step S203 is similar to step S103 described above and is not repeated here.

S204: Send the bullet-screen information to the game platform, so that the game platform generates at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information, and controls the at least one virtual object to enter the 3D scene and interact with the at least one virtual character.

Step S204 is similar to steps S104 and S105 described above and is not repeated here.

In some implementations, the live streaming platform also receives bullet-screen processing result information sent by the game platform. If the bullet-screen information has been processed successfully, the bullet-screen information is deleted and not displayed in the live picture. Here, successful processing means that the bullet-screen information has been combined with the at least one piece of virtual object information and the at least one virtual object has been generated. In this way, duplication between the displayed bullet-screen information and the content carried by the generated virtual object can be avoided, improving the user's live-viewing experience.

Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
Based on the same technical concept, embodiments of the present disclosure also provide a virtual object control apparatus corresponding to the virtual object control method. Since the problem-solving principle of the apparatus in the embodiments of the present disclosure is similar to that of the above virtual object control method, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.

Referring to FIG. 17, which is a schematic diagram of a virtual object control apparatus 500 provided by an embodiment of the present disclosure, the apparatus includes:

a first obtaining module 501, configured to obtain a live video stream, where the live video stream is generated based on 3D scene information; the 3D scene information is used to generate a 3D scene after rendering and contains at least one piece of virtual character information and at least one piece of virtual object information; the virtual character information is used to generate a virtual character after rendering; and the virtual character is driven by control information captured by a motion capture device;

a first sending module 502, configured to send the live video stream, so that a live picture corresponding to the live video stream is displayed on a user terminal;

a second obtaining module 503, configured to obtain the bullet-screen information sent by the user terminal;

a first generating module 504, configured to generate at least one virtual object based on the bullet-screen information and the at least one piece of virtual object information when the bullet-screen information meets a first preset condition; and

an interaction module 505, configured to control the at least one virtual object to enter the 3D scene and interact with the at least one virtual character.
In a possible implementation, the second obtaining module 503 is specifically configured to:

obtain the bullet-screen information sent by the user terminal through a live streaming platform.

In a possible implementation, the interaction module 505 is specifically configured to:

control the at least one virtual object to enter the 3D scene and move toward a target virtual character until it comes into contact with the target virtual character;

identify the contact part between the virtual object and the target virtual character, and determine, according to the type of the contact part, a target interaction behavior corresponding to that type; and

control the virtual object to interact with the target virtual character based on the target interaction behavior.

In a possible implementation, the interaction module 505 is specifically configured to:

determine, based on the bullet-screen information carried by each virtual object, a target virtual character corresponding to that virtual object from the at least one virtual character; and

control each virtual object to enter the 3D scene and move toward the target virtual character corresponding to it until contact is made.

In a possible implementation, the contact part is the foot of the target virtual character, and the interaction module 505 is specifically configured to:

obtain control information for the feet of the target virtual character;

drive the foot movement of the target virtual character based on the control information; and

control the moving state of the virtual object away from the target virtual character according to the motion information of the target virtual character's feet.

In a possible implementation, the interaction module 505 is specifically configured to:

obtain the motion information of the target virtual character's feet, where the motion information is generated under the drive of a control object; and

control the moving state of the virtual object based on the motion information.

In a possible implementation, the 3D scene information further includes a virtual lens, and the interaction module 505 is specifically configured to:

when the virtual object moves toward the virtual lens, if the moving state of the virtual object meets a second preset condition, obtain and display a preset special effect of colliding with the virtual lens surface.

In a possible implementation, the interaction module 505 is specifically configured to:

obtain first real-time position information of the at least one virtual character in the 3D scene; and

control the at least one virtual object to move relative to the at least one virtual character based on the first real-time position information.

In a possible implementation, the interaction module 505 is specifically configured to:

obtain second real-time position information of the at least one virtual object in the 3D scene; and

control interaction among the at least one virtual object based on the second real-time position information.

In a possible implementation, the interaction module 505 is specifically configured to:

determine the user resource information of the bullet-screen information carried by each virtual object;

determine the motion state of each virtual object based on the user resource information; and

based on the motion state, control each virtual object to enter the 3D scene and interact with the at least one virtual character.
参见图18所示,为本公开实施例提供的一种虚拟对象控制装置600的示意图,所述装置包括:Referring to FIG. 18 , which is a schematic diagram of a virtual object control device 600 provided by an embodiment of the present disclosure, the device includes:
第三获取模块601,用于通过游戏平台获取直播视频流,所述直播视频流基于3D场景信息生成,所述3D场景信息用于渲染后生成3D场景,所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息,所述虚拟角色信息用于渲染后生成虚拟角色,所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动;The third acquiring module 601 is configured to acquire a live video stream through a game platform, the live video stream is generated based on 3D scene information, and the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
第二发送模块602,用于将所述直播视频流发送到至少一个用户终端,以在所述用户终端展示与所述直播视频流相应的直播画面;The second sending module 602 is configured to send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
第四获取模块603,用于获取所述用户终端发送的弹幕信息;A fourth obtaining module 603, configured to obtain the barrage information sent by the user terminal;
第三发送模块604，用于将所述弹幕信息发送至游戏平台，以使得所述游戏平台基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象，并控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。The third sending module 604 is configured to send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
在一种可能的实施方式中,所述虚拟对象控制装置600还包括:In a possible implementation manner, the virtual object control device 600 further includes:
信息接收模块605,用于接收所述游戏平台发送的弹幕处理结果信息;An information receiving module 605, configured to receive the barrage processing result information sent by the game platform;
弹幕处理模块606，在所述弹幕信息处理成功的情况下，将所述弹幕信息删除，不展示于所述直播画面中；其中，所述弹幕信息处理成功是指，所述弹幕信息与所述至少一个虚拟对象信息结合，并生成了所述至少一个虚拟对象。The barrage processing module 606 is configured to, in the case that the barrage information is processed successfully, delete the barrage information so that it is not displayed in the live picture; here, successful processing of the barrage information means that the barrage information has been combined with the at least one virtual object information to generate the at least one virtual object.
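The split of responsibilities between modules 605 and 606 can be sketched as follows. This is a minimal illustrative Python sketch, not code from the disclosure; the function name, the message shape, and the `status`/`barrage_id` fields are all assumptions:

```python
def handle_processing_result(display_queue, result):
    """Module 606 behaviour: once the game platform reports that a barrage
    was combined with virtual object info, drop it from the on-screen queue."""
    if result.get("status") == "success":
        # A successfully processed barrage has become a virtual object in the
        # 3D scene, so it is no longer shown as an ordinary on-screen comment.
        return [b for b in display_queue if b["id"] != result["barrage_id"]]
    # Unprocessed barrages keep being displayed normally.
    return display_queue

queue = [{"id": 1, "text": "hello"}, {"id": 2, "text": "spawn a cat"}]
queue = handle_processing_result(queue, {"status": "success", "barrage_id": 2})
print([b["id"] for b in queue])  # barrage 2 was consumed; only 1 remains
```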
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。For the description of the processing flow of each module in the device and the interaction flow between the modules, reference may be made to the relevant description in the above method embodiment, and details will not be described here.
基于同一技术构思，本公开实施例还提供了一种电子设备。参照图19所示，为本公开实施例提供的电子设备700的结构示意图，包括处理器701、存储器702、和总线703。其中，存储器702用于存储执行指令，包括内存7021和外部存储器7022；这里的内存7021也称内存储器，用于暂时存放处理器701中的运算数据，以及与硬盘等外部存储器7022交换的数据，处理器701通过内存7021与外部存储器7022进行数据交换。Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. FIG. 19 is a schematic structural diagram of an electronic device 700 provided by an embodiment of the present disclosure, which includes a processor 701, a memory 702, and a bus 703. The memory 702 is used to store execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 is used to temporarily store operation data in the processor 701 and data exchanged with the external memory 7022, such as a hard disk, and the processor 701 exchanges data with the external memory 7022 through the internal memory 7021.
本申请实施例中,存储器702具体用于存储执行本申请方案的应用程序代码,并由处理器701来控制执行。也即,当电子设备700运行时,处理器701与存储器702之间通过总线703通信,使得处理器701执行存储器702中存储的应用程序代码,进而执行前述任一实施例中所述的方法。In the embodiment of the present application, the memory 702 is specifically used to store the application program code for executing the solution of the present application, and the execution is controlled by the processor 701 . That is, when the electronic device 700 is running, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the application program code stored in the memory 702, and then executes the method described in any of the foregoing embodiments.
其中，存储器702可以是，但不限于，随机存取存储器(Random Access Memory，RAM)，只读存储器(Read Only Memory，ROM)，可编程只读存储器(Programmable Read-Only Memory，PROM)，可擦除只读存储器(Erasable Programmable Read-Only Memory，EPROM)，电可擦除只读存储器(Electric Erasable Programmable Read-Only Memory，EEPROM)等。Wherein, the memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
处理器701可能是一种集成电路芯片，具有信号的处理能力。上述的处理器可以是通用处理器，包括中央处理器(Central Processing Unit，CPU)、网络处理器(Network Processor，NP)等；还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。The processor 701 may be an integrated circuit chip with signal processing capabilities. The above processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, which can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
可以理解的是,本申请实施例示意的结构并不构成对电子设备700的具体限定。在本申请另一些实施例中,电子设备700可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 700 . In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中的虚拟对象控制方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the virtual object control method in the foregoing method embodiments are executed. Wherein, the storage medium may be a volatile or non-volatile computer-readable storage medium.
本公开实施例还提供一种计算机程序产品，该计算机程序产品承载有程序代码，所述程序代码包括的指令可用于执行上述方法实施例中的虚拟对象控制方法的步骤，具体可参见上述方法实施例，在此不再赘述。An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the virtual object control method in the above method embodiments; for details, refer to the above method embodiments, which will not be repeated here.
其中，上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中，所述计算机程序产品具体体现为计算机存储介质，在另一个可选实施例中，计算机程序产品具体体现为软件产品，例如软件开发包(Software Development Kit，SDK)等等。The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK), etc.
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统和装置的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。在本公开所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，又例如，多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems and devices described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解，本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read-Only Memory，ROM)、随机存取存储器(Random Access Memory，RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
最后应说明的是：以上所述实施例，仅为本公开的具体实施方式，用以说明本公开的技术方案，而非对其限制，本公开的保护范围并不局限于此，尽管参照前述实施例对本公开进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本公开揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围，都应涵盖在本公开的保护范围之内。因此，本公开的保护范围应所述以权利要求的保护范围为准。Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field may still, within the technical scope disclosed by the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent replacements for some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

  1. 一种虚拟对象控制方法,其特征在于,应用于游戏平台,所述方法包括:A virtual object control method is characterized in that it is applied to a game platform, and the method includes:
    获取直播视频流，所述直播视频流基于3D场景信息生成，所述3D场景信息用于渲染后生成3D场景，所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息，所述虚拟角色信息用于渲染后生成虚拟角色，所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动；Obtain a live video stream, the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information includes at least one virtual character information and at least one virtual object information, the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
    发送所述直播视频流,以在用户终端展示与所述直播视频流相应的直播画面;Sending the live video stream to display a live picture corresponding to the live video stream on the user terminal;
    获取所述用户终端发送的弹幕信息;Obtaining the barrage information sent by the user terminal;
    在所述弹幕信息符合第一预设条件的情况下,基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象;When the bullet chat information meets the first preset condition, at least one virtual object is generated based on the bullet chat information and the at least one virtual object information;
    控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。Controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
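The five steps of claim 1 can be sketched end to end as follows. This is an illustrative Python sketch under assumed data shapes; the "first preset condition" is stood in for by a simple check that the comment names a spawnable object template, and all function and field names are hypothetical:

```python
def meets_first_condition(barrage, templates):
    # Stand-in for the first preset condition: the barrage text must name
    # a spawnable virtual object template. The real condition is not
    # specified by the claim.
    return barrage["text"] in templates

def make_virtual_object(barrage, templates):
    # Combine the barrage info with the matching virtual object info.
    return {"template": templates[barrage["text"]], "carried_barrage": barrage}

def on_barrage(barrage, templates, scene_objects):
    """Steps 4-5 of claim 1: generate the object and send it into the scene."""
    if not meets_first_condition(barrage, templates):
        return None
    obj = make_virtual_object(barrage, templates)
    scene_objects.append(obj)      # the object enters the 3D scene
    obj["state"] = "interacting"   # and starts interacting with the character(s)
    return obj

scene = []
spawned = on_barrage({"user": "u1", "text": "cat"},
                     {"cat": "cat_model", "dog": "dog_model"}, scene)
print(spawned["state"])  # interacting
```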
  2. 根据权利要求1所述的方法,其特征在于,所述获取所述用户终端发送的弹幕信息,包括:The method according to claim 1, wherein said obtaining the barrage information sent by said user terminal comprises:
    通过直播平台获取所述用户终端发送的弹幕信息。Obtain the barrage information sent by the user terminal through the live broadcast platform.
  3. 根据权利要求1所述的方法,其特征在于,所述控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互,包括:The method according to claim 1, wherein the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character comprises:
    控制所述至少一个虚拟对象进入所述3D场景中,并向靠近目标虚拟角色的方向移动直至与所述目标虚拟角色接触;controlling the at least one virtual object to enter the 3D scene, and move in a direction close to the target virtual character until it comes into contact with the target virtual character;
    识别所述虚拟对象与所述目标虚拟角色的接触部位,并根据所述接触部位的类型,确定与所述接触部位的类型对应的目标交互行为;identifying a contact part between the virtual object and the target virtual character, and determining a target interaction behavior corresponding to the type of the contact part according to the type of the contact part;
    基于所述目标交互行为,控制所述虚拟对象与所述目标虚拟角色进行交互。Based on the target interaction behavior, the virtual object is controlled to interact with the target virtual character.
  4. 根据权利要求3所述的方法,其特征在于,所述控制所述至少一个虚拟对象进入所述3D场景中,并向靠近目标虚拟角色的方向移动直至与所述目标虚拟角色接触,包括:The method according to claim 3, wherein the controlling the at least one virtual object to enter the 3D scene and move toward a direction close to the target virtual character until it comes into contact with the target virtual character comprises:
    基于每个虚拟对象所携带的弹幕信息，从所述至少一个虚拟角色中确定与所述每个虚拟对象对应的目标虚拟角色；Based on the barrage information carried by each virtual object, determine a target virtual character corresponding to each virtual object from the at least one virtual character;
    控制所述每个虚拟对象进入所述3D场景中,并向靠近与所述每个虚拟对象对应的目标虚拟角色的方向移动直至与所述目标虚拟角色接触。Each virtual object is controlled to enter the 3D scene, and move toward a target virtual character corresponding to each virtual object until it contacts the target virtual character.
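One way the per-barrage target selection of claim 4 might work is matching the barrage text against the characters' names. A hedged sketch (the matching rule and the fallback to the first character are assumptions, not specified by the claim):

```python
def pick_target(barrage_text, characters):
    """Choose the target virtual character for a barrage-spawned object:
    the first character whose name appears in the comment, else a default."""
    for name in characters:
        if name in barrage_text:
            return name
    return characters[0]  # assumed fallback when no character is named

print(pick_target("go say hi to Bob!", ["Alice", "Bob"]))  # Bob
```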
  5. 根据权利要求3所述的方法,其特征在于,所述接触部位为所述目标虚拟角色的脚部,所述基于所述目标交互行为,控制所述虚拟对象与所述目标虚拟角色进行交互,包括:The method according to claim 3, wherein the contact part is the foot of the target virtual character, and based on the target interaction behavior, controlling the virtual object to interact with the target virtual character, include:
    获取所述目标虚拟角色的脚部的控制信息;Acquiring control information of the feet of the target virtual character;
    基于所述控制信息驱动所述目标虚拟角色的脚部运动;driving a foot movement of the target virtual character based on the control information;
    根据所述目标虚拟角色的脚部的运动信息,控制所述虚拟对象远离所述目标虚拟角色的移动状态。The moving state of the virtual object away from the target virtual character is controlled according to the motion information of the target virtual character's feet.
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述目标虚拟角色的脚部的运动信息,控制所述虚拟对象远离所述目标虚拟角色的移动状态,包括:The method according to claim 5, wherein the controlling the moving state of the virtual object away from the target virtual character according to the movement information of the target virtual character's feet comprises:
    获取所述目标虚拟角色的脚部的运动信息,所述运动信息由控制对象驱动生成;Acquiring motion information of the feet of the target virtual character, the motion information being driven and generated by a control object;
    基于所述运动信息,控制所述虚拟对象的移动状态。Based on the motion information, the movement state of the virtual object is controlled.
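Claims 5 and 6 drive the object's retreat from the captured foot motion. A toy one-dimensional sketch, assuming the captured foot speed maps linearly onto the knock-back displacement (the gain and the 1-D model are illustrative, not from the disclosure):

```python
def knock_back(object_pos, foot_pos, foot_speed, gain=2.0):
    """Move the virtual object away from the character's foot; the motion
    info (captured foot speed) scales how far it is pushed."""
    # Push in whichever direction points away from the foot.
    direction = 1.0 if object_pos >= foot_pos else -1.0
    return object_pos + direction * gain * foot_speed

pos = knock_back(object_pos=1.0, foot_pos=0.0, foot_speed=3.0)
print(pos)  # 7.0 — a faster kick sends the object further away
```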
  7. 根据权利要求6所述的方法,其特征在于,所述3D场景信息还包括虚拟镜头,所述移动状态包括移动方向,所述方法还包括:The method according to claim 6, wherein the 3D scene information also includes a virtual lens, and the moving state includes a moving direction, and the method further includes:
    在所述虚拟对象的移动方向为朝向所述虚拟镜头方向的情况下，若所述虚拟对象的移动状态满足第二预设条件，获取并显示预设的与虚拟镜面碰撞的特效。When the moving direction of the virtual object is toward the virtual lens, if the moving state of the virtual object satisfies a second preset condition, a preset special effect of colliding with the virtual lens surface is acquired and displayed.
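The second preset condition of claim 7 is not specified; one plausible reading is a speed threshold on the component of the object's velocity toward the virtual lens. A hedged sketch under that assumption (vector shapes and threshold are illustrative):

```python
def lens_effect_triggered(obj_velocity, camera_dir, speed_threshold=5.0):
    """Fire the 'collide with the virtual lens' effect only when the object
    moves toward the camera fast enough. camera_dir is assumed to be a unit
    vector pointing from the object toward the virtual lens."""
    vx, vz = obj_velocity
    cx, cz = camera_dir
    toward_speed = vx * cx + vz * cz  # velocity projected onto the lens direction
    return toward_speed > speed_threshold

print(lens_effect_triggered((6.0, 0.0), (1.0, 0.0)))  # True: fast, toward the lens
```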
  8. 根据权利要求1所述的方法,其特征在于,所述控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互,包括:The method according to claim 1, wherein the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character comprises:
    获取所述至少一个虚拟角色在所述3D场景中的第一实时位置信息;Acquiring first real-time position information of the at least one virtual character in the 3D scene;
    基于所述第一实时位置信息,控制所述至少一个虚拟对象相对于所述至少一个虚拟角色移动。Based on the first real-time position information, the at least one virtual object is controlled to move relative to the at least one virtual character.
  9. 根据权利要求8所述的方法,其特征在于,所述至少一个虚拟对象的数量为多个,所述方法还包括:The method according to claim 8, wherein the quantity of the at least one virtual object is multiple, and the method further comprises:
    获取所述至少一个虚拟对象在所述3D场景中的第二实时位置信息;acquiring second real-time position information of the at least one virtual object in the 3D scene;
    基于所述第二实时位置信息,控制所述至少一个虚拟对象之间进行互动。Based on the second real-time position information, the interaction between the at least one virtual object is controlled.
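Claim 9's interaction between virtual objects based on their second real-time positions could, for example, be proximity-driven. An illustrative sketch (the distance rule and radius are assumptions):

```python
import math

def interacting_pairs(positions, radius=1.0):
    """Return index pairs of virtual objects close enough, by their
    real-time positions in the scene, to start interacting with each other."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= radius:
                pairs.append((i, j))
    return pairs

print(interacting_pairs([(0, 0), (0.5, 0), (5, 5)]))  # [(0, 1)]
```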
  10. 根据权利要求1所述的方法,其特征在于,所述控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互,包括:The method according to claim 1, wherein the controlling the at least one virtual object to enter the 3D scene to interact with the at least one virtual character comprises:
    确定每个虚拟对象携带的弹幕信息的用户资源信息;Determine the user resource information of the barrage information carried by each virtual object;
    基于所述用户资源信息,确定所述每个虚拟对象的运动状态;determining the motion state of each virtual object based on the user resource information;
    基于所述运动状态,控制所述每个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。Based on the motion state, each virtual object is controlled to enter the 3D scene to interact with the at least one virtual character.
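Claim 10 keys each object's motion state to the user resource information carried by its barrage, for example a gift value. A hypothetical tiering (the thresholds and state fields are illustrative, not from the disclosure):

```python
def motion_state(gift_value):
    """Map the user resource info carried by a barrage to the motion state
    of the virtual object it spawned (illustrative tiers)."""
    if gift_value >= 100:
        return {"speed": 3.0, "animation": "leap"}
    if gift_value >= 10:
        return {"speed": 2.0, "animation": "run"}
    return {"speed": 1.0, "animation": "walk"}

print(motion_state(150))  # {'speed': 3.0, 'animation': 'leap'}
```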
  11. 一种虚拟对象控制方法,其特征在于,应用于直播平台,所述方法包括:A virtual object control method is characterized in that it is applied to a live broadcast platform, and the method includes:
    通过游戏平台获取直播视频流，所述直播视频流基于3D场景信息生成，所述3D场景信息用于渲染后生成3D场景，所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息，所述虚拟角色信息用于渲染后生成虚拟角色，所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动；Obtain a live video stream through a game platform, the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one virtual object information; the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
    将所述直播视频流发送到至少一个用户终端,以在所述用户终端展示与所述直播视频流相应的直播画面;sending the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
    获取所述用户终端发送的弹幕信息;Obtaining the barrage information sent by the user terminal;
    将所述弹幕信息发送至游戏平台，以使得所述游戏平台基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象，并控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。Send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:The method according to claim 11, characterized in that the method further comprises:
    接收所述游戏平台发送的弹幕处理结果信息;receiving the barrage processing result information sent by the game platform;
    在所述弹幕信息处理成功的情况下，将所述弹幕信息删除，不展示于所述直播画面中；其中，所述弹幕信息处理成功是指，所述弹幕信息与所述至少一个虚拟对象信息结合，并生成了所述至少一个虚拟对象。In the case that the barrage information is processed successfully, deleting the barrage information so that it is not displayed in the live picture; here, successful processing of the barrage information means that the barrage information has been combined with the at least one virtual object information to generate the at least one virtual object.
  13. 一种虚拟对象控制装置,其特征在于,包括:A virtual object control device, characterized in that it comprises:
    第一获取模块，用于获取直播视频流，所述直播视频流基于3D场景信息生成，所述3D场景信息用于渲染后生成3D场景，所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息，所述虚拟角色信息用于渲染后生成虚拟角色，所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动；A first acquiring module, configured to acquire a live video stream, the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one virtual object information; the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
    第一发送模块,用于发送所述直播视频流,以在用户终端展示与所述直播视频流相应的直播画面;The first sending module is configured to send the live video stream, so as to display a live picture corresponding to the live video stream on the user terminal;
    第二获取模块,用于获取所述用户终端发送的弹幕信息;The second obtaining module is used to obtain the barrage information sent by the user terminal;
    第一生成模块,用于在所述弹幕信息符合第一预设条件的情况下,基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象;A first generation module, configured to generate at least one virtual object based on the bullet chat information and the at least one virtual object information when the bullet chat information meets a first preset condition;
    交互模块,用于控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。An interaction module, configured to control the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  14. 一种虚拟对象控制装置,其特征在于,包括:A virtual object control device, characterized in that it comprises:
    第三获取模块，用于通过游戏平台获取直播视频流，所述直播视频流基于3D场景信息生成，所述3D场景信息用于渲染后生成3D场景，所述3D场景信息包含至少一个虚拟角色信息以及至少一个虚拟对象信息，所述虚拟角色信息用于渲染后生成虚拟角色，所述虚拟角色通过动作捕捉设备捕捉的控制信息驱动；A third acquiring module, configured to acquire a live video stream through a game platform, the live video stream is generated based on 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, and the 3D scene information includes at least one virtual character information and at least one virtual object information; the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device;
    第二发送模块,用于将所述直播视频流发送到至少一个用户终端,以在所述用户终端展示与所述直播视频流相应的直播画面;The second sending module is configured to send the live video stream to at least one user terminal, so as to display a live picture corresponding to the live video stream on the user terminal;
    第四获取模块,用于获取所述用户终端发送的弹幕信息;A fourth obtaining module, configured to obtain the barrage information sent by the user terminal;
    第三发送模块，用于将所述弹幕信息发送至游戏平台，以使得所述游戏平台基于所述弹幕信息以及所述至少一个虚拟对象信息生成至少一个虚拟对象，并控制所述至少一个虚拟对象进入所述3D场景中与所述至少一个虚拟角色进行交互。A third sending module, configured to send the barrage information to the game platform, so that the game platform generates at least one virtual object based on the barrage information and the at least one virtual object information, and controls the at least one virtual object to enter the 3D scene to interact with the at least one virtual character.
  15. 一种电子设备，其特征在于，包括：处理器、存储器和总线，所述存储器存储有所述处理器可执行的机器可读指令，当电子设备运行时，所述处理器与所述存储器之间通过总线通信，所述机器可读指令被所述处理器执行时执行如权利要求1-12任一所述的虚拟对象控制方法。An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate with each other via the bus, and when the machine-readable instructions are executed by the processor, the virtual object control method according to any one of claims 1-12 is performed.
  16. 一种计算机可读存储介质,其特征在于,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1-12 任一所述的虚拟对象控制方法。A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the virtual object control method according to any one of claims 1-12 is executed.
PCT/CN2022/113276 2021-10-26 2022-08-18 Virtual object control method and apparatus, electronic device, and readable storage medium WO2023071443A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111250745.XA CN113905251A (en) 2021-10-26 2021-10-26 Virtual object control method and device, electronic equipment and readable storage medium
CN202111250745.X 2021-10-26

Publications (1)

Publication Number Publication Date
WO2023071443A1 true WO2023071443A1 (en) 2023-05-04

Family

ID=79026458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113276 WO2023071443A1 (en) 2021-10-26 2022-08-18 Virtual object control method and apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113905251A (en)
WO (1) WO2023071443A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905251A (en) * 2021-10-26 2022-01-07 北京字跳网络技术有限公司 Virtual object control method and device, electronic equipment and readable storage medium
CN114401442B (en) * 2022-01-14 2023-10-24 北京字跳网络技术有限公司 Video live broadcast and special effect control method and device, electronic equipment and storage medium
CN114470768B (en) * 2022-02-15 2023-07-25 北京字跳网络技术有限公司 Virtual prop control method and device, electronic equipment and readable storage medium
CN114612643B (en) * 2022-03-07 2024-04-12 北京字跳网络技术有限公司 Image adjustment method and device for virtual object, electronic equipment and storage medium
CN114615514B (en) * 2022-03-14 2023-09-22 深圳幻影未来信息科技有限公司 Live broadcast interactive system of virtual person
CN114979683A (en) * 2022-04-21 2022-08-30 澳克多普有限公司 Application method and system of multi-platform intelligent anchor
CN115314749B (en) * 2022-06-15 2024-03-22 网易(杭州)网络有限公司 Response method and device of interaction information and electronic equipment
CN115334324A (en) * 2022-06-22 2022-11-11 广州博冠信息科技有限公司 Video image processing method and device and electronic equipment
CN117435040A (en) * 2022-07-14 2024-01-23 北京字跳网络技术有限公司 Information interaction method, device, electronic equipment and storage medium
CN115174954A (en) * 2022-08-03 2022-10-11 抖音视界有限公司 Video live broadcast method and device, electronic equipment and storage medium
CN116108266B (en) * 2022-12-13 2023-09-29 星络家居云物联科技有限公司 Virtual reality interaction system for realizing mutual recognition function
CN116996703A (en) * 2023-08-23 2023-11-03 中科智宏(北京)科技有限公司 Digital live broadcast interaction method, system, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109275040A (en) * 2018-11-06 2019-01-25 网易(杭州)网络有限公司 Exchange method, device and system based on game live streaming
JP2019146148A (en) * 2018-11-19 2019-08-29 株式会社バーチャルキャスト Content distribution system, content distribution method, and content distribution program
CN110308792A (en) * 2019-07-01 2019-10-08 北京百度网讯科技有限公司 Control method, device, equipment and the readable storage medium storing program for executing of virtual role
JP2020156740A (en) * 2019-03-26 2020-10-01 株式会社コロプラ Game program, game method and information terminal device
CN112040270A (en) * 2019-06-03 2020-12-04 广州虎牙信息科技有限公司 Live broadcast method, device, equipment and storage medium
CN113457171A (en) * 2021-06-24 2021-10-01 网易(杭州)网络有限公司 Live broadcast information processing method, electronic equipment and storage medium
CN113490006A (en) * 2021-07-01 2021-10-08 北京云生万物科技有限公司 Live broadcast interaction method and equipment based on bullet screen
CN113490061A (en) * 2021-07-01 2021-10-08 北京云生万物科技有限公司 Live broadcast interaction method and equipment based on bullet screen
CN113905251A (en) * 2021-10-26 2022-01-07 北京字跳网络技术有限公司 Virtual object control method and device, electronic equipment and readable storage medium
CN113949914A (en) * 2021-08-19 2022-01-18 广州博冠信息科技有限公司 Live broadcast interaction method and device, electronic equipment and computer readable storage medium
CN114095744A (en) * 2021-11-16 2022-02-25 北京字跳网络技术有限公司 Video live broadcast method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN113905251A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
WO2023071443A1 (en) Virtual object control method and apparatus, electronic device, and readable storage medium
US11738275B2 (en) Virtual reality presentation of real world space
US11899835B2 (en) Control of personal space content presented via head mounted display
US11478709B2 (en) Augmenting virtual reality video games with friend avatars
US11724177B2 (en) Controller having lights disposed along a loop of the controller
CN107680157B (en) Live broadcast-based interaction method, live broadcast system and electronic equipment
US10516870B2 (en) Information processing device, information processing method, and program
CN106659934B (en) Method and system for social sharing of Head Mounted Display (HMD) content with a second screen
CN106730815B (en) Somatosensory interaction method and system easy to realize
CN111246232A (en) Live broadcast interaction method and device, electronic equipment and storage medium
WO2017148410A1 (en) Information interaction method, device and system
WO2023035897A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
US20190005732A1 (en) Program for providing virtual space with head mount display, and method and information processing apparatus for executing the program
WO2023045637A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
US11627359B2 (en) Influencer stream customization for follower viewers
CN114745598A (en) Video data display method and device, electronic equipment and storage medium
CN113453034A (en) Data display method and device, electronic equipment and computer readable storage medium
CN114615513A (en) Video data generation method and device, electronic equipment and storage medium
CN114095744A (en) Video live broadcast method and device, electronic equipment and readable storage medium
CN114697703B (en) Video data generation method and device, electronic equipment and storage medium
JP2022546664A (en) User-specified advertisements in virtual space
TW202123128A (en) Virtual character live broadcast method, system thereof and computer program product
CN114173173A (en) Barrage information display method and device, storage medium and electronic equipment
CN113318441A (en) Game scene display control method and device, electronic equipment and storage medium
CN113448466A (en) Animation display method, animation display device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 22885343
Country of ref document: EP
Kind code of ref document: A1
NENP: Non-entry into the national phase
Ref country code: DE