CN112135160A - Virtual object control method and device in live broadcast, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112135160A
CN112135160A
Authority
CN
China
Prior art keywords
virtual object
live
action
comment information
target
Prior art date
Legal status
Pending
Application number
CN202011017157.7A
Other languages
Chinese (zh)
Inventor
黄达鸿
王毅
夏秋虎
Current Assignee
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd
Priority to CN202011017157.7A
Publication of CN112135160A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The disclosure provides a method and a device for controlling a virtual object in live broadcasting, a computer-readable storage medium, and an electronic device. The live virtual object control method includes: displaying a live picture, where the live picture includes a virtual object; receiving live comment information sent by a second device; and if the live comment information matches a target action in an executable action set of the virtual object, controlling the virtual object to execute the target action. The method and device control the virtual object according to user comment information, so that the virtual object can interact with users in real time, which improves the user retention rate.

Description

Virtual object control method and device in live broadcast, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling a virtual object in live broadcasting, a computer-readable storage medium, and an electronic device.
Background
With the rapid development of the live streaming industry, live broadcast formats have become increasingly diverse, evolving from a real anchor broadcasting in a real scene to live broadcasts that use a virtual character in a virtual scene.
However, in existing live broadcast methods that use a virtual object, the virtual object in the live scene can only perform actions preset in advance. The broadcast therefore lacks real-time interactivity: no real-time feedback can be given in response to the anchor's needs or the bullet-screen (barrage) content, interaction between the virtual object and users is minimal, the live broadcast effect is poor, and the user retention rate is low.
Disclosure of Invention
The present disclosure aims to provide a method and an apparatus for controlling a virtual object in live broadcasting, a storage medium, and an electronic device, so as to overcome, at least to a certain extent, the problems of poor live broadcast effect and low user retention rate caused by the limitations and defects of the related art.
According to a first aspect of the present disclosure, a method for controlling a virtual object in live broadcasting is provided, which is applied to a first device and includes: displaying a live picture, where the live picture includes a virtual object; receiving live comment information sent by a second device; and if the live comment information matches a target action in an executable action set of the virtual object, controlling the virtual object to execute the target action.
Optionally, the method for controlling a virtual object in live broadcasting further includes: identifying the live comment information to obtain a recognition result; and comparing the recognition result with the control trigger information corresponding to each action in the executable action set of the virtual object, and determining the action whose control trigger information matches the recognition result as the target action.
Optionally, identifying the live comment information includes: acquiring a user level corresponding to a live client of the second device; and identifying the live comment information if the user level is above a level threshold.
Optionally, identifying the live comment information includes: acquiring the accumulated online duration corresponding to the live client of the second device; and identifying the live comment information if the accumulated online duration is above a first duration threshold.
Optionally, identifying the live comment information includes: acquiring the user level and accumulated online duration corresponding to a live client of the second device; and identifying the live comment information if the user level is above the level threshold and the accumulated online duration is above the first duration threshold.
Optionally, controlling the virtual object to execute the target action includes: acquiring the time point at which the virtual object last executed the target action; calculating the time interval between the current time point and the time point at which the virtual object last executed the target action; and if the time interval is greater than a second duration threshold, controlling the virtual object to execute the target action.
Optionally, the target action is a target expression action, the virtual object includes multiple layers, and controlling the virtual object to execute the target action includes: determining a target layer set corresponding to the execution of the target expression action in the layer set of the virtual object; and respectively controlling each target layer in the target layer set based on the target expression action so as to execute the expression action.
Optionally, the method for controlling a virtual object in live broadcasting further includes: acquiring voice information associated with a target action; and when the virtual object is controlled to execute the target action, playing the voice information.
Optionally, the method for controlling a virtual object in live broadcasting further includes: sending a control instruction by using an external device; and controlling, by the first device, the virtual object to execute the target action according to the received control instruction and the live comment information.
According to a second aspect of the present disclosure, a live virtual object control apparatus is provided, which is applied to a first device and includes a screen display module, an information processing module, and an action control module.
Specifically, the picture display module may be configured to display a live view, where the live view includes a virtual object; the information processing module can be used for receiving live comment information sent by the second device; the action control module may be configured to control the virtual object to execute a target action in the set of executable actions of the virtual object if the live comment information matches the target action.
Optionally, the information processing module may be configured to perform recognition on the live comment information to obtain a recognition result; and comparing the recognition result with the control trigger information corresponding to each action in the executable action set of the virtual object, and determining the action of which the control trigger information is matched with the recognition result as the target action.
Optionally, the information processing module may be further configured to perform: acquiring a user level corresponding to a live client of the second device; and identifying the live comment information if the user level is above a level threshold.
Optionally, the information processing module may be further configured to perform: acquiring the accumulated online duration corresponding to the live client of the second device; and identifying the live comment information if the accumulated online duration is above a first duration threshold.
Optionally, the information processing module may be further configured to perform: acquiring the user level and accumulated online duration corresponding to a live client of the second device; and identifying the live comment information if the user level is above the level threshold and the accumulated online duration is above the first duration threshold.
Optionally, the information processing module may be further configured to perform: acquiring the time point at which the virtual object last executed the target action; calculating the time interval between the current time point and that time point; and if the time interval is greater than a second duration threshold, controlling the virtual object to execute the target action.
Optionally, the action control module may be configured to perform determining a target layer set corresponding to the execution of the target expression action in the layer set of the virtual object; and respectively controlling each target layer in the target layer set based on the target expression action so as to execute the expression action.
Optionally, the action control module may be further configured to perform acquiring voice information associated with the target action; and when the virtual object is controlled to execute the target action, playing the voice information.
Optionally, the action control module may be further configured to control the virtual object to execute the target action according to a control instruction received from an external device and the live comment information.
According to a third aspect of the present disclosure, there is provided a live virtual object control method, applied to a second device, including: acquiring live comment information input by a user in live broadcasting; the live scene comprises a virtual object; if the live comment information is matched with a target action in the executable action set of the virtual object, acquiring an identifier corresponding to the target action; and sending the identification to the first equipment so that the first equipment controls the virtual object to execute the target action according to the identification.
According to a fourth aspect of the present disclosure, a live virtual object control apparatus is provided, which is applied to a second device and includes an information reading module, an information identifying module, and an information interacting module.
Specifically, the information reading module can be used for acquiring live comment information input by a user in live broadcasting; the live scene comprises a virtual object; the information identification module can be used for matching the live comment information with a target action in an executable action set of the virtual object and acquiring an identifier corresponding to the target action; the information interaction module may be configured to send the identifier to the first device, so that the first device controls the virtual object to execute the target action according to the identifier.
According to a fifth aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the above-described live virtual object control methods.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions for the processor; wherein the processor is configured to perform any of the above-described live virtual object control methods via execution of executable instructions.
In the technical solutions provided by some embodiments of the present disclosure, a preconfigured live picture containing a virtual object is loaded and displayed at the anchor end. The anchor end receives live comment information sent by the viewer end, and if the live comment information matches a target action in the executable action set of the virtual object, the virtual object is controlled to execute the target action. This live virtual object control method achieves accurate control of the virtual object: the virtual object can complete corresponding actions according to live broadcast requirements and interact with viewers in real time according to the bullet-screen content, which increases the diversity of the live broadcast, improves the live broadcast effect, and improves the user retention rate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically shows a flow chart of a method of controlling a virtual object in live according to an exemplary embodiment of the present disclosure;
fig. 2 schematically illustrates an effect diagram of a live view of a virtual object in a virtual scene according to an exemplary embodiment of the present disclosure;
fig. 3 schematically illustrates an effect diagram of a live view of a virtual object in a real scene according to an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow diagram for fusing a virtual object with a virtual scene to generate a live view, according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram for fusing a virtual object with a real scene to generate a live view, according to an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a matching relationship diagram of live comment information to a set of virtual object executable actions, according to an exemplary embodiment of the present disclosure;
fig. 7 schematically shows a flowchart of a live virtual object control method applied to a second device according to an exemplary embodiment of the present disclosure;
fig. 8 schematically illustrates a block diagram of a live virtual object control apparatus applied to a first device according to an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a block diagram of a live virtual object control apparatus applied to a second device according to an exemplary embodiment of the present disclosure;
fig. 10 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
With the continuous improvement of people's living standards, more and more people watch live broadcasts, and live broadcast formats are becoming increasingly diverse.
When an anchor broadcasts live in a real scene, the scenes available to the anchor are limited, so the broadcast is often carried out in the same setting and the live scene is monotonous. In addition, after broadcasting for a long time, the anchor's condition may decline, which adversely affects the live broadcast effect. These problems can be effectively alleviated by broadcasting in a virtual scene or by adding a virtual object to the live scene.
However, existing technologies that use a virtual object for live broadcasting all set the actions to be executed by the virtual object in advance; during the broadcast, the virtual object simply runs through the preset actions on its own and cannot interact with the audience according to the live situation, so the live broadcast effect is poor. For example, when the anchor or the audience is excited and in high spirits, the virtual object may perform a crying action according to the preset content, which greatly harms the live broadcast effect. In view of this, a new method for controlling virtual objects in live broadcasting is needed.
The steps of the live virtual object control method according to the exemplary embodiment of the present disclosure may be generally performed by a server, and in this case, a live virtual object control apparatus described below may be configured in the server. The server may include a dedicated chip or be equipped with an independent GPU (Graphics Processing Unit). However, aspects of the present disclosure may also be implemented with terminal devices, which may include, but are not limited to, cell phones, tablets, personal computers, and the like.
Fig. 1 schematically shows a flowchart of a live virtual object control method according to an exemplary embodiment of the present disclosure, where the method is applied to a first device, and the first device is a device used by a main broadcast end for live broadcast. Referring to fig. 1, a live virtual object control method may include the steps of:
and S12, displaying a live broadcast picture, wherein the live broadcast picture comprises a virtual object.
In an exemplary embodiment of the present disclosure, the live picture is the picture seen at the anchor end during the live broadcast. The picture may include one or more virtual objects, and it may be a live picture in a real scene or a live picture in a virtual scene. The virtual object may be a virtual character, or it may be a virtual car, a tree, or another object.
Fig. 2 schematically shows an effect diagram of a live picture of a virtual object in a virtual scene. As shown in fig. 2, the live picture in the figure is a virtual scene picture that contains a virtual object, and this picture is displayed on the anchor-end device. The virtual object and the virtual scene are designed in advance by developers; when the anchor needs to use them, the resources are loaded and called directly.
Fig. 3 schematically shows an effect diagram of a live view of a virtual object in a real scene. As shown in fig. 3, a picture including a real scene, a real anchor, and a virtual object is displayed on a device on the anchor side.
In an exemplary embodiment of the present disclosure, when the anchor needs to broadcast with a virtual object in a virtual scene, referring to fig. 4, only the pre-designed virtual object 41 and virtual scene 43 need to be called directly and imported into the three-dimensional engine 45. The virtual object 41 may be sent to the three-dimensional engine 45 through wired SDI (Serial Digital Interface) or wireless NDI (Network Device Interface) signal transmission. The three-dimensional engine 45 places the virtual object 41 in the virtual scene 43 to generate a fused live picture 47, and the live picture 47 is a virtual scene picture containing the virtual object.
In an exemplary embodiment of the present disclosure, when the anchor needs to use a virtual object in a real scene for live broadcasting, referring to fig. 5, the pre-designed virtual object 51 is called first, real scene information 53 is then collected through an external device such as a camera or a microphone, and the virtual object 51 and the real scene information are imported into the three-dimensional engine 55. The three-dimensional engine 55 places the virtual object 51 in the real scene and generates a fused live picture 57, and the live picture 57 is a real scene picture containing the virtual object.
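By way of a non-limiting illustration only, the fusion step performed by the three-dimensional engine can be sketched as a simple per-pixel overlay. The sketch below assumes the engine exposes the rendered virtual object as an RGBA array and the scene (a rendered virtual scene or a camera frame) as an RGB array; the function name is illustrative and is not part of any particular engine's API.

```python
import numpy as np

def compose_live_frame(scene_rgb: np.ndarray, object_rgba: np.ndarray) -> np.ndarray:
    """Overlay the rendered virtual-object layer onto the scene to produce one live frame."""
    rgb = object_rgba[..., :3].astype(np.float32)
    alpha = object_rgba[..., 3:4].astype(np.float32) / 255.0   # where the object is drawn
    scene = scene_rgb.astype(np.float32)
    fused = alpha * rgb + (1.0 - alpha) * scene                # per-pixel alpha blend
    return fused.astype(np.uint8)

# Each incoming scene frame (virtual scene 43 or camera frame 53) is blended with the
# object layer to yield the fused live picture (47 or 57):
# live_frame = compose_live_frame(scene_frame, virtual_object_layer)
```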
S14: receiving live comment information sent by a second device.
In an exemplary embodiment of the present disclosure, the second device is a device used by a viewer to watch the live broadcast, and the live comment information is one or more of bullet-screen (barrage) information, voice information, and gift information input by the viewer while watching the live broadcast. After the viewer sends live comment information, the viewer end transmits it to the anchor end. It should be noted that the live comment information may be sent by the viewer end to a live server and then forwarded by the live server to the anchor end, or it may be sent by the viewer end directly to the anchor end; the specific transmission mode is not limited in this disclosure.
S16: if the live comment information matches a target action in the executable action set of the virtual object, controlling the virtual object to execute the target action.
In an exemplary embodiment of the disclosure, the executable action set of the virtual object contains all the actions the pre-designed virtual object is able to execute, such as closing the eyes, opening the mouth, smiling, looking dejected, and other expressive actions, and the target action is the action to be executed by the virtual object. Matching the comment information with a target action in the action set can be understood as follows: different pieces of comment information correspond to different actions in the virtual object's action set, and when a piece of comment information received at the anchor end corresponds to an action the virtual object can execute, that comment information is considered to match a target action in the executable action set. Referring to fig. 6, the information contained in the live comment information 61 corresponds one-to-one to the actions in the virtual object executable action set 63. For example, when the word "smile" is detected in the live comment information 61, the action corresponding to "smile" is searched for in the executable action set 63; when "action 1" is found to match "smile", the target action is determined to be "action 1" and the virtual object is controlled to execute the target action "action 1".
In an exemplary embodiment of the disclosure, when the anchor end receives live comment information, the information needs to be identified to obtain a recognition result. Each action in the virtual object's executable action set also corresponds to a piece of control trigger information, and when the recognition result contains control trigger information, the action corresponding to that control trigger information is taken as the target action. For example, if the control trigger information for controlling the virtual object to execute "action 2" is "sad", the live comment information is identified, and if the comment information is found to contain the word "sad", "action 2" is determined as the target action to be executed by the virtual object. The correspondence between specific control trigger information and recognition results may be preset by developers, or preset by the anchor according to the specific situation.
In an exemplary embodiment of the disclosure, when the live comment information is text information, whether the comment information contains control trigger information can be determined by comparing the received comment information with the control trigger information. For example, if the received live comment information is "I feel sad today" and the control trigger information is "sad", existing technical means can easily determine, by comparing the two, that the comment "I feel sad today" contains the information "sad".
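As a non-limiting illustration of the text matching described above, the following Python sketch scans a comment for preset control trigger words and returns the matched target action; the trigger table, action names, and helper names are assumptions chosen to mirror the examples in this description, not values defined by the disclosure.

```python
from typing import Optional

# Assumed trigger table: control trigger information -> action in the executable action set
CONTROL_TRIGGERS = {
    "smile": "action 1",   # e.g. a smiling expression
    "sad": "action 2",     # e.g. a dejected expression
    "laugh": "action 3",
}

def match_target_action(comment_text: str) -> Optional[str]:
    """Return the target action whose control trigger appears in the comment, if any."""
    for trigger, action in CONTROL_TRIGGERS.items():
        if trigger in comment_text:        # simple containment check, as described above
            return action
    return None

# match_target_action("I feel sad today") -> "action 2"
```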
In an exemplary embodiment of the disclosure, when the live comment information is voice information, a machine-learning approach may be adopted. A model commonly used in the recognition field is selected, including but not limited to a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like. The selected model is trained on a large amount of speech data from a speech database and a large amount of text data from a text database to obtain an acoustic model and a language model, respectively. Feature extraction is then performed on the input speech, and the text information corresponding to the voice information is output. After the voice information is converted into text information, information matching can be completed by the method described above.
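A non-limiting sketch of this speech branch is shown below. The transcribe() function is only a placeholder for a trained acoustic model plus language model (for example a CNN/RNN pipeline as mentioned above) and is not a call into any specific library; once the voice comment is converted to text, the containment matching from the previous sketch can be reused.

```python
from typing import Optional

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for speech-to-text: feature extraction + acoustic model + language model."""
    raise NotImplementedError("plug in a trained speech recognition pipeline here")

def match_from_voice(audio_bytes: bytes) -> Optional[str]:
    text = transcribe(audio_bytes)      # voice comment -> text
    return match_target_action(text)    # reuse the text matching sketched above
```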
In an exemplary embodiment of the present disclosure, when the live comment information is a gift given by a viewer, different gifts may be pre-designed to correspond to different control trigger information. For example, when a viewer gives the anchor a "rocket" gift, the control trigger information corresponding to "rocket" may be set to "laugh"; if the action in the virtual object executable action set corresponding to the control trigger information "laugh" is "action 3", the target action is determined to be "action 3".
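Correspondingly, the gift branch only needs a gift-to-trigger table in front of the same lookup; the table below is an assumed example that follows the "rocket" illustration above.

```python
from typing import Optional

GIFT_TRIGGERS = {"rocket": "laugh"}   # assumed gift -> control trigger information

def match_from_gift(gift_name: str) -> Optional[str]:
    trigger = GIFT_TRIGGERS.get(gift_name)
    # reuse CONTROL_TRIGGERS from the text-matching sketch: "laugh" -> "action 3"
    return CONTROL_TRIGGERS.get(trigger) if trigger else None
```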
In an exemplary embodiment of the present disclosure, before the received live comment information is identified, whether it needs to be identified may also be determined according to the viewer's user level and watch duration. The check may limit only the user level, so that the live comment information sent by a user is identified only when the user level is greater than a preset level threshold; it may limit only the watch duration, so that the information is identified only when the user's watch duration is greater than a preset first duration threshold; or it may limit both, so that the information is identified only when the user level is greater than the preset level threshold and the watch duration is greater than the preset first duration threshold.
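The three gating variants can be sketched as a single check with two switches; the threshold values below are arbitrary assumptions, since the disclosure leaves them to be preset by developers or the anchor.

```python
LEVEL_THRESHOLD = 10              # assumed preset level threshold
FIRST_DURATION_THRESHOLD = 600    # assumed preset first duration threshold, in seconds

def should_identify(user_level: int, watch_seconds: int,
                    check_level: bool = True, check_duration: bool = True) -> bool:
    """Decide whether a viewer's comment should be identified at all."""
    if check_level and user_level <= LEVEL_THRESHOLD:
        return False
    if check_duration and watch_seconds <= FIRST_DURATION_THRESHOLD:
        return False
    return True
```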
In an exemplary embodiment of the present disclosure, before the virtual object is controlled to execute the target action, the time point at which the virtual object last executed the target action may also be acquired; the time interval between the current time point and that last execution is calculated; and if the time interval is greater than a preset second duration threshold, the virtual object is controlled to execute the target action, whereas if the time interval is smaller than the preset second duration threshold, the virtual object is not controlled to execute the target action.
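A minimal sketch of this cooldown check is given below, assuming the anchor end keeps a record of when each action was last executed; the threshold value is an assumption.

```python
import time

SECOND_DURATION_THRESHOLD = 5.0   # assumed cooldown, in seconds
last_executed = {}                # action -> time point of its last execution

def may_execute(action: str) -> bool:
    """Allow the action only if enough time has passed since its last execution."""
    now = time.time()
    last = last_executed.get(action)
    if last is not None and now - last <= SECOND_DURATION_THRESHOLD:
        return False
    last_executed[action] = now
    return True
```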
In an exemplary embodiment of the present disclosure, the virtual object includes a plurality of layers. When the target action to be executed by the virtual object is an expression action, the target layer set corresponding to the target expression action is first determined within the layer set of the virtual object, and each target layer in the target layer set is then controlled based on the target expression action, so that the expression action is executed. Specifically, the virtual object is imported into Photoshop software and the layers corresponding to the facial features are separated to obtain a PSD (Photoshop Document, a graphic file format) document; the PSD document is then imported into face-capture software, the layers are associated with the corresponding facial features in the face-capture software, and the opening and closing amplitude of the eyes and mouth is adjusted. When the virtual object executes the expression action, the corresponding layers are controlled according to the action to be executed.
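A non-limiting sketch of this layer-driven control is shown below. It assumes each separated facial-feature layer is addressable by name and accepts a simple open/close amplitude; the layer names, expression table, and set_amplitude() call are illustrative assumptions rather than an interface of Photoshop or of any particular face-capture tool.

```python
# Assumed mapping: target expression action -> the layers involved and their amplitudes
TARGET_LAYERS = {
    "smile": {"mouth": 0.8, "left_eye": 0.4, "right_eye": 0.4},
    "closed-eye": {"left_eye": 0.0, "right_eye": 0.0},
}

def perform_expression(layers: dict, expression: str) -> None:
    """Drive only the target layers that make up the target expression action."""
    for layer_name, amplitude in TARGET_LAYERS.get(expression, {}).items():
        layers[layer_name].set_amplitude(amplitude)   # hypothetical per-layer control call
```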
In an exemplary embodiment of the present disclosure, when the virtual object is controlled to execute the target action, a voice associated with the action may also be played. To achieve this, it is only necessary to add the corresponding voice information when the target action of the virtual object is configured. The voice information to be played may be set in advance by developers, and the anchor may also change it according to live broadcast requirements. For example, when a viewer gives the anchor a "rocket" gift, the virtual object may be controlled to broadcast the viewer's nickname while executing the corresponding action.
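The association between an action and its voice can be sketched as one more lookup table; the clip name and the play_audio() stub below are assumptions standing in for whatever audio playback the live client actually provides.

```python
ACTION_VOICE = {"action 3": "thanks_{nickname}.wav"}   # assumed action -> voice clip template

def play_audio(path: str) -> None:
    """Placeholder for the live client's audio playback."""
    print(f"[audio] would play {path}")

def play_action_voice(action: str, nickname: str) -> None:
    clip = ACTION_VOICE.get(action)
    if clip:
        play_audio(clip.format(nickname=nickname))   # e.g. broadcast the gifting viewer's nickname
```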
In an exemplary embodiment of the disclosure, the control method of the virtual object may further include sending a control instruction through an external device, and the anchor end controls the virtual object to execute the target action according to the received control instruction and the live comment information. The external device may be a mouse, a keyboard, a gamepad, or other equipment. For example, when the target action corresponding to the received live comment information is "smile", the opening and closing degree of the virtual object's eyes and mouth can additionally be controlled manually through keys preset on the keyboard while the virtual object smiles.
In an exemplary embodiment of the present disclosure, the control method of the virtual object may further capture the facial expressions and head movements of the anchor through a camera, and control the virtual object to perform corresponding actions according to the anchor's facial expressions and head movements. For example, when the anchor tilts the head, the virtual object is controlled to execute a corresponding head-tilting action, and when the anchor smiles, the virtual object is controlled to execute a smiling expression action.
In an exemplary embodiment of the present disclosure, after the virtual object is controlled to execute the corresponding target action, the picture of the virtual object executing the target action is sent to the second device through the server, that is, to the device the viewer uses to watch the live broadcast, so that the viewer can watch it. Specifically, information such as live comment information or gifts sent by viewers is sent to the first device, namely the anchor-end device; when the anchor-end device detects that the information contains trigger information for controlling the virtual object to execute an action, it controls the virtual object to execute the corresponding action, generates the live picture on the anchor-end device, and sends the live picture to the second device through the server so that the viewer can watch the live broadcast.
Fig. 7 schematically shows a flowchart of a live virtual object control method according to another exemplary embodiment of the present disclosure, where the method is applied to a second device, and the second device is a device used by a viewer for watching a live broadcast, such as a computer, a mobile phone, a tablet, and the like. Referring to fig. 7, the live virtual object control method may include the steps of:
S72: acquiring live comment information input by a user in the live broadcast, where the live scene includes a virtual object.
In another embodiment of the present disclosure, the live comment information is one or more of bullet-screen information, voice information, and gift information given while the viewer watches the live broadcast. The live scene is a real scene or a virtual scene containing a virtual object. When the viewer sends live comment information in the live scene, the viewer end acquires the comment information.
S74: if the live comment information matches a target action in the executable action set of the virtual object, acquiring an identifier corresponding to the target action.
In another embodiment of the present disclosure, the executable action set of the virtual object contains all the actions the pre-designed virtual object is able to execute, such as closing the eyes, opening the mouth, smiling, looking dejected, and other expressive actions, and the target action is the action to be executed by the virtual object. Matching the comment information with a target action in the action set means that different pieces of comment information correspond to different actions in the virtual object's action set; when a piece of comment information corresponds to an action the virtual object can execute, that comment information is considered to match a target action in the executable action set, and the identifier corresponding to the target action is the control identifier of that action. For example, suppose the preset live comment information "sad" corresponds to "action 4" in the virtual object executable action set. When a user inputs the bullet-screen message "I feel sad today", the viewer end detects that the comment information contains "sad" and determines "action 4" as the target action. The control identifier corresponding to "action 4" may be "identifier 4"; "identifier 4" is the information to be sent to the first device (the anchor end), and when the first device receives the control identifier "identifier 4", it can directly control the virtual object to execute the corresponding action.
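A non-limiting sketch of this viewer-side variant is given below: the second device performs the matching itself and sends only a compact control identifier to the first device. The trigger table, identifier table, and send_to_anchor() transport stub mirror the example above and are assumptions, not interfaces defined by the disclosure.

```python
VIEWER_TRIGGERS = {"sad": "action 4"}        # assumed comment trigger -> target action
ACTION_IDS = {"action 4": "identifier 4"}    # assumed target action -> control identifier

def send_to_anchor(identifier: str) -> None:
    """Placeholder for sending the identifier to the first device (via the server or directly)."""
    print(f"[send] {identifier}")

def handle_viewer_comment(comment_text: str) -> None:
    for trigger, action in VIEWER_TRIGGERS.items():
        if trigger in comment_text:              # e.g. "I feel sad today" contains "sad"
            send_to_anchor(ACTION_IDS[action])   # the first device executes the action on receipt
            return
```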
S76: sending the identifier to the first device so that the first device controls the virtual object to execute the target action according to the identifier.
In another embodiment of the present disclosure, the live comment information is processed at the viewer end, and the control identifier corresponding to the target action to be executed by the virtual object is obtained directly; the control identifier may be a control instruction that directly controls the virtual object. The viewer end sends the control identifier to the anchor end, and after receiving it, the anchor end can directly control the virtual object to execute the target action according to the control identifier.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Further, a live virtual object control device applied to the first device is provided.
Fig. 8 schematically shows a block diagram of a live virtual object control apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 8, the live virtual object control apparatus 8 according to an exemplary embodiment of the present disclosure may include: a picture display module 81, an information processing module 83, and an action control module 85.
Specifically, the picture display module 81 may be configured to display a live view, where the live view includes a virtual object; the information processing module 83 may be configured to receive live comment information sent by the second device; the action control module 85 may be configured to control the virtual object to perform a target action in the set of executable actions of the virtual object if the live comment information matches the target action.
In an exemplary embodiment of the present disclosure, the information processing module 83 may be configured to perform: identifying the live comment information to obtain a recognition result; and comparing the recognition result with the control trigger information corresponding to each action in the executable action set of the virtual object, and determining the action whose control trigger information matches the recognition result as the target action.
In an exemplary embodiment of the present disclosure, the information processing module 83 may be further configured to perform: acquiring a user level corresponding to a live client of the second device; and identifying the live comment information if the user level is above a level threshold.
In an exemplary embodiment of the present disclosure, the information processing module 83 may be further configured to perform: acquiring the accumulated online duration corresponding to the live client of the second device; and identifying the live comment information if the accumulated online duration is above a first duration threshold.
In an exemplary embodiment of the present disclosure, the information processing module 83 may be further configured to perform: acquiring the user level and accumulated online duration corresponding to a live client of the second device; and identifying the live comment information if the user level is above the level threshold and the accumulated online duration is above the first duration threshold.
In an exemplary embodiment of the present disclosure, the information processing module 83 may be further configured to perform: acquiring the time point at which the virtual object last executed the target action; calculating the time interval between the current time point and that time point; and if the time interval is greater than a second duration threshold, controlling the virtual object to execute the target action.
In an exemplary embodiment of the present disclosure, the action control module 85 may be configured to perform: determining a target layer set corresponding to the execution of the target expression action in the layer set of the virtual object; and respectively controlling each target layer in the target layer set based on the target expression action so as to execute the expression action.
In an exemplary embodiment of the present disclosure, the motion control module 85 may be further configured to perform: acquiring voice information associated with a target action; and when the virtual object is controlled to execute the target action, playing the voice information.
In an exemplary embodiment of the present disclosure, the action control module 85 may be configured to control the virtual object to execute the target action according to a control instruction received from an external device and the live comment information.
Further, a live virtual object control device applied to the second device is provided.
Fig. 9 schematically shows a block diagram of a live virtual object control apparatus applied to a second device according to the present disclosure. Referring to fig. 9, the live virtual object control apparatus 9 according to an exemplary embodiment of the present disclosure may include: the system comprises an information reading module 91, an information identification module 93 and an information interaction module 95.
Specifically, the information reading module 91 may be configured to obtain live comment information input by a user in live broadcast; the live scene comprises a virtual object; the information recognition module 93 may be configured to match the live comment information with a target action in the executable action set of the virtual object, and acquire an identifier corresponding to the target action; the information interaction module 95 may be configured to send the identifier to the first device, so that the first device controls the virtual object to execute the target action according to the identifier.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
The program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical disk, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 1000 according to this embodiment of the invention is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, a bus 1030 connecting different system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the storage unit stores program code that is executable by the processing unit 1010 to cause the processing unit 1010 to perform steps according to various exemplary embodiments of the present invention as described in the "exemplary methods" section above in this specification.
The storage unit 1020 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)10201 and/or a cache memory unit 10202, and may further include a read-only memory unit (ROM) 10203.
The memory unit 1020 may also include a program/utility 10204 having a set (at least one) of program modules 10205, such program modules 10205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1030 may be any one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, and a local bus using any of a variety of bus architectures.
The electronic device 1000 may also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1050. Also, the electronic device 1000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1060. As shown, the network adapter 1060 communicates with the other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (14)

1. A method for controlling a virtual object in live broadcasting, applied to a first device, characterized in that the method for controlling the virtual object in live broadcasting comprises the following steps:
displaying a live broadcast picture, wherein the live broadcast picture comprises a virtual object;
receiving live comment information sent by a second device;
and if the live comment information is matched with a target action in the executable action set of the virtual object, controlling the virtual object to execute the target action.
2. The virtual object control method in live broadcast according to claim 1, characterized in that the method further comprises:
recognizing the live comment information to obtain a recognition result;
and comparing the recognition result with control trigger information corresponding to each action in the executable action set of the virtual object, and determining the action whose control trigger information matches the recognition result as the target action.
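By way of illustration only (not part of the claims), a minimal Python sketch of how the comparison in claims 1 and 2 could be realized: the recognition result is treated as a set of words from the comment, and each action's control trigger information is a set of keywords. The action names, trigger keywords, and function names are assumptions for the example, not terms taken from the patent.

# Illustrative sketch only: one possible comparison of the recognition result
# against per-action control trigger information. All names are assumed.

EXECUTABLE_ACTION_SET = {
    "wave": {"triggers": {"hello", "hi", "wave"}},
    "dance": {"triggers": {"dance", "music"}},
    "bow": {"triggers": {"thanks"}},
}

def recognize(comment: str) -> set:
    """Tokenize the live comment into lowercase words (the recognition result)."""
    return set(comment.lower().split())

def match_target_action(comment: str):
    """Compare the recognition result with each action's control trigger
    information and return the first matching action, or None."""
    words = recognize(comment)
    for action, info in EXECUTABLE_ACTION_SET.items():
        if words & info["triggers"]:
            return action
    return None

if __name__ == "__main__":
    print(match_target_action("hello everyone"))  # -> "wave"
    print(match_target_action("nice stream"))     # -> None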
3. The virtual object control method in live broadcast according to claim 2, wherein recognizing the live comment information comprises:
acquiring a user level corresponding to a live client of the second device;
and if the user level is higher than a level threshold, recognizing the live comment information.
4. The virtual object control method in live broadcast according to claim 2, wherein recognizing the live comment information comprises:
acquiring an accumulated online duration corresponding to the live client of the second device;
and if the accumulated online duration is greater than a first duration threshold, recognizing the live comment information.
5. The virtual object control method in live broadcast according to claim 2, wherein recognizing the live comment information comprises:
acquiring a user level and an accumulated online duration corresponding to the live client of the second device;
and if the user level is higher than a level threshold and the accumulated online duration is greater than a first duration threshold, recognizing the live comment information.
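By way of illustration only, a sketch of the gating conditions in claims 3 to 5: recognition is attempted only when the user level and accumulated online duration of the commenting viewer clear their thresholds. The ViewerProfile type, field names, and threshold values are assumptions for the example.

# Illustrative sketch only: gate comment recognition on user level and
# accumulated online duration. Types, names, and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class ViewerProfile:
    user_level: int
    online_seconds: int  # accumulated online duration of the live client

LEVEL_THRESHOLD = 10
FIRST_DURATION_THRESHOLD = 30 * 60  # 30 minutes, in seconds

def should_recognize(profile: ViewerProfile) -> bool:
    """Recognize live comment information only for sufficiently senior viewers
    (claim 5 combines both conditions; claims 3 and 4 use one each)."""
    return (profile.user_level > LEVEL_THRESHOLD
            and profile.online_seconds > FIRST_DURATION_THRESHOLD)

if __name__ == "__main__":
    print(should_recognize(ViewerProfile(user_level=12, online_seconds=3600)))  # True
    print(should_recognize(ViewerProfile(user_level=3, online_seconds=3600)))   # False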
6. The virtual object control method in live broadcast according to claim 2, wherein controlling the virtual object to execute the target action comprises:
acquiring the time point at which the virtual object last executed the target action;
calculating the time interval between the current time point and the time point at which the virtual object last executed the target action;
and if the time interval is greater than a second duration threshold, controlling the virtual object to execute the target action.
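By way of illustration only, a sketch of the cooldown check in claim 6: the time point of the last execution of each action is cached, and the action is executed again only when the elapsed interval exceeds a threshold. The function names and the 10-second value are assumptions for the example.

# Illustrative sketch only: per-action cooldown before repeating the target
# action. Names and the threshold value are assumed.
import time

SECOND_DURATION_THRESHOLD = 10.0  # seconds between repeats of the same action
_last_executed = {}               # action name -> time point of last execution

def try_execute(action: str, execute) -> bool:
    """Execute the target action only if enough time has passed since the
    virtual object last executed it; otherwise skip it."""
    now = time.monotonic()
    last = _last_executed.get(action)
    if last is not None and (now - last) <= SECOND_DURATION_THRESHOLD:
        return False  # still cooling down
    _last_executed[action] = now
    execute(action)
    return True

if __name__ == "__main__":
    try_execute("wave", lambda a: print(f"virtual object performs: {a}"))
    try_execute("wave", lambda a: print(f"virtual object performs: {a}"))  # suppressed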
7. The virtual object control method in live broadcast according to claim 1, wherein the target action is a target expression action, the virtual object comprises a plurality of layers, and controlling the virtual object to execute the target action comprises:
determining, in the layer set of the virtual object, a target layer set corresponding to the target expression action;
and controlling each target layer in the target layer set respectively based on the target expression action, so as to execute the target expression action.
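By way of illustration only, a sketch of the layer-based control in claim 7: a target expression action is mapped to the subset of the virtual object's layers it needs, and each target layer is driven separately. The layer names and the mapping are assumptions for the example.

# Illustrative sketch only: drive a subset of layers to perform a target
# expression action. Layer names and the mapping are assumed.

VIRTUAL_OBJECT_LAYERS = {"eyes", "mouth", "brows", "body", "hair"}

# Target expression action -> the target layer set it needs to control.
EXPRESSION_TO_LAYERS = {
    "smile": {"eyes", "mouth"},
    "surprised": {"eyes", "mouth", "brows"},
}

def perform_expression(expression: str) -> None:
    """Determine the target layer set for the expression, then control each
    target layer separately so that together they render the expression."""
    target_layers = EXPRESSION_TO_LAYERS.get(expression, set()) & VIRTUAL_OBJECT_LAYERS
    for layer in sorted(target_layers):
        # A real renderer would swap textures or play a per-layer animation here.
        print(f"animate layer '{layer}' for expression '{expression}'")

if __name__ == "__main__":
    perform_expression("smile")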
8. The virtual object control method in live broadcast according to any one of claims 1 to 7, further comprising:
acquiring voice information associated with the target action;
and when the virtual object is controlled to execute the target action, playing the voice information.
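By way of illustration only, a sketch of claim 8: voice information associated with the target action is played while the action is executed. The audio paths and the play_audio stand-in are assumptions for the example; any audio playback API could take its place.

# Illustrative sketch only: pair voice information with the target action.
# Paths and the playback stand-in are assumed.

ACTION_VOICE = {
    "wave": "assets/voice/hello.wav",
    "bow": "assets/voice/thanks.wav",
}

def play_audio(path: str) -> None:
    print(f"(pretend to play audio file: {path})")  # stand-in for a real player

def execute_with_voice(action: str) -> None:
    """Play the associated voice information while the action is executed."""
    print(f"virtual object performs: {action}")
    voice = ACTION_VOICE.get(action)
    if voice is not None:
        play_audio(voice)

if __name__ == "__main__":
    execute_with_voice("wave")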
9. The virtual object control method in live broadcast according to claim 1, further comprising:
sending a control instruction by means of an external device;
and controlling, by the first device, the virtual object to execute the target action according to the received control instruction and the live comment information.
10. A virtual object control method in live broadcast, applied to a second device, characterized in that the method comprises the following steps:
acquiring live comment information input by a user during a live broadcast, wherein the live scene comprises a virtual object;
if the live comment information matches a target action in an executable action set of the virtual object, acquiring an identifier corresponding to the target action;
and sending the identifier to a first device, so that the first device controls the virtual object to execute the target action according to the identifier.
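By way of illustration only, a sketch of the second-device flow in claim 10: after a local match, only an identifier of the target action is sent to the first device, which keeps the message small and leaves rendering to the first device. The identifier values, message format, and send() stub are assumptions for the example.

# Illustrative sketch only: match locally, then send an action identifier to
# the first device. Identifier values, message format, and send() are assumed.
import json

ACTION_IDS = {"wave": 1, "dance": 2, "bow": 3}

def send(payload: bytes) -> None:
    """Stand-in for the transport to the first device (e.g. a long connection)."""
    print("sent to first device:", payload.decode("utf-8"))

def handle_comment(comment: str, match_target_action) -> None:
    """If the comment matches a target action, look up its identifier and send
    it so the first device can drive the virtual object."""
    action = match_target_action(comment)
    if action is None:
        return
    message = {"type": "virtual_object_action", "action_id": ACTION_IDS[action]}
    send(json.dumps(message).encode("utf-8"))

if __name__ == "__main__":
    handle_comment("hello everyone", lambda c: "wave" if "hello" in c else None)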
11. A virtual object control device in live broadcast, applied to a first device, characterized by comprising:
a picture display module, configured to display a live broadcast picture, wherein the live broadcast picture comprises a virtual object;
an information processing module, configured to receive live comment information sent by a second device;
and an action control module, configured to control the virtual object to execute a target action if the live comment information matches the target action in an executable action set of the virtual object.
12. A virtual object control device in live broadcast, applied to a second device, characterized by comprising:
an information reading module, configured to acquire live comment information input by a user during a live broadcast, wherein the live scene comprises a virtual object;
an information recognition module, configured to match the live comment information with a target action in an executable action set of the virtual object and acquire an identifier corresponding to the target action;
and an information interaction module, configured to send the identifier to a first device, so that the first device controls the virtual object to execute the target action according to the identifier.
13. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the virtual object control method in live broadcast according to any one of claims 1 to 10.
14. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the virtual object control method in live broadcast according to any one of claims 1 to 10 by executing the executable instructions.
CN202011017157.7A 2020-09-24 2020-09-24 Virtual object control method and device in live broadcast, storage medium and electronic equipment Pending CN112135160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011017157.7A CN112135160A (en) 2020-09-24 2020-09-24 Virtual object control method and device in live broadcast, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN112135160A true CN112135160A (en) 2020-12-25

Family

ID=73839706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011017157.7A Pending CN112135160A (en) 2020-09-24 2020-09-24 Virtual object control method and device in live broadcast, storage medium and electronic equipment




Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2008029466A1 (en) * 2006-09-07 2010-01-21 学校法人 大阪電気通信大学 Chat terminal device, chat system
CN107423809A (en) * 2017-07-07 2017-12-01 北京光年无限科技有限公司 The multi-modal exchange method of virtual robot and system applied to net cast platform
CN107680157A (en) * 2017-09-08 2018-02-09 广州华多网络科技有限公司 It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN108322832A (en) * 2018-01-22 2018-07-24 广州市动景计算机科技有限公司 Comment on method, apparatus and electronic equipment
WO2019163129A1 (en) * 2018-02-26 2019-08-29 三菱電機株式会社 Virtual object display control device, virtual object display system, virtual object display control method, and virtual object display control program
CN108986192A (en) * 2018-07-26 2018-12-11 北京运多多网络科技有限公司 Data processing method and device for live streaming
CN109120985A (en) * 2018-10-11 2019-01-01 广州虎牙信息科技有限公司 Image display method, apparatus and storage medium in live streaming
CN109275040A (en) * 2018-11-06 2019-01-25 网易(杭州)网络有限公司 Exchange method, device and system based on game live streaming
CN109727303A (en) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer equipment, storage medium and terminal
CN110087121A (en) * 2019-04-30 2019-08-02 广州虎牙信息科技有限公司 Virtual image display methods, virtual image display device and electronic equipment
CN110634483A (en) * 2019-09-03 2019-12-31 北京达佳互联信息技术有限公司 Man-machine interaction method and device, electronic equipment and storage medium
CN110662083A (en) * 2019-09-30 2020-01-07 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN110850983A (en) * 2019-11-13 2020-02-28 腾讯科技(深圳)有限公司 Virtual object control method and device in video live broadcast and storage medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115061B (en) * 2021-04-07 2023-03-10 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113115061A (en) * 2021-04-07 2021-07-13 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113325952A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Method, apparatus, device, medium and product for presenting virtual objects
CN113422977A (en) * 2021-07-07 2021-09-21 上海商汤智能科技有限公司 Live broadcast method and device, computer equipment and storage medium
CN113487709A (en) * 2021-07-07 2021-10-08 上海商汤智能科技有限公司 Special effect display method and device, computer equipment and storage medium
CN113422977B (en) * 2021-07-07 2023-03-14 上海商汤智能科技有限公司 Live broadcast method and device, computer equipment and storage medium
CN113630613A (en) * 2021-07-30 2021-11-09 出门问问信息科技有限公司 Information processing method, device and storage medium
CN113630613B (en) * 2021-07-30 2023-11-10 出门问问信息科技有限公司 Information processing method, device and storage medium
WO2023075682A3 (en) * 2021-10-25 2023-08-03 脸萌有限公司 Image processing method and apparatus, and electronic device, and computer-readable storage medium
WO2023075681A3 (en) * 2021-10-25 2023-08-24 脸萌有限公司 Image processing method and apparatus, and electronic device, and computer-readable storage medium
CN114155322A (en) * 2021-12-01 2022-03-08 北京字跳网络技术有限公司 Scene picture display control method and device and computer storage medium
CN114286155A (en) * 2021-12-07 2022-04-05 咪咕音乐有限公司 Picture element modification method, device, equipment and storage medium based on barrage
WO2023103603A1 (en) * 2021-12-10 2023-06-15 腾讯科技(深圳)有限公司 Livestreaming interaction method and apparatus, device, and storage medium
CN114327182A (en) * 2021-12-21 2022-04-12 广州博冠信息科技有限公司 Special effect display method and device, computer storage medium and electronic equipment
CN114327182B (en) * 2021-12-21 2024-04-09 广州博冠信息科技有限公司 Special effect display method and device, computer storage medium and electronic equipment
CN115314749A (en) * 2022-06-15 2022-11-08 网易(杭州)网络有限公司 Interactive information response method and device and electronic equipment
CN115314749B (en) * 2022-06-15 2024-03-22 网易(杭州)网络有限公司 Response method and device of interaction information and electronic equipment

Similar Documents

Publication Publication Date Title
CN112135160A (en) Virtual object control method and device in live broadcast, storage medium and electronic equipment
US11158102B2 (en) Method and apparatus for processing information
CN111741326B (en) Video synthesis method, device, equipment and storage medium
US11882319B2 (en) Virtual live video streaming method and apparatus, device, and readable storage medium
CN110517689B (en) Voice data processing method, device and storage medium
CN111050201B (en) Data processing method and device, electronic equipment and storage medium
US20210082394A1 (en) Method, apparatus, device and computer storage medium for generating speech packet
CN111654715B (en) Live video processing method and device, electronic equipment and storage medium
CN112653902B (en) Speaker recognition method and device and electronic equipment
CN109493888B (en) Cartoon dubbing method and device, computer-readable storage medium and electronic equipment
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
EP4099709A1 (en) Data processing method and apparatus, device, and readable storage medium
US20230047858A1 (en) Method, apparatus, electronic device, computer-readable storage medium, and computer program product for video communication
CN110931042A (en) Simultaneous interpretation method and device, electronic equipment and storage medium
CN112399258A (en) Live playback video generation playing method and device, storage medium and electronic equipment
CN111629253A (en) Video processing method and device, computer readable storage medium and electronic equipment
CN111862280A (en) Virtual role control method, system, medium, and electronic device
US11886484B2 (en) Music playing method and apparatus based on user interaction, and device and storage medium
CN113299312A (en) Image generation method, device, equipment and storage medium
CN111343473A (en) Data processing method and device for live application, electronic equipment and storage medium
CN110349577B (en) Man-machine interaction method and device, storage medium and electronic equipment
CN112954390A (en) Video processing method, device, storage medium and equipment
CN113411674A (en) Video playing control method and device, electronic equipment and storage medium
CN113703579B (en) Data processing method, device, electronic equipment and storage medium
CN113282791A (en) Video generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201225