CN113194329B - Live interaction method, device, terminal and storage medium - Google Patents

Live interaction method, device, terminal and storage medium

Info

Publication number
CN113194329B
Authority
CN
China
Prior art keywords
interaction
instruction
terminal
live broadcast
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110507538.1A
Other languages
Chinese (zh)
Other versions
CN113194329A (en)
Inventor
陈盛福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Fanxing Huyu IT Co Ltd
Original Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Fanxing Huyu IT Co Ltd
Priority to CN202110507538.1A
Publication of CN113194329A
Application granted
Publication of CN113194329B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a live broadcast interaction method, apparatus, terminal and storage medium, and relates to the field of live broadcasting. The method comprises the following steps: in response to an AR object setting instruction, displaying an AR object in a live broadcast picture, the live broadcast picture being acquired by a live broadcast terminal through a camera; receiving an AR object interaction instruction triggered by the live broadcast terminal or an audience terminal; and controlling the AR object to perform the interaction action corresponding to the AR object interaction instruction. With the live broadcast interaction method provided by the application, the AR object can be controlled to interact with the audience not only by the anchor terminal but also by audience terminals, which enriches the interaction modes available during live broadcasting and increases the interaction participation of audience terminals in the live broadcast.

Description

Live interaction method, device, terminal and storage medium
Technical Field
Embodiments of the present application relate to the field of video live broadcasting, and in particular to a live broadcast interaction method, apparatus, terminal and storage medium.
Background
Live broadcasting is an emerging form of online social interaction in which content is watched over a network on the same or different platforms, and live broadcast platforms have become a new kind of social medium. Independent signal acquisition equipment (audio and video) is set up on site and fed into a broadcasting end (directing equipment or platform), and the stream is then uploaded to a server over the network and published to a website for viewing. Because the live broadcast process is independently controllable, a certain degree of interaction with the audience watching the live broadcast can be achieved.
In the related art, when an anchor broadcasts live through a terminal, interaction with audience terminals is realized through bullet-screen comments displayed on the live broadcast picture, or by connecting with audience members and receiving virtual gifts presented by them.
However, in the related art, the ways in which the audience can interact with the anchor while watching a live broadcast are relatively limited, and the degree of interaction is low.
Disclosure of Invention
The embodiment of the application provides a live interaction method, a live interaction device, a terminal and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a live interaction method, where the method includes:
responding to an AR object setting instruction, and displaying an AR object in a live broadcast picture, wherein the live broadcast picture is acquired by a live broadcast terminal through a camera;
receiving an AR object interaction instruction, wherein the AR object interaction instruction is triggered by the live broadcast terminal or the audience terminal;
and controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction.
In another aspect, an embodiment of the present application provides a live interaction device, where the device includes:
the display module is used for responding to the AR object setting instruction and displaying the AR object in a live broadcast picture, wherein the live broadcast picture is acquired by the live broadcast terminal through the camera;
The interactive instruction receiving module is used for receiving an AR object interactive instruction, wherein the AR object interactive instruction is triggered by the live broadcast terminal or the audience terminal;
and the interaction module is used for controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction.
In another aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the live interaction method of the above aspect.
In another aspect, embodiments of the present application provide a computer-readable storage medium storing at least one instruction for execution by a processor to implement a live interaction method as described in the above aspects.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the live interaction method provided in the above aspect.
The technical solutions provided in the embodiments of the present application have at least the following beneficial effects:
According to the method provided in the embodiments of the present application, an AR object is displayed in the picture acquired by the live broadcast terminal in response to an AR object setting instruction, and after an AR object interaction instruction sent by the live broadcast terminal or an audience terminal is received, the AR object is controlled to perform the corresponding interaction action based on that instruction. With this scheme, the AR object can be controlled to interact with the audience not only by the anchor terminal but also by audience terminals, which enriches the interaction modes available during live broadcasting and increases the interaction participation of audience terminals in the live broadcast.
Drawings
FIG. 1 is an interface schematic diagram illustrating implementation of a live interaction method according to an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a live interaction method provided in an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of displaying a placement object provided in an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of a virtual gift presented by an audience terminal in accordance with an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a live interaction method provided in another exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of rendering AR objects based on depth information of a 3D object provided by one exemplary embodiment of the present application;
FIG. 8 is a flowchart of a live interaction method provided in another exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of determining a point cloud movement amount based on an interactive action according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of performing an interactive action based on 3D object depth information and point cloud movement amount according to an exemplary embodiment of the present application;
FIG. 11 is a flowchart of a live interaction method provided in another exemplary embodiment of the present application;
FIG. 12 is a flowchart of a live interaction method provided in another exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of a viewer terminal selecting a target interaction action provided in an exemplary embodiment of the present application;
FIG. 14 is a schematic diagram of an AR object provided by one exemplary embodiment of the present application performing a target interactive action mimicking user behavior;
FIG. 15 is a flowchart of a live interaction method provided in another exemplary embodiment of the present application;
FIG. 16 is a schematic diagram of an AR object performing a target interactive action at an interactive object location according to an exemplary embodiment of the present application;
FIG. 17 is a flowchart illustrating a live interaction method according to an exemplary embodiment of the present application;
FIG. 18 illustrates an interface schematic of a custom AR object provided by one embodiment of the present application;
FIG. 19 is a block diagram illustrating a live interaction device according to one embodiment of the present application;
fig. 20 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purposes of clarity, technical solutions and advantages of the present application, the following description will further describe in detail the embodiments of the present application with reference to the accompanying drawings.
For ease of understanding, the terms referred to in the embodiments of the present application are described below:
augmented reality (Augmented Reality, AR): the AR technology is a technology for skillfully fusing virtual information with the real world, and widely uses various technical means such as multimedia, three-dimensional modeling, real-time tracking and registering, intelligent interaction, sensing and the like, and applies virtual information such as characters, images, three-dimensional models, music, videos and the like generated by a computer to the real world after simulation, wherein the two information are mutually complemented, so that the enhancement of the real world is realized.
3D: 3D (three-dimensional) refers to three-dimensional graphics; displaying a 3D graphic on a computer means presenting a three-dimensional figure on a flat plane. Unlike the real world, which has genuine depth and distance, a computer only shows images that look like the real world, so 3D graphics on a computer merely appear three-dimensional to the human eye. The naked eye perceives depth through cues such as nearby objects appearing larger and distant objects smaller. A computer screen is flat and two-dimensional; the reason a viewer can nevertheless perceive a seemingly solid three-dimensional image is that differences in colour and grey level create a visual illusion, so the two-dimensional screen is perceived as a three-dimensional picture. Based on colorimetry, the raised edge portions of a three-dimensional object generally appear highlighted, while recessed portions appear darker because they are blocked from the light. This principle is widely used when drawing buttons and 3D text in web pages and other applications. For example, to draw 3D text, the text is displayed in a bright colour at its original position and outlined in a darker colour at a position offset to the lower left or upper right, which visually produces a 3D effect. In practice, two 2D copies of the same text in the same font but different colours can be drawn at slightly different positions; as long as the coordinates of the two copies are suitable, text with various 3D effects can be produced visually.
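As a concrete illustration of the two-offset-copies technique described above, the following sketch draws the same text twice with Pillow; the colours, offset and font are arbitrary choices for illustration, not values taken from the patent.

```python
# Minimal sketch of the pseudo-3D text trick: a darker copy offset to the
# lower-left acts as the shaded edge, a brighter copy at the original
# position acts as the lit face. (Pillow is used here only for illustration.)
from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (320, 120), (190, 190, 190))
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()

text, x, y, offset = "LIVE", 40, 50, 2
draw.text((x - offset, y + offset), text, fill=(80, 80, 80), font=font)   # dark offset copy
draw.text((x, y), text, fill=(255, 255, 255), font=font)                  # bright face copy

img.save("pseudo_3d_text.png")
```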
In the related art, when an anchor uses an anchor terminal to broadcast live over the network, audience terminals can only interact with the anchor terminal by sending bullet-screen comments or giving virtual gifts; the interaction modes are limited, the degree of interaction is low, and the interest of the audience and fans watching the live broadcast cannot be fully stimulated.
In the embodiments of the present application, the anchor terminal triggers the AR object to perform a corresponding interaction action based on a received AR object interaction instruction sent by an audience terminal, so that interaction between the audience terminal and the AR object is realized and the participation of the audience in the live broadcast process is improved.
Fig. 1 is an interface schematic diagram of an implementation process of a live interaction method according to an embodiment of the present application. The anchor terminal 110 logs in an anchor account and performs network live broadcast through a live broadcast application program, and in the live broadcast process, the anchor collects live broadcast environment images through a camera, and after the anchor terminal 110 selects the AR pet 111, the AR pet 111 is placed on an object displayed in a live broadcast picture, which may be a floor, a table, a cabinet, a bed and other objects in the live broadcast picture. The anchor terminal 110 can control the AR pet 111 to display interactive motion to the audience terminal 120 by inputting instructions at any time.
The audience terminal 120 logs in a corresponding audience account and watches live webcast through a live broadcast room, when the audience terminal 120 needs to interact with an AR pet placed in the live broadcast room by the anchor terminal 110, an opportunity for interaction is obtained by transferring virtual resources (such as giving a virtual gift to the anchor) to the anchor terminal 110, and when the anchor terminal 110 receives an AR object interaction instruction sent by the audience terminal 120, the AR pet 111 is controlled to execute a corresponding interaction action, so that interaction between the audience terminal 120 and the AR pet 111 is realized.
FIG. 2 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: a anchor terminal 210, a server 220, and a viewer terminal 230.
The anchor terminal 210 installs and runs a live broadcast application program with an AR live function, that is, one capable of adding an AR object to the acquired live broadcast picture and controlling the AR object to perform corresponding actions. The application may be any one of a game live application, a comprehensive live application, a chat live application, a food live application, or a shopping live application. The anchor terminal 210 is the terminal used by the anchor; it logs in to the corresponding live application to broadcast in a live room. The anchor uses the anchor terminal 210 to broadcast over the network and, using the AR live function, interacts with the audience terminal 230 by placing an AR object in the live room and making it perform interaction actions, the interaction actions including but not limited to at least one of: the AR object changing its body posture, walking, running, jumping, acting cute, or mimicking actions. Illustratively, the anchor controls the AR object to walk or jump around the room by inputting voice instructions, or controls the AR object (which may be an AR pet or an AR character) to interact with the anchor by imitating the anchor's limb movements, and so on.
The anchor terminal 210 is connected to the server 220 through a wireless network or a wired network.
Server 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server 220 provides background services for live applications in anchor terminal 210 and audience terminal 230. For example, server 220 may be a background server for the application programs described above. In this embodiment, the server 220 may receive the live video stream sent from the anchor terminal 210 and push the live video stream to the viewer terminal 230 that views live video; optionally, the server 220 is further configured to receive the barrage information and the transferred virtual resources sent by the audience terminal 230, and push the fused live video stream to the anchor terminal 210 and the audience terminal 230. In addition, the server 220 may also receive a connection request from the audience terminal 230 to the anchor terminal 210, so as to implement the connection interaction between the anchor terminal 210 and the audience terminal 230.
The viewer terminal 230 is connected to the server 220 through a wireless network or a wired network.
The audience terminal 230 installs and runs a live broadcast application, which may be any one of a game live application, a comprehensive live application, a chat live application, a food live application, or a shopping live application. The audience terminal 230 is a terminal used by a member of the audience watching the live broadcast; the corresponding live application is installed on it and used to enter the live room and watch the broadcast. The audience terminal 230 obtains the opportunity to interact with the AR object by transferring virtual resources to the anchor terminal 210 (for example, giving a virtual gift to the anchor), and controls the AR object to perform corresponding interaction actions by sending an AR object interaction instruction, the interaction actions including but not limited to at least one of: the AR object changing its body posture, walking, running, jumping, acting cute, or mimicking actions. Illustratively, after sending gifts to the anchor terminal 210, the audience terminal 230 can trigger the AR object (which may be an AR pet or an AR character) to run a lap around the room, control the AR object to follow the viewer's own limb movements and make interaction actions, and so on.
Alternatively, the live applications installed on the anchor terminal 210 and the viewer terminal 230 are the same, or the live applications installed on the two terminals are the same type of live application of different control system platforms. The anchor terminal 210 is the only terminal controlled by the anchor, and the audience terminal 230 may refer broadly to one of a plurality of terminals, the present embodiment being illustrated only with the anchor terminal 210 and the audience terminal 230. The device types of the anchor terminal 210 and the viewer terminal 230 are the same or different, and include: at least one of a smart phone, a tablet computer, a smart television, a portable computer, and a desktop computer. The following embodiments are illustrated with the terminal comprising a smart phone.
Those skilled in the art will recognize that the number of terminals may be greater or lesser. Such as the above-mentioned terminals may be only one, or the above-mentioned terminals may be several tens or hundreds, or more. The number of terminals and the device type are not limited in the embodiment of the present application.
Fig. 3 is a flowchart of a live interaction method according to an exemplary embodiment of the present application, where the embodiment is illustrated by taking the method used in the anchor terminal shown in fig. 2 as an example. The method comprises the following steps:
In step 301, in response to the AR object setting instruction, an AR object is displayed in a live broadcast picture, where the live broadcast picture is a picture acquired by the live broadcast terminal through the camera.
When the anchor needs to display an AR object in the live broadcast picture, an AR object selection bar is called up in the user interface and the AR object to be displayed is selected from the AR object selection list, which contains at least one AR object. Optionally, the AR object may be a virtual pet, a virtual character, a virtual pendant, and the like. This embodiment is described taking the display of a virtual pet in the live broadcast picture as an example.
The AR objects in the AR object selection list are displayed in the form of thumbnails, which may optionally display static or dynamic icons. In addition, the thumbnail may be a general two-dimensional picture display or a 3D picture display, and the embodiment is not limited to a specific form of the thumbnail.
In one possible implementation, when the anchor needs to display the AR object in the live broadcast screen, the AR object selection list is called out by means of a finger sliding screen, the AR object in the AR object selection list is displayed in the form of a 3D thumbnail, and when the 3D thumbnail of the AR object is clicked with a finger, the selected 3D thumbnail is displayed in a dynamic form, and the dynamic display content can be an interactive action that can be performed by the AR object, so that the anchor can make the selection better.
The selected AR object is a virtual object displayed in the live broadcast picture in the form of a 3D image. In order to fuse the selected AR object naturally with real-world objects, a live broadcast environment image needs to be acquired through the camera and a placement object contained in the image identified; the placement object is the object on which the selected AR object is placed. For example, when the anchor terminal receives the AR object setting instruction, the selected AR object is displayed on the selected floor or table, and the picture is rendered and fused so that the AR object is presented more realistically in the live broadcast picture.
As shown in fig. 4, in one possible implementation, when the anchor has determined the AR object, the anchor terminal 400 collects an environment image of its surroundings through the camera, identifies the placement objects contained in the environment image with an object recognition algorithm, and displays the recognition results as text at the positions of the placement objects. For example, when the AR object selected at the anchor terminal is the AR pet 410, the anchor is prompted by the displayed text to decide on which placement object the AR pet 410 should be placed, and when the anchor terminal receives the AR object setting instruction, the AR pet 410 is displayed at the corresponding position in the live broadcast picture.
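A minimal sketch of this placement step is given below. The detect_objects callable is a hypothetical stand-in for whatever object recognition algorithm the terminal uses; it is assumed to return a list of labels with bounding boxes, which the patent does not specify.

```python
# Hedged sketch of step 301: find candidate placement objects in the camera
# frame and anchor the selected AR object on one of them.
PLACEABLE_LABELS = {"floor", "table", "cabinet", "bed"}

def place_ar_object(frame, ar_object, detect_objects):
    """detect_objects(frame) -> [{"label": str, "box": (x1, y1, x2, y2)}, ...] (assumed)."""
    candidates = [d for d in detect_objects(frame) if d["label"] in PLACEABLE_LABELS]
    if not candidates:
        return None                              # nothing suitable to place the AR object on
    target = candidates[0]                       # in practice the anchor picks one via the UI
    x1, y1, x2, y2 = target["box"]
    ar_object["anchor_point"] = ((x1 + x2) // 2, y2)   # bottom-centre of the placement object
    ar_object["placed_on"] = target["label"]
    return ar_object
```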
Step 302, an AR object interaction instruction is received, where the AR object interaction instruction is triggered by a live broadcast terminal or a viewer terminal.
When a member of the audience wants to interact with the AR object displayed in the live broadcast picture, the AR object is triggered to perform an interaction action by sending an AR object interaction instruction to the anchor terminal, where the AR object interaction instruction contains the interaction action that the audience terminal requires the AR object to perform.
Similarly, the anchor terminal can also receive an AR object interaction instruction triggered by itself, namely, the anchor triggers the AR object interaction instruction in the live broadcast process to control the AR object to execute corresponding interaction actions.
In one possible implementation, the audience terminal may trigger the AR object interaction instruction by presenting a virtual gift to the anchor terminal. As shown in fig. 5, a viewer views a live webcast through a viewer terminal 500, an AR object 510 is displayed in a live webcast picture, and when the viewer terminal 500 needs to interact with the AR object, by presenting a virtual gift, such as a virtual gift of "rocket" or "pet food", to a host terminal, an AR object interaction instruction is transmitted to the host terminal after presenting the virtual gift.
In step 303, the AR object is controlled to execute the interaction action corresponding to the AR object interaction instruction.
When the anchor terminal receives an AR object interaction instruction triggered by the audience terminal or by the anchor terminal itself, it obtains the interaction action contained in the instruction and controls the AR object to perform that action. For example, if the AR object selected by the anchor terminal is an AR puppy and the interaction action contained in the received AR object interaction instruction is "pet the puppy and make it wag its tail", the AR puppy in the live broadcast picture is controlled to stand up from the floor and wag its tail.
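The dispatch logic of steps 302-303 can be sketched as below; the instruction layout and the action names are illustrative assumptions rather than the patent's actual protocol.

```python
# Toy sketch: receive an AR object interaction instruction and make the AR
# object perform the interaction action it carries.
class ARPet:
    """Stand-in for the rendered AR object."""
    def __init__(self):
        self.current_action = "idle"

    def play(self, action):
        self.current_action = action
        print(f"AR pet performs: {action}")

KNOWN_ACTIONS = {"wag_tail", "jump", "sit", "roll_over"}

def handle_interaction_instruction(instruction, pet):
    action = instruction.get("action")
    if action not in KNOWN_ACTIONS:
        return False                 # unknown action: ignore the instruction
    pet.play(action)
    return True

# e.g. an instruction triggered by an audience terminal
handle_interaction_instruction({"source": "audience", "action": "wag_tail"}, ARPet())
```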
In summary, in the embodiment of the present application, the AR object is displayed in the picture acquired by the live broadcast terminal through the AR object setting instruction, and after the AR object interaction instruction sent by the live broadcast terminal or the audience terminal is received, the AR object is controlled to execute the corresponding interaction action based on the AR object interaction instruction; by adopting the scheme provided by the embodiment of the application, the interaction between the AR object and the audience is controlled by the anchor terminal, and the AR object can be controlled by the audience terminal to interact, so that the interaction mode in the live broadcast process is enriched, and the interaction participation of the audience terminal in the live broadcast process is improved.
Fig. 6 is a flowchart of a live interaction method according to another exemplary embodiment of the present application, where the embodiment is illustrated by taking the method used in the anchor terminal shown in fig. 2 as an example. The method comprises the following steps.
In step 601, in response to an AR object setting instruction, an AR object is displayed in a live broadcast picture, where the live broadcast picture is a picture acquired by a live broadcast terminal through a camera.
The implementation of this step may refer to step 301, and this embodiment is not described herein.
In step 602, an AR object interaction instruction is received, where the AR object interaction instruction is triggered by the audience terminal or the anchor terminal.
The implementation of this step may refer to step 302, and this embodiment is not described herein.
Step 603, identify 3D objects in the live environment.
Because the AR object is a virtual object placed in the live broadcast picture according to the collected environment image, in order to improve the realism of the rendered picture when the AR object performs interaction actions during the live broadcast, the 3D objects contained in the live broadcast picture can be identified, and a relatively realistic live broadcast picture can be rendered according to the positional relationship between the 3D objects and the AR object. A 3D object is an object in the live broadcast environment collected through the camera during the broadcast; for example, when the anchor broadcasts from home, the 3D objects contained in the live broadcast picture may be tables, beds, furniture and other objects in the room. The AR object may come into contact with, or be occluded by, 3D objects in the live broadcast picture when performing an interaction action. For example, if the AR pet in the live broadcast picture moves from one position on the floor to another, but a table or another object in front of its movement path blocks the rendered image of the AR pet, then for the AR pet to appear realistically against the live broadcast environment, the pet must be rendered, based on the shape of the occluding table, as gradually becoming hidden by the table, then fully hidden, and then reappearing after it has passed the table. Therefore, before the AR object is controlled to perform the interaction action, the 3D objects in the live broadcast environment are identified from the live broadcast picture acquired by the camera, and a more realistic picture is rendered according to the position information of the 3D objects and the AR object. In one possible implementation, the 3D objects contained in the live broadcast picture may be identified by a 3D object recognition algorithm.
Step 604, based on the depth information of each 3D object, controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction in the live environment.
The live broadcast picture collected by the camera can only display the various 3D objects contained in the live broadcast environment. When the AR pet needs to perform rich interaction actions involving these 3D objects, and the AR object and the 3D objects need to be rendered together, the depth information of each 3D object must be obtained; that is, the spatial position information, distance information and so on of the 3D objects and the AR object in the live broadcast picture are determined, which makes it convenient to calculate the distance and time for the AR object to move to or past a 3D object. For example, the distance between the AR pet and a table in the live broadcast picture is determined, the time for the AR pet to perform the interaction action is calculated from the walking route and walking speed set for the AR pet, and the motion of the AR pet walking to the table and passing it is rendered in the picture.
As shown in fig. 7, AR pet 710 is a virtual image rendered in a live view, table 720 is a 3D object collected by a camera and displayed in the live view, and when AR pet 710 needs to pass behind table 720 in a moving path when performing an interactive action, since table 720 is an object displayed in front of AR pet, when rendering an image of AR pet 710, a portion not covered by table 720 (the head portion of AR pet in the figure is exposed and other portions are covered by table) is displayed, and as AR pet 710 moves, the body of AR pet 710 displayed in the live view is changed in real time, and when AR pet 710 passes completely behind table 720, the entire body of AR pet 710 is re-rendered in the live view.
In another possible implementation, when the moving path of the AR pet is in front of the table or drilled from below the table, a display screen that the AR pet shields the table or moves below the table is correspondingly rendered. When the moving path of the AR pet just passes through the table, a display picture that the AR pet directly passes through the table or bypasses the table can be also rendered.
In one possible implementation, the depth information of the 3D objects in the live broadcast picture can be determined by binocular stereo vision: two environment images of the same live broadcast environment are acquired simultaneously by two cameras, the pixel points corresponding to the same 3D object in the two images are found with a stereo matching algorithm, and the disparity is then calculated according to the triangulation principle; the disparity information can be converted into the depth information of the 3D object in the scene. Based on the stereo matching algorithm, a set of images of the same live broadcast environment taken from different angles can be captured to obtain a depth image of the scene. In addition, depth information can also be estimated indirectly by analysing features of the acquired image such as photometric and luminance characteristics.
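A sketch of the binocular-stereo route is shown below using OpenCV's block-matching stereo; the focal length and baseline are placeholder values that would normally come from camera calibration.

```python
# Depth from two simultaneously captured views: match pixels between the
# left and right images, get the disparity d, then depth = f * B / d.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

focal_px = 700.0      # focal length in pixels (placeholder, from calibration)
baseline_m = 0.12     # distance between the two cameras in metres (placeholder)

depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]   # depth map of the live scene
```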
After depth information of various 3D objects is determined, the AR object can be controlled to execute interaction actions corresponding to the AR object interaction instructions in the live broadcast environment based on the depth information of the various 3D objects.
In order to make the AR pet displayed in the live broadcast picture perform richer and more realistic interaction actions, the AR object can be constructed and rendered from a point cloud, where a point cloud is a massive set of points describing the surface characteristics of a target object; that is, the AR object is a virtual image formed from this massive set of points. The AR object displayed in the live broadcast picture, and the interaction actions it performs, are therefore rendered from a large number of points, and the form and motion of the AR object are changed by controlling changes in the positions of these points. After the anchor terminal receives the AR object interaction instruction, the change path of the points to be controlled is determined based on the specific interaction action, so as to control the AR object. Thus, as shown in fig. 8, step 604 further includes the following steps.
In step 604A, a point cloud movement amount of an interaction action point cloud corresponding to the AR object interaction instruction is determined, and the point cloud is used for controlling the AR object movement.
After the anchor terminal receives the AR object interaction instruction, coordinate information of the current point cloud of the AR object and coordinate information of the point cloud corresponding to the interaction action to be executed are obtained, and the point cloud movement amount of the point cloud when the interaction action is required to be executed is calculated.
Schematically, as shown in fig. 9, after receiving the AR object interaction instruction, the anchor terminal obtains the point cloud composition of the AR pet and the coordinate information of the point cloud in the live broadcast picture at the current moment, and calculates the point cloud composition and coordinate information corresponding to the interaction action to be performed (to improve legibility, only the point cloud forming the outline of the AR pet is shown in the figure). As shown in the figure, the arms of the AR pet are in an unfolded state at the current moment, the interaction action instructs the AR pet to fold its arms, and the expression and form of the AR pet change accordingly. As can be seen from the figure, the positions of the point clouds before and after the interaction action differ, so when the interaction action is performed, the point clouds before and after the action are compared, the point cloud movement amount of each point is calculated, and the coordinate positions of the points are changed so as to control the AR pet to perform the corresponding interaction action.
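Step 604A can be sketched with NumPy as below. Point correspondence is assumed to be index-aligned between the current pose and the target pose, which the patent does not specify.

```python
# Compute per-point movement amounts between the current point cloud and the
# point cloud of the requested interaction action, then interpolate the
# intermediate poses that the renderer displays frame by frame.
import numpy as np

def interpolate_point_cloud(current, target, n_frames=30):
    """current, target: (N, 3) arrays of point coordinates."""
    movement = target - current                     # point cloud movement amount
    for t in np.linspace(0.0, 1.0, n_frames):
        yield current + t * movement                # pose for this frame

current_pose = np.random.rand(500, 3)
target_pose = current_pose + np.array([0.0, 0.1, 0.0])   # e.g. arms folding upward
for pose in interpolate_point_cloud(current_pose, target_pose):
    pass  # hand each intermediate pose to the renderer
```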
Step 604B, based on the depth information and the amount of movement of the point cloud of each 3D object, controls the AR object to execute the interaction action corresponding to the AR object interaction instruction in the live environment.
After the anchor terminal calculates the point cloud movement amount of the AR object, the anchor terminal can control the AR object to execute the interaction action corresponding to the AR object interaction instruction in the live broadcast environment according to the depth information of the 3D object and the point cloud movement amount. When the interactive action indicates the AR object to execute continuous action, the coordinate position of the point cloud is continuously changed by calculating the position information of the AR object and the coordinate information of the point cloud in real time and calculating the distance information and the coordinate position change information between the 3D object and the AR object, so that the AR object is controlled to make rich interactive action in a live broadcast picture and form interaction with a host or audience.
As shown in fig. 10, after the anchor terminal identifies the 3D object 1010 and its spatial position coordinates, a corresponding point cloud composition is determined based on the shape of the 3D object 1010, and the distance between the AR pet 1020 and the 3D object 1010 is calculated. For example, when the distance between the AR pet 1020 and the 3D object 1010 is calculated to be 2 metres, the AR pet is controlled to move along the movement path indicated by the interaction action, and the form and motion of the AR pet 1020 are changed in real time according to the point cloud movement amounts during the movement. When the AR pet 1020 moves behind the 3D object 1010, a picture in which the AR pet 1020 is occluded by the 3D object 1010 is rendered until the AR pet 1020 has completely passed the 3D object 1010, after which it folds its arms at the designated position, thereby presenting the interaction action to the audience or the anchor.
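The occlusion behaviour described above amounts to a per-pixel depth test, sketched below; all maps are assumed to share the same resolution, and the AR depth map is assumed to come from the renderer.

```python
# Draw an AR pixel only where the AR object is closer to the camera than the
# real 3D object at that pixel, so the pet appears to pass behind the table.
import numpy as np

def composite_ar(frame, ar_rgb, ar_depth, scene_depth, ar_mask):
    """frame, ar_rgb: (H, W, 3); ar_depth, scene_depth: (H, W); ar_mask: (H, W) bool."""
    visible = ar_mask & (ar_depth < scene_depth)   # AR point is in front of the scene
    out = frame.copy()
    out[visible] = ar_rgb[visible]
    return out
```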
In the embodiment of the application, when the anchor terminal performs live broadcast, the 3D object in the live broadcast picture is identified, and the depth information of the 3D object and the position coordinates of the point cloud forming the AR object are determined, so that after the AR object interaction instruction is received, the amount of the point cloud movement of the point cloud of the execution action is determined based on the interaction action, and further, the transformation display picture between the AR object and the 3D object when the interaction action is executed is rendered, so that the AR object shows richer and real interaction action.
Under a possible application scenario, after the anchor terminal receives the AR interaction instruction sent by the audience terminal, interaction data contained in the AR object interaction instruction is required to be obtained, and a target interaction action is determined according to the interaction data, so that the AR object is controlled to execute the corresponding target interaction action.
Fig. 11 is a flowchart of a live interaction method according to another exemplary embodiment of the present application, where the embodiment is described by taking the method used in the anchor terminal shown in fig. 2 as an example. The method comprises the following steps.
In step 1101, in response to the AR object setting instruction, an AR object is displayed in a live broadcast picture, where the live broadcast picture is a picture acquired by the live broadcast terminal through the camera.
The implementation of this step may refer to step 301, and this embodiment is not described herein.
In step 1102, an AR object interaction instruction is received, where the AR object interaction instruction is triggered by the viewer terminal.
The implementation of this step may refer to step 302, and this embodiment is not described herein.
In step 1103, a target interaction action is determined based on the interaction data contained in the AR object interaction instruction, where the interaction data is data obtained when the viewer terminal receives a virtual resource transfer instruction, and the virtual resource transfer instruction is used to trigger the viewer account to transfer virtual resources to the live account.
In one possible implementation, after the anchor terminal receives the virtual resources transferred by the audience terminal, the level reached by the transfer is determined according to the transfer amount, and the AR object is then triggered to perform the corresponding interaction action according to that level. The level of the virtual resources is positively correlated with the interaction action performed by the AR object; that is, the higher the level reached by the transferred virtual resources, the richer the interaction action the AR object can perform. For example, the audience terminal gives a rocket to the anchor terminal; when the anchor terminal receives the gifted rocket, the virtual amount is calculated according to the gift value, and the AR object is triggered to perform a corresponding interaction action, such as wagging its tail, based on that amount. Thus, as shown in fig. 12, step 1103 may further include the following steps.
In step 1103A, virtual resource transfer amount data included in the AR object interaction instruction is obtained.
When a member of the audience wants to interact with the AR object, virtual resources are transferred to the anchor terminal to obtain the opportunity to interact with it. As shown in fig. 5, the virtual resources are virtual gifts presented to the anchor terminal; a virtual gift may be flowers or pet food fed to the virtual pet, and each virtual gift has a corresponding virtual amount. When the audience terminal receives a virtual resource transfer instruction, it is triggered to transfer the virtual resources to the live broadcast account.
Further, after the anchor terminal receives the AR object interaction instruction, the virtual resource transfer amount data contained in it is obtained, and an animation corresponding to the virtual resource is displayed in the live broadcast picture; for example, if the virtual gift presented by the audience terminal is a rocket, an animation of the rocket launching is displayed in the live broadcast picture.
In step 1103B, a target interaction action is determined based on the virtual resource transfer amount data, wherein different virtual resource transfer amounts correspond to different interaction actions.
And after the anchor terminal receives the virtual resource transfer amount data, determining the grade reached by the virtual resource transfer amount, and determining the target interaction action according to the grade reached. I.e. different virtual resource transfer amounts correspond to different interaction actions.
In one possible implementation, the level of the virtual resource transfer amount received by the anchor terminal is positively correlated with the interaction action performed by the AR object; that is, the higher the level of the virtual resource transfer amount, the richer the target interaction action triggered for the AR object, or the more interaction actions the AR object is triggered to perform.
Schematically, the levels of the virtual resource transfer amount and the corresponding target interaction actions are shown in Table 1.
Table 1
(Table 1 is reproduced as an image in the original publication; it maps levels of the virtual resource transfer amount to target interaction actions.)
As shown in Table 1, the anchor terminal determines the corresponding target interaction action according to the level of the virtual resource transfer amount received from the audience terminal; for example, if the virtual resource transfer amount received from the audience terminal is 30 virtual coins, the target interaction action is determined to be "the puppy jumps onto the bed".
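Step 1103B reduces to a threshold lookup of the kind sketched below; the thresholds and action names are illustrative assumptions, kept consistent with the 30-coin example above.

```python
# Map the virtual resource transfer amount to a target interaction action.
LEVELS = [            # (minimum amount in virtual coins, target interaction action)
    (100, "dance"),
    (30, "jump_onto_bed"),
    (10, "wag_tail"),
    (1, "stand_up"),
]

def target_action_for_amount(amount):
    for threshold, action in LEVELS:
        if amount >= threshold:
            return action
    return None          # below the minimum level: no interaction triggered

assert target_action_for_amount(30) == "jump_onto_bed"
```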
Optionally, the target interaction action may also be chosen by the audience member. After the audience terminal has transferred the virtual resources, an interaction option list is displayed by tapping an interaction control in the user interface, and the AR object is triggered to make the corresponding target interaction action according to the selected list entry. As shown in fig. 13, an interaction control 1310 is arranged at the edge of the user interface of the audience terminal 1300; after the viewer taps the interaction control 1310, an interaction option list 1320 is displayed at the edge of the user interface, and the interaction action options that the viewer can trigger are shown in the interaction option list 1320. The number of interaction action options may be determined by the virtual resource transfer amount; that is, the more virtual resources the audience account transfers to the anchor account, the more interaction action options are displayed in the interaction option list 1320.
In another possible implementation, after the virtual resource is transferred from the viewer account to the live account, the prompting information is displayed on the user interface of the viewer terminal, where the prompting information is used to prompt the user on the viewer terminal side to interact with the AR object by clicking on the screen, and at this time, step 1103 may further include the following steps.
In step 1103C, the interactive gesture data contained in the AR object interaction instruction is obtained, where the interactive gesture data is used to characterize the interactive gesture operation on the AR object.
When the audience terminal transfers virtual resources to the anchor terminal, the opportunity to interact with the AR object is obtained. Optionally, an interaction prompt can be played in the live broadcast picture displayed by the audience terminal, prompting the viewer to interact by tapping the AR object in the live broadcast picture. The viewer who transferred the virtual resources controls the AR object to perform interaction actions by tapping or sliding on the screen; for example, the user at the audience terminal side taps the AR object to stroke it, or drags the AR object and moves it, so as to control the AR object to run through the live broadcast picture along the drag track.
In one possible embodiment, when the AR object is a virtual puppy, the AR object is used to indicate an interactive action of stroking the puppy or walking the puppy by clicking or sliding an interactive gesture operation of the virtual puppy in the live view.
Further, after the anchor terminal receives the AR object interaction instruction, interaction gesture data contained in the AR object interaction instruction is obtained, and the gesture interaction data is used for representing interaction gesture operation of controlling the AR object by the audience.
In step 1103D, a target interaction action is determined based on the interaction gesture operation represented by the interaction gesture data, wherein different interaction gesture operations correspond to different interaction actions.
The anchor terminal determines interactive gesture operation of the AR object controlled by the audience based on gesture interaction data, and determines target interaction actions to be executed by the AR object, wherein the specific content of the gesture interaction operation is determined by sliding operation of the audience and content displayed on a live broadcast picture, for example, the audience clicks the AR object in the live broadcast picture to indicate to touch the AR object, and the corresponding target interaction actions are sitting or lying actions.
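Steps 1103C and 1103D can be sketched as a small mapping from the gesture operation to a target interaction action; the gesture encoding used here is an assumption, since the patent only describes taps and drags at the interface level.

```python
# Translate the viewer's interactive gesture operation into a target
# interaction action for the AR object.
def target_action_for_gesture(gesture):
    if gesture.get("type") == "tap" and gesture.get("on_ar_object"):
        return {"action": "sit"}                      # tapping the pet = stroking it
    if gesture.get("type") == "drag":
        return {"action": "run_along_path",
                "path": gesture.get("path", [])}      # dragging = walking the pet along the track
    return None                                       # unrecognised gesture

print(target_action_for_gesture({"type": "tap", "on_ar_object": True}))
```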
In another possible implementation, to better achieve interaction between the audience terminal and the AR object, the audience terminal may capture the user facial expression and/or limb motion on the audience terminal side through a camera to control the AR object to simulate so as to increase the live participation of the audience terminal. Thus, step 1103 may further comprise the following steps.
In step 1103E, based on receiving the AR object interaction instruction, interaction behavior data included in the AR object interaction instruction is obtained, where the interaction behavior data is used to characterize user behavior at the terminal side of the audience, and the user behavior is acquired by the audience terminal through a camera.
After the audience terminal transfers virtual resources to the anchor terminal, the audience terminal automatically starts a camera to collect user behaviors at the side of the audience terminal, and the audience controls AR objects to imitate interaction behaviors by making various actions, wherein the interaction behaviors can be limb behaviors, expression behaviors and the like. The audience terminal performs portrait identification on the pictures acquired by the cameras, identifies facial expressions and/or limb actions of the audience contained in the pictures and sends corresponding interactive behavior data to the anchor terminal.
Further, the anchor terminal obtains interaction behavior data contained in the AR object interaction instruction based on the received AR object interaction instruction.
In step 1103F, based on the interaction behavior data, an action of the AR object imitating the user behavior is determined as a target interaction action.
The anchor terminal determines the action of the AR object imitating the user action as the target interaction action based on the interaction action data, and if the interaction action of the audience is the head shaking and blinking action, the target interaction action imitated by the AR object is the head shaking and blinking action.
In one possible implementation manner, when the target interaction action is to imitate the facial expression of the user at the terminal side of the audience, the face image in the live broadcast picture is acquired and identified by using a face recognition algorithm, key data such as the width of the face, the position and coordinates of the face and the like are determined based on the gray value of the image, when the facial expression of the user changes, the changed data are sent to the anchor terminal, and the anchor terminal determines the change amplitude of the facial expression of the user according to the received interaction action data, so as to control the AR object to imitate the corresponding target interaction action.
When the received interaction data is limb actions of the user at the audience terminal side, key nodes of a human body can be identified based on a human body gesture identification algorithm, the limb actions of the user are determined according to the information such as the movement direction and the acceleration of the key nodes, the limb actions are further sent to the anchor terminal as the interaction data, the anchor terminal determines specific limb actions made by the user based on the received interaction data, and then the AR object is controlled to simulate corresponding target interaction actions.
Taking the AR object to simulate the user's limb movements as an example, as shown in fig. 14, when the viewer terminal 1410 turns on the camera, the user makes head-up and hand-up movements within the capture range of the camera, and the viewer terminal 1410 sends the captured interaction data to the anchor terminal, so as to control the AR object 1411 to simulate the corresponding target interaction movements.
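A sketch of this mimicry path (steps 1103E-1103F) is given below: the audience terminal sends detected keypoints as interaction behaviour data and the anchor terminal retargets them onto the AR object's joints. The keypoint names and the one-to-one joint mapping are assumptions for illustration.

```python
# Retarget the viewer's detected keypoints onto the AR object's skeleton so
# that the AR object imitates the user's behaviour.
JOINT_MAP = {                 # viewer keypoint -> AR skeleton joint
    "nose": "head",
    "left_wrist": "front_left_paw",
    "right_wrist": "front_right_paw",
}

def retarget(behaviour_data, ar_skeleton):
    """behaviour_data: {keypoint: (x, y)}; ar_skeleton: {joint: (x, y)}."""
    for keypoint, position in behaviour_data.items():
        joint = JOINT_MAP.get(keypoint)
        if joint is not None:
            ar_skeleton[joint] = position      # drive the AR joint to mimic the user
    return ar_skeleton

skeleton = {"head": (0, 0), "front_left_paw": (0, 0), "front_right_paw": (0, 0)}
print(retarget({"nose": (120, 80), "left_wrist": (60, 200)}, skeleton))
```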
In step 1104, the AR object is controlled to execute the target interaction corresponding to the AR object interaction command.
When the anchor terminal determines that the AR object needs to execute the target interaction action, the AR object is controlled to execute the corresponding target interaction action according to the depth information of the 3D object indicated by the target interaction action and the point cloud movement amount of the AR object, and specific content may refer to step 604, which is not described in detail herein.
In the embodiment of the application, when the audience terminal needs to perform live interaction with the AR object, the opportunity of interaction with the AR object is obtained by transferring virtual resources to the anchor terminal, an AR object interaction instruction is sent to the anchor terminal, and then the anchor terminal determines a target interaction action based on interaction data contained in the received AR object interaction instruction.
The anchor terminal can determine a target interaction action based on the grade of the virtual resource transfer amount data, and further control the AR object to execute the target interaction action; or, based on the interactive gesture data of the audience terminal side, the AR object is controlled to execute the target interactive action, so that the audience terminal can autonomously select the target interactive action of the AR object in a sliding screen mode; in addition, facial expressions and/or limb actions of a user at the terminal side of the audience can be used as target interaction actions, so that the AR object simulates the interaction effect of the user at the terminal side of the audience.
In one possible application scenario, when the anchor needs to actively control the interaction between the AR object and the viewer, the target interaction action can be performed by controlling the AR object through voice input or key press.
When the AR object interaction instruction is triggered by voice input, the anchor terminal recognises the voice instruction input by the anchor with a speech recognition algorithm and determines the target interaction action from the recognised semantics; for example, if the keyword "puppy dancing" is recognised in the collected voice data, it is determined that the AR object is required to perform a dancing action, and the AR puppy is controlled to dance at the corresponding position. When the AR object interaction instruction is triggered by an interaction option operation, the anchor displays an interaction option list by tapping a control in the user interface and selects in that list the target interaction action to be triggered, so that the AR object is controlled to perform the target interaction action.
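The voice-triggered path can be sketched as keyword matching over the recognised transcript; the keyword table below is an illustrative assumption built around the patent's "puppy dancing" example.

```python
# Match keywords in the recognised speech and return the target interaction
# action to perform.
KEYWORD_ACTIONS = {
    "dance": "dance",
    "jump": "jump",
    "sit": "sit",
}

def action_from_transcript(transcript):
    text = transcript.lower()
    for keyword, action in KEYWORD_ACTIONS.items():
        if keyword in text:
            return action
    return None            # no known keyword: nothing is triggered

assert action_from_transcript("Let the puppy dance!") == "dance"
```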
In one possible implementation, in order to better present the interaction action of the AR object in the live broadcast picture, after an AR object interaction instruction from the audience terminal or the anchor terminal is received, the interactive object is determined according to the instruction and the corresponding interaction action is executed at that interactive object, so that the action is displayed more realistically on the live broadcast picture and richer target interaction actions can be executed.
Fig. 15 is a flowchart of a live interaction method according to another exemplary embodiment of the present application, where the embodiment is described by taking the method used in the anchor terminal shown in fig. 2 as an example. The method comprises the following steps:
in step 1501, in response to the AR object setting instruction, an AR object is displayed in a live broadcast picture, where the live broadcast picture is a picture acquired by the live broadcast terminal through the camera.
For the implementation of this step, reference may be made to step 301; details are not repeated in this embodiment.
In step 1502, an AR object interaction instruction is received, where the AR object interaction instruction is triggered by a hosting terminal or a viewer terminal.
For the implementation of this step, reference may be made to step 302; details are not repeated in this embodiment.
In step 1503, a target interaction is determined based on the interaction data contained in the AR object interaction instruction.
For the implementation of this step, reference may be made to step 1103; details are not repeated in this embodiment.
In step 1504, in response to the AR object interaction instruction containing an interactive object, object recognition is performed on the live broadcast picture to obtain an object recognition result.
In order to better present the interaction action of the AR object in the live broadcast picture, the anchor terminal or the audience terminal may further specify that the AR object perform the target interaction action at a specific position, that is, the AR object is controlled to perform the target interaction action at an interactive object. The interactive object may be, for example, a bed, a table, a chair or the anchor in the live broadcast environment, and the AR object is controlled to perform the interaction action on that object; for example, the target interaction action selected through voice input or by clicking an interaction option may control the AR object to jump onto a table or onto the anchor. After the anchor terminal determines the target interaction action, it extracts the interactive object contained in the target interaction action and identifies that object in the live broadcast picture by performing image recognition on the picture acquired by the camera.
In step 1505, in response to the object recognition result indicating that the live broadcast picture contains the interactive object, the AR object is controlled to move to the display position of the interactive object in the live broadcast picture.
When the anchor terminal recognizes that the live broadcast picture contains the corresponding interactive object, it is determined that the AR object can execute the target interactive action, and then the AR object is controlled to move to the display position of the interactive object in the live broadcast picture.
As shown in fig. 16, a live broadcast picture is displayed on the audience terminal 1610 with the AR object 1611 lying on the floor. The viewer taps the AR object 1611 and slides a finger to the position where the bed is displayed in the live broadcast picture; this gesture interaction operation means that the AR object 1611 should run from the floor onto the bed. When the anchor terminal 1620 acquires the interaction gesture data, it determines the interactive object, identifies and confirms the specific position of the bed in the picture through an image recognition algorithm, and, once the position is determined, controls the AR object 1611 to run onto the bed.
In step 1506, the AR object is controlled to execute the target interaction action corresponding to the AR object interaction instruction at the interaction object.
After the AR object moves to the display position of the interactive object in the live broadcast picture, the AR object is controlled to execute the target interaction action corresponding to the AR object interaction instruction at the interactive object.
As shown in fig. 16, based on the received AR object interaction instruction, the anchor terminal 1620 controls the AR object 1611 to move to the interactive object and perform the corresponding interaction action.
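A minimal sketch of steps 1504 to 1506 follows: the interactive object named by the target action is extracted, located in the live frame, and the AR object is moved to its display position before the action is played; detect_objects() stands in for the image recognition pass, and the action naming convention is an assumption of the sketch.

# Minimal sketch: perform a target interaction action at the interactive object.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str
    center: tuple             # (x, y) display position in the live frame

def detect_objects(frame) -> list:
    """Placeholder for the image-recognition pass over the live broadcast picture."""
    return [Detection("bed", (480, 620)), Detection("table", (150, 400))]

def extract_interactive_object(target_action: str) -> Optional[str]:
    # e.g. "jump_onto_bed" -> "bed"; a real instruction would carry this explicitly.
    return target_action.rsplit("_", 1)[-1] if "_onto_" in target_action else None

def perform_at_object(frame, target_action: str, move_ar_object, play_action) -> bool:
    wanted = extract_interactive_object(target_action)
    if wanted is None:
        play_action(target_action)            # no interactive object; play in place
        return True
    for det in detect_objects(frame):
        if det.label == wanted:               # recognition result contains the object
            move_ar_object(det.center)        # move to its display position first
            play_action(target_action)
            return True
    return False                              # object not in the picture; action skipped

performed = perform_at_object(
    frame=None,
    target_action="jump_onto_bed",
    move_ar_object=lambda pos: print("move AR object to", pos),
    play_action=lambda act: print("play", act),
)
print(performed)   # -> True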
In the embodiment of the application, the interactive object contained in the live broadcast picture is identified, and the AR object is controlled to move to the position of the interactive object and execute the corresponding interaction action, so that interaction actions are displayed more realistically on the live broadcast picture while the interaction actions of the AR object are enriched.
In a possible implementation, in order to further enrich the content and manner of live interaction, the anchor may also receive an AR object customized by a target terminal during the live broadcast. In this case at least two AR objects may be displayed in the live broadcast picture of the anchor terminal, and the customized AR object performs a corresponding interaction action only when the target terminal sends an AR object interaction instruction.
Fig. 17 is a flowchart illustrating a live interaction method according to an exemplary embodiment of the present application.
In step 1701, in response to receiving an AR object customized by the target terminal, the customized AR object is displayed in the live broadcast picture.
When the target terminal wants to customize an AR object and give it to the anchor terminal, the target terminal enters a customization interface through a customization link in the live broadcast application and inputs information such as the attribute features, picture information, display duration and customization fee of the AR object to be customized; after the customization is completed, the customized AR object can be given to the anchor terminal. When the anchor terminal receives the AR object customized by the target terminal during the live broadcast, a prompt is displayed on the live broadcast picture asking the anchor to place the customized AR object at a specific position in the picture. In addition, the anchor may also customize a corresponding AR object for itself through the customization interface.
Illustratively, as shown in fig. 18, the target terminal 1800 inputs information such as the attribute features, picture information, display duration and customization fee of the AR object to be customized through the customization interface; when the target terminal wants to give the customized AR object to the anchor terminal, it does so by entering the anchor's account information or live broadcast room information.
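For illustration, the structure below gathers the kind of information such a customization interface might collect; the field names and the gift_to() helper are assumptions of the sketch, not the application's actual data model.

# Minimal sketch: a descriptor for a customized AR object given to the anchor.
from dataclasses import dataclass

@dataclass
class CustomARObject:
    owner_account: str            # the target (gifting) audience account
    attributes: dict              # e.g. {"species": "puppy", "color": "brown"}
    image_assets: list            # picture information uploaded by the viewer
    display_duration_min: int     # how long it stays in the live broadcast room
    fee: int                      # customization cost in virtual currency
    anchor_account: str = ""      # filled in when the object is given to the anchor

def gift_to(obj: CustomARObject, anchor_account: str) -> CustomARObject:
    """Bind the customized AR object to the anchor's live broadcast room."""
    obj.anchor_account = anchor_account
    return obj

custom = CustomARObject(
    owner_account="viewer_42",
    attributes={"species": "puppy", "color": "brown"},
    image_assets=["puppy_skin.png"],
    display_duration_min=60,
    fee=500,
)
gift_to(custom, anchor_account="anchor_room_7")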
In step 1702, in response to receiving an AR object interaction instruction, the AR object or customized AR object to be controlled is determined, and the interaction data contained in the AR object interaction instruction is acquired to determine the target interaction action.
When the anchor terminal receives an AR object interaction instruction, it acquires the terminal account contained in the instruction and determines, based on that account, which terminal sent the instruction. If the instruction was sent by the target terminal, the corresponding customized AR object needs to be controlled; if it was not sent by the target terminal, the AR object set by the anchor terminal needs to be controlled. The interaction data contained in the AR object interaction instruction is then acquired and the target interaction action is determined.
In step 1703, based on the determined target interaction action, the AR object or the customized AR object is controlled to execute the target interaction action.
When it is determined that the AR object interaction instruction was sent by the target terminal, the corresponding customized AR object is controlled to execute the target interaction action based on the determined interaction data; when it was sent by another terminal, the AR object set by the anchor terminal is controlled to execute the target interaction action based on the determined interaction data.
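A minimal sketch of this routing logic in steps 1702 and 1703 follows; the instruction layout and the registry of customized AR objects are illustrative assumptions.

# Minimal sketch: route an interaction instruction to the customized AR object
# of its sender, or to the AR object set by the anchor terminal otherwise.
def resolve_ar_object(instruction: dict, custom_objects: dict, default_object: str) -> str:
    """Return the id of the AR object that should perform the target interaction action."""
    sender = instruction["account"]
    return custom_objects.get(sender, default_object)

custom_objects = {"viewer_42": "custom_puppy_viewer_42"}

print(resolve_ar_object({"account": "viewer_42", "action": "dance"},
                        custom_objects, default_object="anchor_puppy"))   # customized object
print(resolve_ar_object({"account": "viewer_99", "action": "dance"},
                        custom_objects, default_object="anchor_puppy"))   # anchor's AR object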
In the embodiment of the application, the anchor terminal receives the customized AR object presented by the target terminal and places it in the live broadcast room; after receiving an AR object interaction instruction, it determines from the instruction whether the AR object or the customized AR object needs to execute the target interaction action, so that the target terminal and the other audience terminals control the corresponding customized AR object and the AR object set by the anchor terminal respectively, enriching the live interaction modes.
In addition, in another possible implementation, the customized AR object may be configured to be displayed in the live broadcast picture only when the target account corresponding to the target terminal enters the live broadcast room, and not to be displayed when the target account has not entered the live broadcast room.
Referring to fig. 19, a block diagram of a live interaction device according to an embodiment of the present application is shown. The device comprises:
A display module 1901, configured to respond to an augmented reality AR object setting instruction, and display an AR object in a live broadcast picture, where the live broadcast picture is a picture acquired by a live broadcast terminal through a camera;
the interactive instruction receiving module 1902 is configured to receive an AR object interactive instruction, where the AR object interactive instruction is triggered by the live broadcast terminal or the audience terminal;
the interaction module 1903 is configured to control the AR object to execute an interaction action corresponding to the AR object interaction instruction.
Optionally, the interaction module 1903 includes:
the identification unit is used for identifying the 3D object in the live broadcast environment;
and the execution unit is used for controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction in the live broadcast environment based on the depth information of each 3D object.
Optionally, the execution unit is configured to:
determining a point cloud movement amount of an interaction action time point cloud corresponding to the AR object interaction instruction executed by the AR object, wherein the point cloud is used for controlling the AR object to move;
and controlling the AR object to execute interaction actions corresponding to the AR object interaction instructions in the live broadcast environment based on the depth information and the point cloud movement amount of each 3D object.
Optionally, the AR object interaction instruction is triggered by the audience terminal; the interaction module 1903 further includes:
the first determining unit is used for determining a target interaction action based on interaction data contained in the AR object interaction instruction, wherein the interaction data are data obtained when the audience terminal receives a virtual resource transfer instruction, and the virtual resource transfer instruction is used for triggering an audience account to transfer virtual resources to a live account;
the first interaction unit is used for controlling the AR object to execute the target interaction action.
Optionally, the first determining unit is configured to:
obtaining virtual resource transfer amount data contained in the AR object interaction instruction;
and determining the target interaction action based on the virtual resource transfer amount data, wherein different virtual resource transfer amounts correspond to different interaction actions.
Optionally, the first determining unit is configured to:
acquiring interactive gesture data contained in the AR object interactive instruction, wherein the interactive gesture data is used for representing interactive gesture operation on the AR object;
and determining the target interaction action based on the interaction gesture operation characterized by the interaction gesture data, wherein different interaction gesture operations correspond to different interaction actions.
Optionally, the first determining unit is configured to:
acquiring interaction behavior data contained in the AR object interaction instruction, wherein the interaction behavior data is used for representing user behaviors at the audience terminal side, and the user behaviors are acquired by the audience terminal through a camera;
based on the interaction behavior data, determining an action of the AR object mimicking the user behavior as the target interaction action.
Optionally, the AR object interaction instruction is triggered by the anchor terminal; the interaction module 1903 further includes:
the second determining unit is used for responding to the AR object interaction instruction and triggering by voice, and determining a target interaction action through semantic recognition; or, responding to the AR object interaction instruction and triggering by interaction option selection operation, and determining a target interaction action indicated by the selected interaction option;
and the second interaction unit is used for controlling the AR object to execute the target interaction action.
Optionally, the apparatus further includes:
the identification module is used for performing, in response to the AR object interaction instruction containing an interactive object, object recognition on the live broadcast picture to obtain an object recognition result;
the moving module is used for controlling, in response to the object recognition result indicating that the live broadcast picture contains the interactive object, the AR object to move to the display position of the interactive object in the live broadcast picture;
the interaction module 1903 is further configured to:
and controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction at the interaction object.
Optionally, the display module 1901 is further configured to:
at least two AR objects are displayed in the live broadcast picture, and the at least two AR objects comprise customized AR objects corresponding to the target audience account, wherein the customized AR objects are customized by the target audience account;
the interaction module 1903 is further configured to:
and responding to the AR object interaction instruction triggered by the target audience account, and controlling the customized AR object to execute the interaction action corresponding to the AR object interaction instruction.
Optionally, the display module 1901 is further configured to:
and in response to the target audience account being located in a live broadcast room, displaying the customized AR object in the live broadcast picture.
In summary, in the embodiment of the present application, an AR object is displayed in the picture acquired by the live broadcast terminal in response to an AR object setting instruction, and after an AR object interaction instruction sent by the live broadcast terminal or the audience terminal is received, the AR object is controlled to execute the corresponding interaction action based on the instruction. With the scheme provided by the embodiment of the application, not only can the anchor terminal control the AR object to interact with the audience, but the audience terminal can also control the AR object to interact, which enriches the interaction modes during live broadcast and improves the interaction participation of the audience terminal in the live broadcast process.
Fig. 20 is a block diagram illustrating the structure of a terminal according to an exemplary embodiment of the present application. The terminal 2000 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 2002 may include one or more computer-readable storage media, which may be non-transitory. Memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the live interaction methods provided by the method embodiments herein.
In some embodiments, the terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002, and peripheral interface 2003 may be connected by a bus or signal line. The respective peripheral devices may be connected to the peripheral device interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2004, a display 2005, a camera assembly 2006, audio circuitry 2007, a positioning assembly 2008, and a power supply 2009.
Peripheral interface 2003 may be used to connect I/O (Input/Output) related at least one peripheral device to processor 2001 and memory 2002. In some embodiments, processor 2001, memory 2002, and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 2001, memory 2002, and peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 2004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2005 is a touch display, the display 2005 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 2005 disposed on the front panel of the terminal 2000; in other embodiments, there may be at least two displays 2005, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the display 2005 may be a flexible display disposed on a curved surface or a folded surface of the terminal 2000. The display 2005 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display 2005 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 2006 is used to capture images or video. Optionally, the camera assembly 2006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 2006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
Audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing, or inputting the electric signals to the radio frequency circuit 2004 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 2000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2007 may also include a headphone jack.
The positioning component 2008 is used to locate the current geographic location of the terminal 2000 to enable navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 2009 is used to power the various components in terminal 2000. The power source 2009 may be alternating current, direct current, disposable or rechargeable. When the power source 2009 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2000 can further include one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyroscope sensor 2012, pressure sensor 2013, fingerprint sensor 2014, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 2000. For example, the acceleration sensor 2011 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 2001 may control the display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 2012 may detect a body direction and a rotation angle of the terminal 2000, and the gyro sensor 2012 may cooperate with the acceleration sensor 2011 to collect a 3D motion of the user to the terminal 2000. The processor 2001 may implement the following functions based on the data collected by the gyro sensor 2012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at photographing, interface control, and inertial navigation.
Pressure sensor 2013 may be disposed on a side frame of terminal 2000 and/or below display 2005. When the pressure sensor 2013 is disposed at a side frame of the terminal 2000, a grip signal of the user to the terminal 2000 may be detected, and the processor 2001 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at the lower layer of the display 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the display 2005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2014 is used for collecting the fingerprint of the user, and the processor 2001 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, and the like. The fingerprint sensor 2014 may be provided at the front, rear, or side of the terminal 2000. When a physical key or a vendor Logo is provided on the terminal 2000, the fingerprint sensor 2014 may be integrated with the physical key or the vendor Logo.
The optical sensor 2015 is used to collect the ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the display 2005 based on the ambient light intensity collected by the optical sensor 2015: when the ambient light intensity is high, the display brightness of the display 2005 is increased; when the ambient light intensity is low, the display brightness of the display 2005 is decreased. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 based on the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 2000. The proximity sensor 2016 is used to collect the distance between the user and the front of the terminal 2000. In one embodiment, when the proximity sensor 2016 detects a gradual decrease in the distance between the user and the front face of the terminal 2000, the processor 2001 controls the display 2005 to switch from the bright screen state to the off screen state; when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 becomes gradually larger, the processor 2001 controls the display 2005 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 20 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The application provides a computer readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to realize the live interaction method provided by each method embodiment.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal performs the live interaction method of any of the above embodiments.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is merely illustrative of the present invention and is not intended to limit it; any modification made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (12)

1. A live interaction method, the method comprising:
responding to an Augmented Reality (AR) object setting instruction, and displaying an AR object in a live broadcast picture, wherein the live broadcast picture is acquired by a live broadcast terminal through a camera;
receiving an AR object interaction instruction, wherein the AR object interaction instruction is triggered by the live broadcast terminal or the audience terminal;
controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction, including:
identifying a 3D object in a live broadcast environment, wherein the 3D object is an object in the live broadcast environment acquired by a camera in the live broadcast process;
based on the depth information of each 3D object, controlling the AR object to execute interaction actions corresponding to the AR object interaction instructions in the live broadcast environment;
at least two AR objects are displayed in the live broadcast picture, and the at least two AR objects comprise customized AR objects corresponding to the target audience accounts, wherein the customized AR objects are customized by the target audience accounts;
the controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction includes:
responding to the AR object interaction instruction triggered by the target audience account, and controlling the customized AR object to execute the interaction action corresponding to the AR object interaction instruction;
and responding to the AR object interaction instruction not triggered by the target audience account, and controlling the AR object set by the anchor terminal to execute the interaction action corresponding to the AR object interaction instruction.
2. The method according to claim 1, wherein the controlling the AR object to execute the interaction corresponding to the AR object interaction instruction in the live environment based on the depth information of each 3D object includes:
determining a point cloud movement amount of an interaction action time point cloud corresponding to the AR object interaction instruction executed by the AR object, wherein the point cloud is used for controlling the AR object to move;
and controlling the AR object to execute interaction actions corresponding to the AR object interaction instructions in the live broadcast environment based on the depth information and the point cloud movement amount of each 3D object.
3. The method of claim 1, wherein the AR object interaction instruction is triggered by the viewer terminal;
the controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction includes:
determining a target interaction action based on interaction data contained in the AR object interaction instruction, wherein the interaction data is data obtained when the audience terminal receives a virtual resource transfer instruction, and the virtual resource transfer instruction is used for triggering an audience account to transfer virtual resources to a live account;
and controlling the AR object to execute the target interaction action.
4. The method of claim 3, wherein the determining a target interaction action based on the interaction data contained in the AR object interaction instruction comprises:
obtaining virtual resource transfer amount data contained in the AR object interaction instruction;
and determining the target interaction action based on the virtual resource transfer amount data, wherein different virtual resource transfer amounts correspond to different interaction actions.
5. The method of claim 3, wherein the determining a target interaction action based on the interaction data contained in the AR object interaction instruction comprises:
acquiring interactive gesture data contained in the AR object interactive instruction, wherein the interactive gesture data is used for representing interactive gesture operation on the AR object;
and determining the target interaction action based on the interaction gesture operation characterized by the interaction gesture data, wherein different interaction gesture operations correspond to different interaction actions.
6. The method of claim 3, wherein the determining a target interaction action based on the interaction data contained in the AR object interaction instruction comprises:
acquiring interaction behavior data contained in the AR object interaction instruction, wherein the interaction behavior data is used for representing user behaviors at the audience terminal side, and the user behaviors are acquired by the audience terminal through a camera;
based on the interaction behavior data, determining an action of the AR object mimicking the user behavior as the target interaction action.
7. The method of claim 1, wherein the AR object interaction instruction is triggered by the anchor terminal;
the controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction further includes:
responding to the AR object interaction instruction to be triggered by voice, and determining a target interaction action through semantic recognition; or, responding to the AR object interaction instruction and triggering by interaction option selection operation, and determining a target interaction action indicated by the selected interaction option;
and controlling the AR object to execute the target interaction action.
8. The method according to any one of claims 1 to 7, wherein before the controlling the AR object to perform the interaction corresponding to the AR object interaction instruction, the method further comprises:
responding to the AR object interaction instruction containing an interaction object, and carrying out object recognition on the live broadcast picture to obtain an object recognition result;
responding to the object recognition result indicating that the live broadcast picture contains the interactive object, and controlling the AR object to move to the display position of the interactive object in the live broadcast picture;
the controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction includes:
and controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction at the interaction object.
9. The method according to claim 1, wherein the method further comprises:
and in response to the target audience account being located in a live broadcast room, displaying the customized AR object in the live broadcast picture.
10. A live interaction device, the device comprising:
the display module is used for responding to the AR object setting instruction and displaying the AR object in a live broadcast picture, wherein the live broadcast picture is acquired by the live broadcast terminal through the camera;
the interactive instruction receiving module is used for receiving an AR object interactive instruction, wherein the AR object interactive instruction is triggered by the live broadcast terminal or the audience terminal;
the interaction module is used for controlling the AR object to execute the interaction action corresponding to the AR object interaction instruction;
the interaction module is used for identifying 3D objects in a live broadcast environment, wherein the 3D objects are objects in the live broadcast environment acquired by a camera in the live broadcast process; based on the depth information of each 3D object, controlling the AR object to execute interaction actions corresponding to the AR object interaction instructions in the live broadcast environment;
at least two AR objects are displayed in the live broadcast picture, and the at least two AR objects comprise customized AR objects corresponding to the target audience accounts, wherein the customized AR objects are customized by the target audience accounts;
the interaction module is further configured to:
responding to the AR object interaction instruction triggered by the target audience account, and controlling the customized AR object to execute the interaction action corresponding to the AR object interaction instruction;
and responding to the AR object interaction instruction not triggered by the target audience account, and controlling the AR object set by the anchor terminal to execute the interaction action corresponding to the AR object interaction instruction.
11. A terminal comprising a processor and a memory, wherein the memory stores at least one program, and wherein the at least one program is loaded and executed by the processor to implement the live interaction method of any of claims 1-9.
12. A computer readable storage medium, wherein at least one program is stored in the readable storage medium, and the at least one program is loaded and executed by a processor to implement the live interaction method of any of claims 1 to 9.
CN202110507538.1A 2021-05-10 2021-05-10 Live interaction method, device, terminal and storage medium Active CN113194329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110507538.1A CN113194329B (en) 2021-05-10 2021-05-10 Live interaction method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110507538.1A CN113194329B (en) 2021-05-10 2021-05-10 Live interaction method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113194329A CN113194329A (en) 2021-07-30
CN113194329B true CN113194329B (en) 2023-04-25

Family

ID=76980931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110507538.1A Active CN113194329B (en) 2021-05-10 2021-05-10 Live interaction method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113194329B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116260985A (en) * 2021-12-10 2023-06-13 腾讯科技(深圳)有限公司 Live interaction method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107750014A (en) * 2017-09-25 2018-03-02 迈吉客科技(北京)有限公司 One kind connects wheat live broadcasting method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US9704298B2 (en) * 2015-06-23 2017-07-11 Paofit Holdings Pte Ltd. Systems and methods for generating 360 degree mixed reality environments
US10147237B2 (en) * 2016-09-21 2018-12-04 Verizon Patent And Licensing Inc. Foreground identification for virtual objects in an augmented reality environment
US20190108558A1 (en) * 2017-07-28 2019-04-11 Magical Technologies, Llc Systems, Methods and Apparatuses Of Multidimensional Mapping Of Universal Locations Or Location Ranges For Alternate Or Augmented Digital Experiences
US20190371071A1 (en) * 2018-06-01 2019-12-05 Merge Labs, Inc. Precise placement of and animation creation for virtual objects in an environment using a trackable three-dimensional object
CN110019918B (en) * 2018-08-30 2021-03-30 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium of virtual pet
US20200360816A1 (en) * 2019-05-16 2020-11-19 Microsoft Technology Licensing, Llc Capturing Subject Representation Within an Augmented Reality Environment
CN110519611B (en) * 2019-08-23 2021-06-11 腾讯科技(深圳)有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN110850983B (en) * 2019-11-13 2020-11-24 腾讯科技(深圳)有限公司 Virtual object control method and device in video live broadcast and storage medium
CN112148189A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107750014A (en) * 2017-09-25 2018-03-02 迈吉客科技(北京)有限公司 One kind connects wheat live broadcasting method and system

Also Published As

Publication number Publication date
CN113194329A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN110147231B (en) Combined special effect generation method and device and storage medium
CN111701238A (en) Virtual picture volume display method, device, equipment and storage medium
CN111726536A (en) Video generation method and device, storage medium and computer equipment
KR20210113333A (en) Methods, devices, devices and storage media for controlling multiple virtual characters
CN108664231B (en) Display method, device, equipment and storage medium of 2.5-dimensional virtual environment
CN112156464B (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN111050189B (en) Live broadcast method, device, equipment and storage medium
CN109646944B (en) Control information processing method, control information processing device, electronic equipment and storage medium
CN112533017B (en) Live broadcast method, device, terminal and storage medium
CN113230655B (en) Virtual object control method, device, equipment, system and readable storage medium
CN111787407B (en) Interactive video playing method and device, computer equipment and storage medium
CN113244616B (en) Interaction method, device and equipment based on virtual scene and readable storage medium
CN111028566A (en) Live broadcast teaching method, device, terminal and storage medium
CN111026318A (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN111541928A (en) Live broadcast display method, device, equipment and storage medium
CN110662105A (en) Animation file generation method and device and storage medium
CN112581571A (en) Control method and device of virtual image model, electronic equipment and storage medium
CN113274729A (en) Interactive observation method, device, equipment and medium based on virtual scene
CN110833695B (en) Service processing method, device, equipment and storage medium based on virtual scene
CN113457173A (en) Remote teaching method, device, computer equipment and storage medium
CN112367533B (en) Interactive service processing method, device, equipment and computer readable storage medium
CN113194329B (en) Live interaction method, device, terminal and storage medium
CN112306332A (en) Method, device and equipment for determining selected target and storage medium
CN114415907B (en) Media resource display method, device, equipment and storage medium
CN112188268B (en) Virtual scene display method, virtual scene introduction video generation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant