CN107592575B - Live broadcast method, device and system and electronic equipment - Google Patents


Info

Publication number
CN107592575B
Authority
CN
China
Prior art keywords: client, controlled object, video picture, limb, sending
Prior art date
Legal status: Active
Application number
CN201710807197.3A
Other languages
Chinese (zh)
Other versions
CN107592575A (en)
Inventor
鄢蔓
张庭亮
王天旸
陈成
王啸
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201710807197.3A
Publication of CN107592575A (application)
Application granted
Publication of CN107592575B (grant)
Legal status: Active

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a live broadcast method, apparatus, system, and electronic device, wherein the method comprises the following steps: rendering, at a first client, a controlled object in an AR scene onto image frames captured by a camera to form a first video picture, and sending the first video picture to a second client; calculating and updating the position of the controlled object in the first video picture based on a control signal sent by the second client and the limb action; and sending the first video picture with the updated controlled-object position to the second client and the audience clients. In this application, an AR scene is added on top of the image frames captured by the anchor client's camera to form the video picture, the anchor can influence the position of the controlled object in the AR scene, and the video picture of the online game is sent to audience clients, so the audience can directly watch the anchor playing the AR game, which adds a new interactive mode to live broadcasting.

Description

Live broadcast method, device and system and electronic equipment
Technical Field
The present application relates to the field of video games, and in particular, to a live broadcast method, apparatus, system, and electronic device.
Background
Current live broadcast content mainly includes anchors performing talents, showing scenes of outdoor play, showing video pictures of games being played, and the like. As the live broadcast concept has become popular, more and more people have become anchors, but a compelling live broadcast requires the anchor to plan a great deal of content and to stir up the audience's enthusiasm from time to time. However, owing to the nature of live broadcasting, the anchor communicates with the audience through a screen, the available interactive modes are limited, and the existing interactive modes in live broadcasting increasingly fail to meet users' demands for live interaction.
Disclosure of Invention
In view of this, the present application provides a live broadcast method, apparatus, system and electronic device, which aim to increase the interactive manner of live broadcast.
Specifically, the method is realized through the following technical scheme:
a live broadcast method comprising the steps of:
performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions;
rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client; the first client and the second client are connected through a microphone connection;
calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
In one embodiment, the step of establishing the microphone connection between the first client and the second client includes one of the following steps:
sending a game interaction request to a second client through a first client, establishing a microphone connection between the first client and the second client when receiving a response sent by the second client, and simultaneously starting the interactive game at the first client and the second client;
and, during a live microphone connection between the first client and the second client, sending a game interaction request to the second client through the first client, and simultaneously starting the interactive game at the first client and the second client when receiving a response sent by the second client.
In one embodiment, the method further comprises:
counting scores of a first client user and a second client user when the game is finished, and adding a special effect corresponding to the scores in the video picture;
updating the score ranking list according to the scores of the users;
and recommending the game interaction object according to the score ranking list.
In one embodiment, the step of calculating the position of the controlled object in the AR scene based on the control signal sent by the second client and the limb action includes:
calculating the position of the controlled object based on a control signal sent by a second client, and calculating whether the controlled object falls into the mouth according to the position and the opening degree of the mouth;
after the step of calculating the position of the controlled object based on the control signal sent by the second client and calculating whether the controlled object falls into the mouth according to the position and the opening degree of the mouth, the method further comprises any one of the following steps:
adjusting the state of a game progress bar according to whether the controlled object falls into the mouth or not;
when the controlled object does not fall into the mouth, controlling the controlled object to exit according to the position of the target object;
when the controlled object falls into the mouth and/or hits a target object, adding a special effect corresponding to the attribute in the video picture according to the recorded attribute of the controlled object.
In one embodiment, the method further comprises:
when the number of the faces in the image frame is more than one, determining a target object according to a preset rule;
wherein the preset rule comprises at least one of:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object;
taking the face detected earliest as a target object;
determining a target object according to an externally input instruction;
and taking the face matched with the user identity information as a target object.
The application also discloses a live broadcast method, which comprises the following steps:
performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions;
rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a microphone connection;
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the first video picture by combining the limb action and the position of the associated object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
In one embodiment, the step of establishing the microphone connection between the first client and the second client includes one of the following steps:
sending a game interaction request to a second client through a first client, establishing a microphone connection between the first client and the second client when receiving a response sent by the second client, and simultaneously starting the interactive game at the first client and the second client;
and, during a live microphone connection between the first client and the second client, sending a game interaction request to the second client through the first client, and simultaneously starting the interactive game at the first client and the second client when receiving a response sent by the second client.
In one embodiment, the method further comprises:
counting scores of users of the first client and the second client when the game is finished, and adding a special effect corresponding to the scores in the video picture;
updating the score ranking list according to the scores of the users;
and recommending the game interaction object according to the score ranking list.
In one embodiment, the step of rendering the controlled object in the AR scene based on the mouth position comprises:
identifying the opening degree of a mouth, and rendering the controlled object in the AR scene based on the position of the mouth when the opening degree of the mouth is greater than a starting threshold value;
the step of calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object comprises the following steps:
identifying the face orientation and the mouth closing speed;
setting the moving direction of the controlled object based on the face orientation, setting the moving speed of the controlled object based on the closing speed of the mouth, and calculating the position of the controlled object based on the moving direction and speed;
the step of setting a direction of movement of the controlled object based on the face orientation and setting a speed of movement of the controlled object based on the closing speed of the mouth, and calculating the position of the controlled object based on the direction and speed of movement includes:
setting the initial speed of the movement of the controlled object based on the face orientation and the closing speed of the mouth, and calculating the position of the controlled object by combining the starting point of the movement of the controlled object and the gravity acceleration;
after the step of rendering the controlled object in the AR scene based on the mouth position, and calculating and updating the position of the controlled object in the AR scene by combining the limb movement and the position of the associated object, the method further includes at least one of the following steps:
judging whether the controlled object falls into the associated object or not according to the position relation between the controlled object and the associated object;
adjusting the state of the game progress bar according to whether the controlled object falls into the associated object;
when the controlled object does not fall into the associated object, acquiring the position relation between the controlled object and the associated object, and controlling the controlled object to exit and/or add a special effect according to the position relation;
and when the controlled object falls into the associated object, acquiring a hit attribute according to the position relation between the controlled object and the associated object, and controlling the controlled object to exit and/or hit the associated object according to the hit attribute.
In one embodiment, the method further comprises:
when the number of the faces in the image frame is more than one, determining a target object according to a preset rule;
wherein the preset rule comprises at least one of:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object;
taking the face detected earliest as a target object;
determining a target object according to an externally input instruction;
and taking the face matched with the user identity information as a target object.
The application also discloses a live broadcast method which is used for a live broadcast system, wherein the live broadcast system comprises a first client, a server and a second client; the method comprises the following steps:
a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a controlled object in an AR scene in the image frame to form a first video picture;
the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the controlled object to the first client through the server; the first client and the second client are connected through a microphone connection;
the first client calculates and updates the position of the controlled object in the first video picture based on the control signal and the limb action, and sends the updated first video picture to the server;
and the server sends the updated first video picture to the second client and the audience client.
The application also discloses a live broadcast method which is used for a live broadcast system, wherein the live broadcast system comprises a first client, a server and a second client; the method comprises the following steps:
the method comprises the steps that a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a related object in an AR scene in the image frame to form a first video picture;
the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the associated object to the first client through the server; the first client and the second client are connected through a microphone connection;
the first client calculates the position of the controlled object in the AR scene based on the body movement, calculates the position of the associated object based on the control signal, updates the positions of the controlled object and the associated object in the AR scene in the first video picture, and sends the updated first video picture to the server;
and the server sends the updated first video picture to the second client and the audience client.
The application also discloses a live device, include:
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions;
the rendering module is used for rendering the controlled object in the AR scene in the image frame to form a first video picture and sending the first video picture to the second client; the first client and the second client are connected through a microphone connection; and
calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses a live device, include:
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions;
the rendering module is used for rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a microphone connection; and
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses an electronic device, including:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction with a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions;
rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client;
calculating and updating the position of the controlled object based on the control signal sent by the second client and the limb action;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses an electronic device, including:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction with a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions;
rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client;
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses a live broadcast system, include:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering controlled objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the controlled object and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating and updating the position of the controlled object in the first video picture based on the control signal and the limb action, and sending the updated first video picture to the server;
and the server is also used for sending the updated first video picture to the second client and the audience client.
The application also discloses a live broadcast system, include:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering associated objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the associated objects and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating the position of the controlled object in the AR scene based on the body movement, calculating the position of the associated object based on the control signal, updating the positions of the controlled object and the associated object in the AR scene in the first video picture, and sending the updated first video picture to the server;
and the server is also used for sending the updated first video picture to the second client and the audience client.
In this application, limb feature recognition is performed on a target object in image frames captured by a first client through a camera, and limb actions are recognized; a controlled object in an AR scene is rendered in the image frames to form a first video picture, which is sent to a second client, the first client and the second client being connected through a microphone connection; the position of the controlled object in the first video picture is calculated and updated based on the control signal sent by the second client and the limb action; and the first video picture with the updated controlled-object position is sent to the second client and the audience clients. An AR scene is thus added on top of the image frames captured by the anchor client's camera to form a video picture, and the anchor can influence the position of the controlled object in the AR scene, for example by changing its motion trajectory, so the user interacts with the virtual world far more and the sense of immersion is strong. The video picture of the microphone-connection confrontation can be sent to audience clients, so the audience can directly watch the anchor playing the AR game, which adds a new live interactive mode.
Drawings
FIG. 1 is a flow chart illustrating a live method according to an exemplary embodiment of the present application;
fig. 2 is a schematic diagram of a live system shown in an exemplary embodiment of the present application;
FIGS. 3a, 3b, and 3c are schematic diagrams of microphone-connection interaction shown in an exemplary embodiment of the present application;
FIG. 4 is a flow chart illustrating a live method according to an exemplary embodiment of the present application;
FIG. 5a is a schematic illustration of a food game shown in an exemplary embodiment of the present application;
FIG. 5b is a schematic view of food that has been eaten, shown in an exemplary embodiment of the present application;
FIGS. 5c and 5d are schematic views of food that has not been eaten, shown in an exemplary embodiment of the present application;
FIG. 6 is a flow chart illustrating a live method according to an exemplary embodiment of the present application;
FIGS. 7a, 7b, and 7c are schematic views of a basketball shooting game shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a dart game displayed at an audience client, shown in an exemplary embodiment of the present application;
FIG. 9 is a flow chart illustrating a live method according to an exemplary embodiment of the present application;
FIG. 10 is a flow chart illustrating a live method according to an exemplary embodiment of the present application;
fig. 11 is a logical block diagram of a live device according to an exemplary embodiment of the present application;
fig. 12 is a logic block diagram of an electronic device shown in an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Current live broadcast content mainly includes anchors performing talents, showing scenes of outdoor play, showing video pictures of games being played, and the like. As the live broadcast concept has become popular, more and more people have become anchors, but a compelling live broadcast requires the anchor to plan a great deal of content and to stir up the audience's enthusiasm from time to time. Anchors with no distinctive features who are not good at interacting with the audience therefore attract fewer fans, and both anchors and audiences drift away. To improve the ecosystem of live broadcast platforms, the platforms generally enrich the forms of broadcast content and improve the live broadcast effect by adding new content (such as special effects, games, and the like) to attract users.
With the development of science and technology, the concept of Virtual Reality has surged in popularity, and people can interact with a virtual world by wearing VR (Virtual Reality) glasses and holding a gamepad. Virtual reality technology is a computer simulation system that can create and let users experience a virtual world: it uses a computer to generate a simulated environment, is a system simulation of multi-source information fusion with interactive three-dimensional dynamic views and entity behaviors, and immerses the user in that environment.
Because VR games depend on equipment such as VR glasses and gamepads, they are difficult to popularize. AR (Augmented Reality) technology, which blends the real world with the virtual world, requires no extra equipment and has therefore spread rapidly; for example, the Pokémon GO game that swept the world lets a user photograph a real scene and press and flick a Poké Ball on the screen to capture a sprite.
However, current AR games are basically operated with a finger. In terms of game experience they differ little from traditional games (for example, Fruit Ninja or Angry Birds): only the game background is replaced by a picture of the user's current environment, the user interacts little with the virtual world, and the sense of immersion is weak. Based on this, the present application proposes a scheme that combines an AR game with live broadcasting, as shown in fig. 1:
step S110: performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions;
step S120: rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client; the first client and the second client are connected through a microphone connection;
step S130: calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action;
step S140: and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The limb action refers to the coordinated movement of body parts such as the head, eyes, neck, hands, elbows, arms, torso, hips, and feet.
This embodiment is applied to a microphone-connection scenario in live broadcasting. As shown in fig. 2, the first client 100 and the second client 200 establish a network-based connection through the server 400 in microphone-connection mode; at least one of the first client 100 and the second client 200 is an anchor client.
The AR game function may be added to live broadcast software. An AR game needs an AR scene, that is, the software must add functions for establishing, driving, and rendering an AR model; the function may be added to the original live broadcast software as a plug-in, or shipped in a new version of the software, which is not limited in this application.
After the anchor starts broadcasting, he or she may wish to play the AR game interactively with other users. For example, as shown in fig. 3b, the anchor clicks the friend icon below to send a game interaction request to another user; when the second client 200 accepts the invitation, the server 400 establishes a connection between the first client 100 and the second client 200 and issues an instruction for the first client 100 and the second client 200 to enter the same interactive game. Of course, sometimes the first client 100 and the second client 200 are already in the microphone-connected state; in that case, as shown in fig. 3a, one of the connected parties only needs to send a game interaction request, and after the other party accepts the invitation, the server 400 instructs the first client 100 and the second client 200 to enter the same interactive game.
During a microphone connection, the identities of anchor and audience become initiator and participant: when the initiator sends a microphone-connection request to the participant and the participant accepts, a connection is established between the two clients, and the live broadcast picture is provided by both. Generally, the picture can be displayed picture-in-picture, with the initiator's live picture in a large window and the participant's picture in a small window; of course, the display mode can be adjusted freely by the initiator or the participant. In some examples, the microphone connection may also involve more than two participants.
Taking a food game as an example, the confrontation referred to in this application means that one mic-connected party throws food and the other party catches and eats it.
Because the AR game can be played only after the corresponding functional module has been added, a corresponding prompt message can be sent if the plug-in is not installed or the version does not support the AR game. For example, as shown in fig. 4, after the server 400 sends a game interaction request to the second client 200, the following steps are performed at the second client 200:
step S410: detecting whether the second client 200 supports the interactive game;
step S420: if yes, generating a game interaction request at the second client 200;
step S430: if not, acquiring the reason why the second client 200 does not support the game, and generating solution guidance information; prompt information can also be sent to the first client 100 through the server 400.
the solution guidance information includes: downloading plug-ins, upgrading applications, replacing hardware devices, etc. The prompt message may be "the opposite side hardware equipment does not support, the friend is changed to challenge the bar", and even some hot anchor supporting the interactive game can be recommended to the user for counterwork, etc.
Of course, the first client 100 can send a game interaction request to the second client only if it supports the AR game itself; if the first client 100 does not support the AR game, the solution guidance information may be presented first when the user clicks to send the game interaction request.
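As a rough illustration of the support check in fig. 4, a sketch follows; the version constant and the field names are assumptions for illustration, not details from the disclosure:

```python
# Hypothetical sketch of the support check in Fig. 4 (names are illustrative).
MIN_APP_VERSION = (3, 2, 0)   # assumed minimum version that ships the AR module

def check_ar_game_support(app_version, has_ar_plugin, device_has_camera):
    """Return (supported, guidance) for an incoming game interaction request."""
    if not device_has_camera:
        return False, "replace hardware device"   # hardware cannot be fixed in software
    if app_version < MIN_APP_VERSION:
        return False, "upgrade application"
    if not has_ar_plugin:
        return False, "download plug-in"
    return True, None
```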
Taking the food game shown in fig. 5a as an example, the physical model of the first client 100 defines the images of the foods (controlled objects 231, 232, 233), and one party can throw the food. For example, the user of the second client 200 taps the selected food with a hand 999 and slides the hand 999 upward to make a throwing motion; the direction and speed of the throw can be controlled by the direction, force, and so on of the finger slide. The second client 200 parses the motion into a control signal and sends it to the first client 100; the driving model of the first client 100 calculates the position of the controlled object from these parameters and renders the controlled object at that position. If the game rule is that the other party should eat as much food as possible (opening the mouth to eat), then the position of the controlled object is also affected by the limb action: for example, as food is about to be eaten, the parameters obtained by the driving model differ, the motion route of the food changes, and the position of the controlled object in the first video picture is updated accordingly.
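On the second client's side, parsing the finger slide into a throw control signal could look roughly like the following sketch; the field names and the speed scale are assumptions:

```python
import math

def parse_throw_gesture(food_id, touch_down, touch_up, speed_scale=0.01):
    """Convert a swipe (two touch samples) into a throw control signal.

    touch_down / touch_up: (x, y, timestamp) screen samples.
    Returns the direction (radians) and speed to send to the first client.
    """
    dx = touch_up[0] - touch_down[0]
    dy = touch_up[1] - touch_down[1]
    dt = max(touch_up[2] - touch_down[2], 1e-3)       # avoid division by zero
    direction = math.atan2(dy, dx)                    # swipe direction on screen
    speed = math.hypot(dx, dy) / dt * speed_scale     # faster swipe -> faster throw
    return {"food_id": food_id, "direction": direction, "speed": speed}
```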
The first client 100 sends the updated first video picture to the second client 200 and the audience client 300 through the server 400. Naturally, the user of the second client 200 may feel a certain delay in the food throwing, because the first client 100 returns the video picture of the thrown food only after processing.
In this application, an AR scene is added on top of the image frames captured by the anchor client's camera to form a video picture, and both mic-connected parties can influence the position of the controlled object in the AR scene, for example by changing its motion trajectory, so the user interacts with the virtual world far more and the sense of immersion is strong. The video picture of the microphone-connection PK can be sent to audience clients, so the audience can directly watch the anchor playing the AR game. This enriches the anchor's live content, and the game mobilizes interactive topics between the audience and the anchor, improving the live broadcast effect and attracting users.
After a PK game, the results can be displayed when the game ends. Therefore, when the game is completed, the first client 100 transmits, for example, the player's score and the number of foods thrown by the second client 200 to the server 400; the server 400 counts the scores of the first client user and the second client user and adds a special effect corresponding to the scores to the video picture. For example, the score effect for the first client 100 player may be 'You ate 80 of the 100 foods your opponent threw, beating 99.99% of eaters nationwide', the score effect for the second client 200 player may be '80 of the 100 foods you threw were eaten, beating 99.99% of throwers nationwide', and so on.
Because there are many players across the live platform, there can be score leaderboards covering all players, and each player's entry can be updated synchronously whenever a game finishes. The leaderboard may have multiple categories, such as win rate, single-game score, number of gifts received, number of viewers, and the like. PK opponents can then be recommended to players from the leaderboard: for example, anchors with similar scores can be recommended to each other, or popular anchors can be recommended to anchors with fewer friends or fewer viewers, so that small and medium anchors interact with more people, which increases the exposure of anchors, especially small and medium ones, and raises their popularity.
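Score-based opponent recommendation can be as simple as a nearest-score lookup over the leaderboard; a toy sketch under that assumed matching rule:

```python
def recommend_opponents(anchor, leaderboard, k=3):
    """Recommend PK opponents whose scores are closest to the anchor's.

    anchor: dict with 'id' and 'score'.
    leaderboard: list of (anchor_id, score) tuples, kept up to date after each game.
    """
    candidates = [(aid, s) for aid, s in leaderboard if aid != anchor["id"]]
    candidates.sort(key=lambda entry: abs(entry[1] - anchor["score"]))
    return candidates[:k]   # the k closest-scored anchors
```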
A PK record can be generated from the counted scores. As shown in fig. 3c, a player can see his or her own PK record, and of course can also see other people's. For example, when receiving a game interaction request from another anchor, the player can decide whether to accept it based on the other side's PK record: some players do not want to PK against opponents with poor skills, and may reject the request when they see the other side has lost too often. By making game results public, the anchor can conveniently select high-quality opponents for PK and so make the live broadcast more exciting.
Next, a game in which the user follows the controlled object in the AR scene is described, taking the food game as an example. As shown in fig. 5a, the system can throw different foods (pepper 231, cake 232, egg 233) for the player to eat, and can calculate whether a food falls into the mouth from the position of the controlled object and the position and opening degree of the mouth. For example, as shown in fig. 5b, the pepper 231 falls into the player's mouth, i.e., the player is considered to have eaten the pepper 231; as shown in fig. 5c, the cake 232 does not fall into the player's mouth, i.e., the player is considered not to have eaten the cake 232; as shown in fig. 5d, the egg 233 does not fall into the player's mouth, i.e., the player is considered not to have eaten the egg 233.
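A minimal sketch of this 'falls into the mouth' judgment, with the opening threshold and catch radius as assumed values:

```python
import math

def food_is_eaten(food_pos, mouth_pos, mouth_openness,
                  open_threshold=0.3, catch_radius=30.0):
    """Decide whether the food falls into the mouth in the current frame.

    food_pos / mouth_pos: (x, y) in picture coordinates.
    mouth_openness: normalized 0..1 opening degree from the landmark detector.
    """
    if mouth_openness < open_threshold:      # mouth not open enough to eat
        return False
    dist = math.dist(food_pos, mouth_pos)
    return dist <= catch_radius              # food center inside the mouth region
```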
A game usually has a progress bar recording time, score, and resources (e.g., props). As shown in fig. 5a, the progress bar 109 records the remaining time (e.g., 10 s remaining), the highest score, the score of the current game, and so on. The state of the progress bar 109 is continuously adjusted as the game progresses; for example, the corresponding score increases when the pepper 231 is eaten.
To improve the realism of the AR scene, the game simulates the effect of a person throwing food in the real world: different throwing angles and/or forces yield different motion trajectories, and the player can be set at a certain distance from the throwing position so that the food flies toward the player along a parabola. Food eaten by the player can exit by disappearing; uneaten food that never touches the player, as in fig. 5c, may fall behind the player and disappear along the system's default motion course; and uneaten food may also hit the player, as in fig. 5d, in which case its exit route may change, e.g., it may bounce or fall.
In the real world, different foods have different tastes; for example, pepper is spicy, and after eating it people feel their face burning. Corresponding attributes can be set for different foods, and different special effects can be added after the player eats, or is hit by, a food.
Food attributes come in many types, such as taste attributes, physical attributes, and calorie attributes; taste attributes in turn include sour, sweet, bitter, spicy, salty, and so on; physical attributes may include solid, liquid, and gaseous. Accordingly, an expression representing a taste attribute, a mark where the controlled object made contact, an adjustment of the target object's body shape, and the like can be rendered. For example, as shown in fig. 5b, if the player eats the pepper 231, a special effect indicating spiciness may be added; as shown in fig. 5d, if the player is hit by the egg 233, a special effect of the egg 233 breaking and the egg liquid flowing can be added; or if the player is hit by solid food such as an apple, the face swells, and so on. Of course, the fatness of the target object 110 may also be adjusted according to the calories of the food eaten.
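One way to organize such attribute-driven effects is a lookup keyed by the food's recorded attributes; a sketch in which the concrete effect names are illustrative assumptions, not from the patent:

```python
# Illustrative attribute tables; the effect names are assumptions.
TASTE_EFFECTS = {
    "spicy": "red_face_and_steam",   # e.g. eating the pepper 231
    "sour":  "puckered_face",
    "sweet": "hearts",
}
HIT_EFFECTS = {
    "liquid": "splatter_and_drip",   # e.g. hit by the egg 233
    "solid":  "swollen_face",        # e.g. hit by an apple
}

def effect_for(food, was_eaten):
    """Pick a special effect from the food's recorded attributes."""
    if was_eaten:
        return TASTE_EFFECTS.get(food["taste"])
    return HIT_EFFECTS.get(food["physical_state"])
```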
When the player eats food, special effects such as scores and combo counts can be displayed, or effects such as a virtual cheering squad can be added. Various special effects can be added to the game as needed; they may be flash effects, map (sticker) effects, or effects in other forms, and the duration of an effect can be set for the specific game scene, which is not limited in this application.
By the mode, the playability and the sense of reality of the game can be enhanced, and the fun of the anchor and the audience in the game interaction is improved.
During the game, the position of the controlled object is adjusted according to the position and opening degree of the mouth of the target object 110. Generally one person plays the game; however, several people may be on camera during a live broadcast, i.e., there may be multiple faces in the image frames captured by the broadcasting client's camera. For example, as shown in fig. 3b, faces 110 and 120 are both in the frame, and the rule for determining which is the target object may include one of the following:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object; usually, the face of the player is located at the center of the picture and is closer to the camera, so the area of the face is larger;
taking the face detected earliest as a target object; usually, the person shot by the camera is the player, or other people go into the game during the playing process of the player, so the face detected at the earliest time is taken as the target object;
taking the face matched with the user identity information as the target object; for example, a player, and in particular an anchor, may need to verify an identity card and face information for real-name authentication when registering an account, so the registered user's face can be matched among multiple faces as the target object according to the photo used at registration;
The above rules let the system match the target object automatically; they may be used alone or in combination. Of course, the user may also directly specify the target object: for example, when multiple faces are detected, a selection box pops up on each face, and the face whose box is tapped is taken as the target object, i.e., the target object is determined according to an externally input instruction.
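Combining the automatic rules might look like the following sketch; the face-record fields are assumptions for illustration:

```python
def pick_target_face(faces, frame_center, registered_face_id=None):
    """Choose the target object when several faces are detected.

    faces: list of dicts with 'bbox' (x, y, w, h), 'first_seen' timestamp,
    and optional 'matched_id' from identity verification.
    Rules are tried in order; identity match wins outright.
    """
    if registered_face_id is not None:
        matched = [f for f in faces if f.get("matched_id") == registered_face_id]
        if matched:
            return matched[0]                      # identity match wins

    # Largest area: the player is usually closest to the camera.
    by_area = max(faces, key=lambda f: f["bbox"][2] * f["bbox"][3])

    # Most centered face in the frame.
    def center_dist(f):
        x, y, w, h = f["bbox"]
        return (x + w / 2 - frame_center[0]) ** 2 + (y + h / 2 - frame_center[1]) ** 2
    by_center = min(faces, key=center_dist)

    if by_area is by_center:
        return by_area                             # the two rules agree
    return min(faces, key=lambda f: f["first_seen"])   # fall back to earliest face
```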
A video picture is ultimately a sequence of image frames. When the AR scene is rendered, the position of each AR object (including the controlled object and the associated object) can be calculated for every frame. Since the position of the controlled object is also influenced by the limb action, when the position calculation for one frame finishes, the position of the controlled object in the next frame is usually calculated from parameters such as the current limb action; that is, the position of the controlled object is calculated based on the limb action in the previous frame of the video picture.
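This frame-by-frame dependency can be sketched as a loop; a minimal sketch in which camera, recognizer, ar_scene, and encoder are hypothetical placeholders, not APIs from the disclosure:

```python
def render_loop(camera, recognizer, ar_scene, encoder):
    """Per-frame pipeline sketch: positions for the next frame are driven
    by the limb action recognized in the previous frame."""
    prev_action = None
    while True:
        frame = camera.capture()
        if prev_action is not None:
            ar_scene.update_positions(prev_action)   # uses previous frame's action
        composited = ar_scene.render_onto(frame)     # image frame + AR objects
        encoder.send(composited)                     # first video picture out
        prev_action = recognizer.recognize_limb_action(frame)
```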
In the food game, both mic-connected parties adjust the position of the controlled object. There is, however, another class of games: in the basketball shooting game shown in fig. 7a, one person may control the basketball and the other the rim; in the dart game shown in fig. 8, one person may control the darts and the other the dart board. Next, the live broadcast method of this application is described taking the basketball shooting game as an example.
As shown in fig. 6, the live broadcasting method includes:
step S610: performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions;
step S620: rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a microphone connection;
step S630: rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the first video picture by combining the limb action and the position of the associated object;
step S640: and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
This embodiment is applied to a microphone-connection scenario in live broadcasting. Establishing the microphone connection, detecting whether the second client 200 supports the interactive game, counting scores, determining the target object, and the like are similar to the previous embodiment and are not repeated here.
During the game, the basketball controlled by the player must first be generated, and its movement is then controlled by changes in the player's mouth shape; the basketball therefore needs to be generated under a trigger condition and can be launched when the player's mouth closes after opening. As shown in fig. 3c and 7a, when the player's mouth 101 is detected to open beyond the start threshold, the basketball 211 (controlled object) may be rendered based on the position of the mouth; of course, the basketball 211 need not be rendered exactly at the mouth, and this can be set per game, which is not limited in this application.
As shown in fig. 7a, one party may control the movement of the rim 311. For example, the user of the second client 200 taps the selected rim 311 with a hand 999 and slides the hand 999 to the left (or right) to drag the rim 311; the second client 200 parses the motion into a control signal and sends it to the first client 100, whose driving model calculates the position of the associated object from the parameters and renders the associated object at that position.
The rule of the game is typically to shoot as many basketballs 211 as possible into the rim 311. In the real world a person adjusts the angle, force, and so on when shooting. To increase the realism of the AR scene, the limb feature recognition in this embodiment can recognize gesture actions, the position and opening degree of the eyes, and also the face orientation and the mouth closing speed. For example, 68 2D feature points are recognized on the face of the target object 110, and by putting these 2D feature points in correspondence with the 3D feature points of a standard face, the 3D pose of the target object 110 (including face position and orientation) can be solved; the opening and closing speed of the mouth can be calculated from the movement distance and elapsed time of the lip-area feature points. The moving direction of the basketball 211 is set from the face orientation, the moving speed from the mouth closing speed, and the position of the basketball 211 is calculated from the moving direction and speed. Because a shot involves adjusting several factors such as direction and strength at once, the hit rate may be low, and different difficulty levels may be set to raise the hit rate and keep players engaged. Since the first client 100 returns the video picture with the dragged rim 311 only after processing, the user of the second client 200 may perceive some delay when dragging the rim 311; but if the rim 311 moved continuously, the shooter's hit rate would be very low. The rim 311 may therefore be limited to moving only once between shots, and so on. It can also be set that, no matter how hard the player closes the mouth, the shot hits as long as the face is oriented toward the rim.
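This pose-solving step matches a standard perspective-n-point formulation; the following is a rough sketch using OpenCV's solvePnP, where the pinhole intrinsics and the lip bookkeeping are assumptions rather than details from the patent:

```python
import numpy as np
import cv2

def estimate_head_pose(landmarks_2d, model_points_3d, frame_size):
    """Solve the 3D face pose (position + orientation) from 2D landmarks.

    landmarks_2d: (N, 2) float array of detected points; model_points_3d:
    (N, 3) float array of the corresponding standard-face points (the patent
    uses 68 such points).
    """
    w, h = frame_size
    focal = w                                   # rough pinhole approximation
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(model_points_3d, landmarks_2d,
                                  camera_matrix, None)   # no lens distortion
    return rvec, tvec                           # orientation and position

def mouth_closing_speed(lip_prev, lip_now, dt):
    """Approximate closing speed from lip landmark movement over dt seconds."""
    gap_prev = np.linalg.norm(lip_prev["top"] - lip_prev["bottom"])
    gap_now = np.linalg.norm(lip_now["top"] - lip_now["bottom"])
    return max(gap_prev - gap_now, 0.0) / dt    # positive while the mouth closes
```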
As shown in fig. 7a, after the player's mouth closes, the ball starts to fly outward from its initial position; and because a thrown object in the real world moves along a parabola under gravity, when calculating the position of the basketball 211, its initial velocity (a vector) may be set based on the face orientation and the mouth closing speed, and its position computed from the starting point of the motion together with gravitational acceleration. Of course, a distance between the target object 110 and the screen may also be set, so that it can be determined whether the basketball 211 hits the screen during flight; for example, when the basketball 211 hits the screen, a screen-shattering special effect as shown in fig. 7a may be added to heighten the realism of the AR scene.
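Under those assumptions the flight itself is ordinary projectile kinematics; a minimal sketch in which the gravity value and speed scaling are assumed scene units:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])   # scene units per second^2 (assumed scale)

def launch_velocity(face_direction, closing_speed, speed_scale=1.0):
    """Initial velocity vector: direction from the face, magnitude from the mouth."""
    d = np.asarray(face_direction, dtype=float)
    d = d / np.linalg.norm(d)                    # unit direction of the face
    return d * closing_speed * speed_scale

def ball_position(start, v0, t):
    """Position of the basketball t seconds after launch (parabolic motion)."""
    return np.asarray(start, dtype=float) + v0 * t + 0.5 * GRAVITY * t * t
```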
As shown in fig. 7b, the basketball 211 is a controlled object, the rim 311 is an associated object of the controlled object, and the basketball 211 and the rim 311 may be rendered in the same layer or different layers, which is not limited in this application. The positions of the basketball 211 and rim 311 are obtained to determine whether the basketball 211 is dropped into the rim 311. For example, as shown in FIG. 7c, the basketball 211 falls into the rim 311; as shown in fig. 7a, the basketball 211 does not fall into the rim 311.
The game usually has a progress bar recording time, score, and resources (such as props). As shown in fig. 7c, the progress bar 109 records the remaining time (e.g., 10 s remaining), the top score, the score of the current game, and so on, and its state is continuously adjusted as the game progresses; for example, when the basketball 211 falls into the rim 311, the corresponding score increases.
To improve the realism and fun of the AR scene, as shown in fig. 7c, a special effect may be added when the basketball 211 goes into the rim 311; for example, the basketball 211 may catch fire when it enters the rim 311 faster than a threshold, or when it goes in without touching the rim (a hollow shot). In one embodiment, a reduced envelope of the rim 311 is positioned at the center of the rim 311, and if the center point of the basketball 211 falls within the reduced envelope, the shot is considered a hit. Of course, the size of the rim 311 may change during the game, and the envelope used to judge whether the basketball 211 hits can be modified accordingly. Scoring rules can also be set, such as 2 points for a hollow shot into the rim 311 and 1 point for other hits.
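The reduced-envelope test reads as a point-in-circle check around the rim center; a sketch with an assumed shrink factor and the 2-point/1-point rule from the text:

```python
import math

def score_shot(ball_center, rim_center, rim_radius,
               shrink=0.6, touched_rim=False):
    """Return the points scored for this shot.

    A hit requires the ball's center inside the reduced envelope of the rim;
    a hollow shot (no rim contact) scores more, per the rule in the text.
    """
    dist = math.dist(ball_center, rim_center)
    if dist > rim_radius * shrink:
        return 0                      # miss: center outside the reduced envelope
    return 1 if touched_rim else 2    # hollow shot scores 2, other hits 1
```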
Objects such as a backboard can also be rendered in the AR scene. When the player launches the basketball 211, it may hit or miss. On a miss, for example when the force is too small, the basketball 211 falls and disappears between the target object 110 and the rim 311; it may also rebound off the backboard or rim 311 and then disappear; and when the aim is far off it can strike the screen, where a screen-shattering special effect can be added, all of which heightens the realism of the AR scene.
When the player shoots and scores, special effects such as the score, the combo count, 'Good' displayed on a hit, and 'Perfect' displayed on a hollow shot can be shown, or effects such as a virtual cheering squad can be added. Various special effects can be added to the game as needed; they may be flash effects, map (sticker) effects, or other forms, and their duration can be set for the specific game scene, which is not limited in this application.
By the mode, the playability and the sense of reality of the game can be enhanced, and the fun of the anchor and the audience in the game interaction is improved.
The dart game shown in fig. 8 is similar to the shooting game: when the player's mouth 101 opens beyond the start threshold, a dart 221 (controlled object) is rendered based on the position of the mouth, and after the player's mouth closes the dart is controlled to fly toward the dart board 321. For the detailed process, refer to the basketball case above, which is not repeated here.
A video picture is ultimately a sequence of image frames. When the AR scene is rendered, the position of each AR object (including the controlled object and the associated object) can be calculated for every frame; since the position of the controlled object is also influenced by the limb action, the position of the controlled object in the next frame is usually calculated from parameters such as the current limb action when the position calculation for one frame finishes, i.e., based on the limb action in the previous frame of the video picture. Of course, the image frames captured by the camera may also be processed for beautification; the beautification methods may be the same as in the prior art and are not described in this application.
The application also discloses a live broadcast method which is used for a live broadcast system, wherein the live broadcast system comprises a first client, a server and a second client; as shown in fig. 9, the method comprises the steps of:
step S901: a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a controlled object in an AR scene in the image frame to form a first video picture;
step S902: the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the controlled object to the first client through the server; the first client and the second client are connected through a connecting microphone;
step S903: the first client calculates and updates the position of the controlled object in the first video picture based on the control signal and the limb action, and sends the updated first video picture to the server;
step S904: the server sends the updated first video picture to the second client and the audience client.
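The round trip in steps S901 to S904 can be pictured with the following self-contained sketch; every class here is a hypothetical stand-in for the platform's real camera, renderer, and network transport:

class Server:
    # Relays video pictures downstream and control signals upstream (S902/S904).
    def __init__(self):
        self.pictures_out = []    # pictures for the second client and audience
        self.control_queue = []   # control signals sent by the second client

    def forward_picture(self, picture):
        self.pictures_out.append(picture)

    def poll_control_signal(self):
        return self.control_queue.pop(0) if self.control_queue else None

class FirstClient:
    def __init__(self, server):
        self.server = server
        self.object_pos = (0.0, 0.0)   # controlled object in the AR scene

    def tick(self, frame, limb_strength, direction=(1.0, 0.0)):
        picture = {"frame": frame, "object": self.object_pos}   # S901: render
        self.server.forward_picture(picture)                    # S902: send out
        signal = self.server.poll_control_signal()              # S902: receive
        if signal is not None:                                  # S903: update
            self.object_pos = (self.object_pos[0] + direction[0] * limb_strength * signal,
                               self.object_pos[1] + direction[1] * limb_strength * signal)
        self.server.forward_picture({"frame": frame, "object": self.object_pos})  # S903/S904

server = Server()
server.control_queue.append(0.5)     # a control signal from the second client
FirstClient(server).tick(frame="frame-0", limb_strength=2.0)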
The application also discloses a live broadcast method for a live broadcast system, wherein the live broadcast system comprises a first client, a server and a second client; as shown in fig. 10, the method comprises the following steps, with an illustrative sketch after them:
step S101: the method comprises the steps that a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a related object in an AR scene in the image frame to form a first video picture;
step S102: the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the associated object to the first client through the server; the first client and the second client are connected through a connecting microphone;
step S103: the first client calculates the position of the controlled object in the AR scene based on the limb action, calculates the position of the associated object based on the control signal, updates the positions of the controlled object and the associated object in the AR scene in the first video picture, and sends the updated first video picture to the server;
step S104: the server sends the updated first video picture to the second client and the audience client.
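Step S103 combines two independent inputs, which the following sketch separates explicitly; the update rule and all names are assumptions for illustration only:

def update_positions(controlled_pos, associated_pos, limb_action, control_signal, dt=1.0 / 30.0):
    # Returns the new (controlled, associated) positions for one frame.
    # The controlled object follows the recognized limb action of the anchor,
    # while the associated object follows the second client's control signal.
    (dx, dy), strength = limb_action
    controlled_pos = (controlled_pos[0] + dx * strength * dt,
                      controlled_pos[1] + dy * strength * dt)
    associated_pos = (associated_pos[0] + control_signal[0] * dt,
                      associated_pos[1] + control_signal[1] * dt)
    return controlled_pos, associated_pos

new_controlled, new_associated = update_positions(
    (0.0, 0.0), (5.0, 5.0), ((1.0, 0.0), 2.0), (0.5, -0.5))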
Corresponding to the embodiment of the live broadcast method, the application also provides an embodiment of a live broadcast device.
Embodiments of the live broadcast apparatus can be applied to electronic devices. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus, as a logical apparatus, is formed by the processor of the electronic device where it is located reading the corresponding computer program instructions from the nonvolatile memory into memory and running them. In terms of hardware, fig. 12 shows a hardware structure diagram of an electronic device where a live broadcast apparatus is located; besides the processor, memory, network interface, and nonvolatile memory shown in fig. 12, the electronic device in this embodiment may also include other hardware, such as a camera, according to the actual function of the live broadcast apparatus, which is not described again.
Referring to fig. 11, the present application further discloses a live broadcast apparatus, including the following modules (an illustrative sketch follows the list):
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions;
the rendering module is used for rendering the controlled object in the AR scene in the image frame to form a first video picture and sending the first video picture to the second client; the first client and the second client are connected through a connecting microphone; and
calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
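A minimal sketch of how these three modules might be wired into one apparatus object; the module interfaces, modeled here as plain callables, are assumptions for illustration:

class LiveBroadcastApparatus:
    def __init__(self, recognize, render, send):
        self.recognize = recognize   # identification module
        self.render = render         # rendering module (also updates positions)
        self.send = send             # sending module

    def process(self, image_frame, control_signal):
        limb_action = self.recognize(image_frame)
        picture = self.render(image_frame, control_signal, limb_action)
        self.send(picture)           # to the second client and audience clients

apparatus = LiveBroadcastApparatus(
    recognize=lambda frame: {"direction": (1.0, 0.0), "strength": 2.0},
    render=lambda frame, sig, act: {"frame": frame, "object_pos": (sig, act["strength"])},
    send=print,
)
apparatus.process("frame-0", control_signal=0.5)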
The application also discloses a live broadcast apparatus, including:
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions;
the rendering module is used for rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a connecting microphone; and
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
As shown in fig. 12, the present application also discloses an electronic device including:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and is configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction between a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions;
rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client;
calculating and updating the position of the controlled object based on the control signal sent by the second client and the limb action;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses an electronic device, including:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and is configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction between a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions;
rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client;
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
The application also discloses a live broadcast system, including:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering controlled objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the controlled object and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating and updating the position of the controlled object in the first video picture based on the control signal and the limb action, and sending the updated first video picture to the server;
and the server is also used for sending the updated first video picture to the second client and the audience client.
The application also discloses a live broadcast system, including:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering associated objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the associated objects and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating the position of the controlled object in the AR scene based on the limb action, calculating the position of the associated object based on the control signal, updating the positions of the controlled object and the associated object in the AR scene in the first video picture, and sending the updated first video picture to the server;
and the server is also used for sending the updated first video picture to the second client and the audience client.
For the implementation process of the functions and effects of each unit in the above apparatus, refer to the implementation process of the corresponding steps in the above method, which is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (18)

1. A live broadcast method is characterized by comprising the following steps:
performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions; wherein the limb actions comprise direction and force;
rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client; the first client and the second client are connected through a connecting microphone; the image of the controlled object is set based on a physical model;
calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
2. A live broadcast method as claimed in claim 1, wherein the step of the first client establishing a connection with the second client through a connecting microphone comprises one of:
sending a game interaction request to a second client through a first client, establishing a connecting microphone between the first client and the second client when receiving a response sent by the second client, and starting an interactive game at the first client and the second client simultaneously;
and in the live broadcast and microphone connection process of the first client and the second client, sending a game interaction request to the second client through the first client, and starting an interactive game at the first client and the second client simultaneously when receiving a response sent by the second client.
3. A live method as defined in claim 1, wherein the method further comprises:
counting scores of a first client user and a second client user when the game is finished, and adding a special effect corresponding to the scores in the video picture;
updating the score ranking list according to the scores of the users;
and recommending the game interaction object according to the score ranking list.
4. The live broadcasting method of claim 1, wherein the step of calculating the position of the controlled object in the AR scene based on the control signal sent by the second client and the limb action comprises:
calculating the position of the controlled object based on a control signal sent by a second client, and calculating whether the controlled object falls into the mouth according to the position and the opening degree of the mouth;
after the step of calculating the position of the controlled object based on the control signal sent by the second client and calculating whether the controlled object falls into the mouth according to the position and the opening degree of the mouth, the method further comprises any one of the following steps:
adjusting the state of a game progress bar according to whether the controlled object falls into the mouth or not;
when the controlled object does not fall into the mouth, controlling the controlled object to exit according to the position of the target object;
when the controlled object falls into the mouth and/or hits a target object, adding a special effect corresponding to the attribute in the video picture according to the recorded attribute of the controlled object.
5. A live method as claimed in any one of claims 1 to 4 wherein the method further comprises:
when the number of the faces in the image frame is more than one, determining a target object according to a preset rule;
wherein the preset rule comprises at least one of:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object;
taking the face detected earliest as a target object;
determining a target object according to an externally input instruction;
and taking the face matched with the user identity information as a target object.
6. A live broadcast method is characterized by comprising the following steps:
performing limb feature recognition on a target object in an image frame captured by a first client through a camera to recognize limb actions; wherein the limb actions comprise direction and force;
rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a connecting microphone;
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the first video picture by combining the limb action and the position of the associated object; the image of the controlled object is set based on a physical model; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
7. A live broadcast method as claimed in claim 6, wherein the step of the first client establishing a connection with the second client through a connecting microphone comprises one of:
sending a game interaction request to a second client through a first client, establishing a connecting microphone between the first client and the second client when receiving a response sent by the second client, and starting an interactive game at the first client and the second client simultaneously;
and in the live broadcast and microphone connection process of the first client and the second client, sending a game interaction request to the second client through the first client, and starting an interactive game at the first client and the second client simultaneously when receiving a response sent by the second client.
8. A live method as defined in claim 7, wherein the method further comprises:
counting scores of a first client user and a second client user when the game is finished, and adding a special effect corresponding to the scores in the video picture;
updating the score ranking list according to the scores of the users;
and recommending the game interaction object according to the score ranking list.
9. The live method of claim 6, wherein the step of rendering the controlled object in the AR scene based on the mouth position comprises:
identifying the opening degree of a mouth, and rendering the controlled object in the AR scene based on the position of the mouth when the opening degree of the mouth is greater than a starting threshold value;
the step of calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object comprises the following steps:
identifying the face orientation and the mouth closing speed;
setting the moving direction of the controlled object based on the face orientation, setting the moving speed of the controlled object based on the closing speed of the mouth, and calculating the position of the controlled object based on the moving direction and speed;
the step of setting a direction of movement of the controlled object based on the face orientation and setting a speed of movement of the controlled object based on the closing speed of the mouth, and calculating the position of the controlled object based on the direction and speed of movement includes:
setting the initial speed of the movement of the controlled object based on the face orientation and the closing speed of the mouth, and calculating the position of the controlled object by combining the starting point of the movement of the controlled object and the gravity acceleration;
after the step of rendering the controlled object in the AR scene based on the mouth position, and calculating and updating the position of the controlled object in the AR scene by combining the limb movement and the position of the associated object, the method further includes at least one of the following steps:
judging whether the controlled object falls into the associated object or not according to the position relation between the controlled object and the associated object;
adjusting the state of the game progress bar according to whether the controlled object falls into the associated object;
when the controlled object does not fall into the associated object, acquiring the position relation between the controlled object and the associated object, and controlling the controlled object to exit and/or add a special effect according to the position relation;
and when the controlled object falls into the associated object, acquiring a hit attribute according to the position relation between the controlled object and the associated object, and controlling the controlled object to exit and/or hit the associated object according to the hit attribute.
10. A live method as claimed in any one of claims 6 to 9 wherein the method further comprises:
when the number of the faces in the image frame is more than one, determining a target object according to a preset rule;
wherein the preset rule comprises at least one of:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object;
taking the face detected earliest as a target object;
determining a target object according to an externally input instruction;
and taking the face matched with the user identity information as a target object.
11. A live broadcast method for a live broadcast system, the live broadcast system comprising a first client, a server and a second client, characterized in that the method comprises the following steps:
a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a controlled object in an AR scene in the image frame to form a first video picture; the image of the controlled object is set based on a physical model; wherein the limb actions comprise direction and force;
the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the controlled object to the first client through the server; the first client and the second client are connected through a connecting microphone;
the first client calculates and updates the position of the controlled object in the first video picture based on the control signal and the limb action, and sends the updated first video picture to the server; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the server sends the updated first video picture to the second client and the audience client.
12. A live broadcast method for a live broadcast system, the live broadcast system comprising a first client, a server and a second client, characterized in that the method comprises the following steps:
the method comprises the steps that a first client captures an image frame through a camera, performs limb feature recognition on a target object in the image frame, recognizes limb actions, and renders a related object in an AR scene in the image frame to form a first video picture; wherein the limb actions comprise direction and force;
the first client sends the first video picture to the second client and the audience client through the server, and the second client sends the control signal of the associated object to the first client through the server; the first client and the second client are connected through a connecting microphone;
the first client calculates the position of the controlled object in the AR scene based on the limb action, calculates the position of the associated object based on the control signal, updates the positions of the controlled object and the associated object in the AR scene in the first video picture, and sends the updated first video picture to the server; the image of the controlled object is set based on a physical model; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the server sends the updated first video picture to the second client and the audience client.
13. A live broadcast apparatus, comprising:
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions; wherein the limb actions comprise direction and force;
the rendering module is used for rendering the controlled object in the AR scene in the image frame to form a first video picture and sending the first video picture to the second client; the first client and the second client are connected through a connecting microphone; the image of the controlled object is set based on a physical model; and
calculating and updating the position of the controlled object in the first video picture based on the control signal sent by the second client and the limb action; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
14. A live broadcast apparatus, comprising:
the identification module is used for identifying the limb characteristics of a target object in an image frame captured by the first client through the camera and identifying limb actions; wherein the limb actions comprise direction and force;
the rendering module is used for rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client; the first client and the second client are connected through a connecting microphone; and
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object; the image of the controlled object is set based on a physical model; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the sending module is used for sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
15. An electronic device, comprising:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and is configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction between a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions; wherein the limb actions comprise direction and force;
rendering the controlled object in the AR scene in the image frame to form a first video picture, and sending the first video picture to the second client; the image of the controlled object is set based on a physical model;
calculating and updating the position of the controlled object based on the control signal sent by the second client and the limb action; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
16. An electronic device, comprising:
a processor; and a memory storing processor-executable instructions; wherein the processor is coupled to the memory and is configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing game interaction between a first client and a second client, and performing limb feature recognition on a target object in an image frame captured by the first client through a camera to recognize limb actions; wherein the limb actions comprise direction and force;
rendering the associated object in the AR scene in the image frame to form a first video picture, sending the first video picture to the second client, and adjusting the position of the associated object based on a control signal sent by the second client;
rendering the controlled object in the AR scene based on the position of the mouth, and calculating and updating the position of the controlled object in the AR scene by combining the limb action and the position of the associated object; the image of the controlled object is set based on a physical model; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and sending the first video picture after the position of the controlled object is updated to the second client and the audience client.
17. A live broadcast system, comprising:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering controlled objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server; the image of the controlled object is set based on a physical model; wherein the limb actions comprise direction and force;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the controlled object and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating and updating the position of the controlled object in the first video picture based on the control signal and the limb action, and sending the updated first video picture to the server; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the server is also used for sending the updated first video picture to the second client and the audience client.
18. A live broadcast system, comprising:
the system comprises a first client, a second client and a server;
the server is used for establishing a microphone connection between the first client and the second client;
the first client is used for capturing image frames through a camera, identifying the limb characteristics of target objects in the image frames, identifying limb actions, rendering associated objects in an AR scene in the image frames to form a first video picture, and sending the first video picture to the server; wherein the limb actions comprise direction and force;
the server is also used for sending the first video picture to the second client and the audience client;
the second client is used for collecting control signals of the associated objects and sending the control signals to the server;
the server is also used for sending a control signal to the first client;
the first client is further used for calculating the position of the controlled object in the AR scene based on the limb action, calculating the position of the associated object based on the control signal, updating the positions of the controlled object and the associated object in the AR scene in the first video picture, and sending the updated first video picture to the server; the image of the controlled object is set based on a physical model; the direction and the strength of the limb action are used for controlling the movement direction and the speed of the controlled object;
and the server is also used for sending the updated first video picture to the second client and the audience client.
CN201710807197.3A 2017-09-08 2017-09-08 Live broadcast method, device and system and electronic equipment Active CN107592575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710807197.3A CN107592575B (en) 2017-09-08 2017-09-08 Live broadcast method, device and system and electronic equipment

Publications (2)

Publication Number Publication Date
CN107592575A CN107592575A (en) 2018-01-16
CN107592575B true CN107592575B (en) 2021-01-26

Family

ID=61051919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710807197.3A Active CN107592575B (en) 2017-09-08 2017-09-08 Live broadcast method, device and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN107592575B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255304B (en) * 2018-01-26 2022-10-04 腾讯科技(深圳)有限公司 Video data processing method and device based on augmented reality and storage medium
CN108391174B (en) * 2018-03-22 2021-08-20 卓米私人有限公司 Live broadcast interaction method and device and electronic equipment
CN108712661B (en) * 2018-05-28 2022-02-25 广州虎牙信息科技有限公司 Live video processing method, device, equipment and storage medium
CN108905192A (en) * 2018-06-01 2018-11-30 北京市商汤科技开发有限公司 Information processing method and device, storage medium
CN110166787B (en) * 2018-07-05 2022-11-29 腾讯数码(天津)有限公司 Augmented reality data dissemination method, system and storage medium
CN109040849B (en) * 2018-07-20 2021-08-31 广州虎牙信息科技有限公司 Live broadcast platform interaction method, device, equipment and storage medium
CN109045688B (en) * 2018-07-23 2022-04-26 广州方硅信息技术有限公司 Game interaction method and device, electronic equipment and storage medium
CN109257612B (en) * 2018-08-09 2020-11-20 广州虎牙信息科技有限公司 Game live broadcast potential evaluation method and device, computer storage medium and server
CN109453517B (en) * 2018-10-16 2022-06-10 Oppo广东移动通信有限公司 Virtual character control method and device, storage medium and mobile terminal
CN109529317B (en) * 2018-12-19 2022-05-31 广州方硅信息技术有限公司 Game interaction method and device and mobile terminal
CN111641844B (en) * 2019-03-29 2022-08-19 广州虎牙信息科技有限公司 Live broadcast interaction method and device, live broadcast system and electronic equipment
CN110097811B (en) * 2019-04-01 2021-11-09 郑州万特电气股份有限公司 Electric injury and human body resistance change demonstration system
CN110659560B (en) * 2019-08-05 2022-06-28 深圳市优必选科技股份有限公司 Method and system for identifying associated object
CN111013135A (en) * 2019-11-12 2020-04-17 北京字节跳动网络技术有限公司 Interaction method, device, medium and electronic equipment
CN112218107B (en) * 2020-09-18 2022-07-08 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112153405A (en) * 2020-09-25 2020-12-29 北京字节跳动网络技术有限公司 Game live broadcast interaction method and device
CN112333459B (en) * 2020-10-30 2022-10-25 北京字跳网络技术有限公司 Video live broadcasting method and device and computer storage medium
CN112911178B (en) * 2021-01-18 2024-04-05 奈特视讯科技股份有限公司 Dart connection competition system
CN113766335A (en) * 2021-09-09 2021-12-07 思享智汇(海南)科技有限责任公司 Multi-player participation game live broadcast system and method
CN114374856B (en) * 2022-01-24 2024-04-30 北京优酷科技有限公司 Interaction method and device based on live broadcast

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369288A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on network video and system thereof
CN104645614A (en) * 2015-03-02 2015-05-27 郑州三生石科技有限公司 Multi-player video on-line game method
CN105307737A (en) * 2013-06-14 2016-02-03 洲际大品牌有限责任公司 Interactive video games
CN106341720A (en) * 2016-08-18 2017-01-18 北京奇虎科技有限公司 Method for adding face effects in live video and device thereof
CN106789991A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of multi-person interactive method and system based on virtual scene
CN106792246A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of interactive method and system of fusion type virtual scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11520741B2 (en) * 2011-11-14 2022-12-06 Scorevision, LLC Independent content tagging of media files

Also Published As

Publication number Publication date
CN107592575A (en) 2018-01-16

Similar Documents

Publication Publication Date Title
CN107592575B (en) Live broadcast method, device and system and electronic equipment
CN107566911B (en) Live broadcast method, device and system and electronic equipment
CN107613310B (en) Live broadcast method and device and electronic equipment
CN107680157B (en) Live broadcast-based interaction method, live broadcast system and electronic equipment
US8177611B2 (en) Scheme for inserting a mimicked performance into a scene and providing an evaluation of same
JP7184913B2 (en) Creating Winner Tournaments with Fandom Influence
KR102045449B1 (en) Virtual tennis simulation system and control method for the same
US8241118B2 (en) System for promoting physical activity employing virtual interactive arena
JP5641263B2 (en) Virtual golf simulation apparatus, system including the same, and virtual golf simulation method
JP6088087B2 (en) Screen baseball system management method
WO2017019530A1 (en) Augmented reality rhythm game
US10850186B2 (en) Gaming apparatus and a method for operating a game
CN109045688B (en) Game interaction method and device, electronic equipment and storage medium
CN109529317B (en) Game interaction method and device and mobile terminal
US20080268952A1 (en) Game apparatus and method for controlling game played by multiple players to compete against one another
TWI748119B (en) Recording media and dart game system
US20220270447A1 (en) System and method for enabling wagering event between sports activity players with stored event metrics
CN109068181A (en) Football game exchange method, system, terminal and device based on net cast
CN113992974A (en) Method and device for simulating competition, computing equipment and computer-readable storage medium
CN113596558A (en) Interaction method, device, processor and storage medium in game live broadcast
CN111773702A (en) Control method and device for live game
WO2022137519A1 (en) Viewing method, computer-readable medium, computer system, and information processing device
JP2010067002A (en) Information processing apparatus, method, and program
JP2022545441A (en) Multiplayer multisport indoor game system and method
JP7168870B2 (en) Game system and game control method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210112

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant