CN108012179B - Live broadcast-based data analysis method and device and terminal equipment

Live broadcast-based data analysis method and device and terminal equipment

Info

Publication number
CN108012179B
CN108012179B (application CN201711093043.9A)
Authority
CN
China
Prior art keywords
user
preset
animation
indication information
preset object
Legal status
Active
Application number
CN201711093043.9A
Other languages
Chinese (zh)
Other versions
CN108012179A (en)
Inventor
谢纨楠
Current Assignee
Beijing Mijinghefeng Technology Co., Ltd.
Original Assignee
Beijing Mijinghefeng Technology Co., Ltd.
Application filed by Beijing Mijinghefeng Technology Co., Ltd.
Priority to CN201711093043.9A
Publication of CN108012179A
Application granted
Publication of CN108012179B
Status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207Discounts or incentives, e.g. coupons or rebates
    • G06Q30/0209Incentive being awarded or redeemed in connection with the playing of a video game
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a live broadcast-based data analysis method, a data analysis device, and a terminal device. The method includes: generating and displaying a preset object animation according to a first user operation; acquiring indication information of a second user, and selecting a target object from the preset objects corresponding to the preset object animation according to the indication information; and determining the subject matter corresponding to the target object and sending it to the second user account. The method thereby improves interactivity between the user watching a live broadcast and the user conducting it.

Description

Live broadcast-based data analysis method and device and terminal equipment
Technical Field
The invention relates to the field of internet technology, and in particular to a live broadcast-based data analysis method, a live broadcast-based data analysis device, and a terminal device.
Background
With the continuous development of internet technology and the growth of network bandwidth, using the network has become ever more convenient, and many internet-based industries have emerged, such as webcast live streaming and online shopping.
Webcast live streaming is a new social networking mode: through a network system, users on different communication platforms can watch the same video at the same time. As live broadcast technology matures, more and more live programs are being introduced, such as game, food, and singing live streams.
Disclosure of Invention
In view of the above, the present invention provides a live broadcast-based data analysis method, a live broadcast-based data analysis apparatus, and a corresponding terminal device, which overcome or at least partially solve the above problems, so as to improve the interactivity of live broadcasting.
According to an aspect of the present invention, there is provided a live broadcast-based data analysis method applied to a live broadcast system, the method including: generating and displaying a preset object animation according to a first user operation; acquiring indication information of a second user, and selecting a target object from the preset objects corresponding to the preset object animation according to the indication information; and determining the subject matter corresponding to the target object, and sending the subject matter to the second user account.
Optionally, the generating a preset object animation according to the first user operation includes: acquiring a first user operation and determining preset object information according to the operation, wherein the preset object information includes the preset objects, the number of the preset objects, and the subject matter corresponding to each preset object; and generating the preset object animation according to the preset object information.
Optionally, the acquiring of the indication information of the second user includes at least one of: receiving indication information corresponding to the second user; extracting audio data from the live video data of the second user and identifying the indication information from the audio data; and identifying user features from the live video data of the second user and determining the corresponding indication information according to the user features.
Optionally, identifying user features from the live video data of the second user and determining the corresponding indication information according to the user features includes: extracting corresponding frames of image data from the live video data of the second user; performing image recognition on each frame of image data to recognize the user features in the corresponding image data, the user features including a position feature of the hand; determining the preset object matched with the position feature of the hand according to the preset object animation; and generating the indication information of the second user from the number of the matched preset object.
Optionally, the selecting a target object from the preset objects corresponding to the preset object animation according to the indication information includes: acquiring a number from the indication information, and selecting the preset object corresponding to that number as the target object.
Optionally, the determining a subject matter corresponding to the target object includes: determining the corresponding preset object animation according to the target object, and determining the corresponding subject matter according to the preset object animation.
Optionally, the preset object includes at least one of: red envelope data, golden egg data and turntable data.
Optionally, the preset object animation is implemented based on an animation engine, and the animation engine includes: a 3D animation engine and a 2D animation engine.
According to another aspect of the present invention, there is provided a live broadcast-based data analysis apparatus including:
an animation generation module, used for generating and displaying a preset object animation according to a first user operation; an object selection module, used for acquiring the indication information of a second user and selecting a target object from the preset objects corresponding to the preset object animation according to the indication information; and a data sending module, used for determining the subject matter corresponding to the target object and sending the subject matter to the second user account.
Optionally, the animation generation module is specifically configured to acquire a first user operation and determine preset object information according to the operation, where the preset object information includes the preset objects, the number of the preset objects, and the subject matter corresponding to each preset object; and to generate the preset object animation according to the preset object information.
Optionally, the object selecting module includes:
the receiving submodule is used for receiving the indication information corresponding to the second user;
the extraction submodule is used for extracting audio data from the live video data of the second user and identifying indication information according to the audio data;
and the identification submodule is used for identifying user characteristics from the live video data of the second user and determining corresponding indication information according to the user characteristics.
Optionally, the identification submodule is specifically configured to extract corresponding frames of image data from the live video data of the second user; perform image recognition on each frame of image data to recognize the user features in the corresponding image data, the user features including a position feature of the hand; determine the preset object matched with the position feature of the hand according to the preset object animation; and generate the indication information of the second user from the number of the matched preset object.
Optionally, the object selection module is configured to acquire a number from the indication information and select the preset object corresponding to that number as the target object.
Optionally, the data sending module is configured to determine the corresponding preset object animation according to the target object, and determine the corresponding subject matter according to the preset object animation.
Optionally, the preset object includes at least one of: red envelope data, golden egg data and turntable data.
Optionally, the preset object animation is implemented based on an animation engine, the animation engine including: a 3D animation engine and a 2D animation engine.
According to another aspect of the present invention, there is provided a terminal device including: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform a live based data analytics method as described in one or more of the embodiments of the present invention.
According to another aspect of the present invention, one or more machine-readable media are provided, on which instructions are stored, which when executed by one or more processors, cause a terminal device to perform a live based data analysis method as described in one or more of the embodiments of the present invention.
According to the live broadcast-based data analysis method, in the live broadcast interaction process, a preset object animation can be generated and displayed according to the operation of a first user, so that a second user can view the preset object animation and select one preset object from the preset objects in it; the indication information of the second user is then acquired, and a target object is selected from the preset objects corresponding to the preset object animation according to the indication information, i.e., the preset object selected by the second user is determined; the subject matter corresponding to that preset object is then sent to the second user account, so that the second user obtains the subject matter of the selected preset object. This solves the problem of poor live broadcast interactivity and achieves the beneficial effect of improving interactivity between the user watching the live broadcast and the user conducting it.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating the steps of a live broadcast-based data analysis method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the steps of a live broadcast-based data analysis method according to another embodiment of the present invention;
FIG. 3 is a block diagram of a live broadcast-based data analysis device according to an embodiment of the present invention;
FIG. 4 is a block diagram of a live broadcast-based data analysis device according to another embodiment of the present invention;
FIG. 5 is a block diagram illustrating a partial structure of a terminal device provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiments of the invention can be applied to a live broadcast system that includes a server (or server cluster) and terminal devices. The server provides services for live broadcasting and may consist of a cluster of multiple servers; for example, different servers may provide services such as live broadcast management and data synthesis. The live broadcast system may include multiple terminal devices, including terminal devices that conduct the live broadcast and terminal devices that watch it. In the embodiments of the application, a live broadcast group or live room can be set up for each live video, one or more terminal devices can connect to a live room to watch its live video, and each live room is identified by a live room ID or live broadcast identifier.
A user watching a live broadcast is referred to as a first user (i.e., a viewer), and a user conducting the live broadcast is referred to as a second user (i.e., the anchor of the live broadcast). During the live broadcast, the anchor side, i.e., the terminal device of the second user, collects video data and uploads it to the server; the server generates a live video stream from the video data and feeds it back both to the watching side, i.e., the terminal devices of the first users, and to the anchor side for display, facilitating interaction between the anchor and the audience.
Referring to FIG. 1, a flow chart of the steps of a live broadcast-based data analysis method according to an embodiment of the present invention is shown.
Step 102: generating and displaying a preset object animation according to a first user operation.
While watching the live video, if the first user finds the anchor interesting, the live broadcast good, and so on, the first user can reward the anchor and can also interact with the anchor. The embodiment of the invention provides an interaction mode in which the first user presents a plurality of items to the anchor through an animation in the live video, and the anchor, in turn, can select any one of the items, such as by smashing a golden egg, grabbing a red envelope, or spinning a turntable.
The first user can thus select the interaction to be performed, determine the objects used for it, such as red envelopes, golden eggs, or turntables, and determine the number of each selected object. That is, the first user may instruct the execution of the interaction; after the first user chooses to execute the interaction, the corresponding terminal device may provide a plurality of objects for the interaction to the user, or they may be presented directly on the screen, and the user may select among them.
Therefore, after the first user selects the interactive objects, the reward items, and so on as required, an interaction instruction can be obtained from the first user operation. The interaction instruction instructs the execution of the interaction, and its carried parameters may include interaction configuration information such as the objects and their number; if the interaction is a reward interaction, the configuration information may also include the reward items, such as a reward amount, a yacht, or flowers. Information on the preset objects and the like can then be obtained from the interaction instruction, and an animation corresponding to the preset objects generated. The preset objects are the interactive objects, such as red envelopes or golden eggs; each preset object can have a corresponding subject matter, where the subject matter is the data converted from the reward, such as an amount of money, flowers, or the conversion amount corresponding to the flowers. For example, the first user rewards 10 yuan across three red envelopes corresponding to 5 yuan, 3 yuan, and 2 yuan respectively, or rewards four diamonds across three golden eggs corresponding to two diamonds, one diamond, and so on. The preset object animation is the animation corresponding to the interaction mode, such as an animation of several golden eggs or several red envelopes. The preset object animation is then displayed so that both the first user and the second user can watch it.
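To make the data flow concrete, the following Python sketch models the structures this step manipulates. It is illustrative only: every name (PresetObject, InteractionInstruction, the field names) is an assumption for this example, not terminology fixed by the invention.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PresetObject:
    number: int            # number shown in the animation (1, 2, 3, ...)
    kind: str              # "red_envelope", "golden_egg", or "turntable"
    subject_matter: float  # prize bound to this object, e.g. an amount in yuan

@dataclass
class InteractionInstruction:
    first_user_id: str     # the rewarding viewer
    kind: str              # interaction type chosen by the first user
    total_reward: float    # total subject matter to be rewarded
    objects: List[PresetObject] = field(default_factory=list)

# e.g. three red envelopes carrying 5, 3 and 2 yuan of a 10-yuan reward
instruction = InteractionInstruction(
    first_user_id="viewer-1", kind="red_envelope", total_reward=10.0,
    objects=[PresetObject(i + 1, "red_envelope", v)
             for i, v in enumerate([5.0, 3.0, 2.0])],
)
```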
Step 104: acquiring indication information of a second user, and selecting a target object from the preset objects corresponding to the preset object animation according to the indication information.
After viewing the animation of the preset objects, the anchor may perform a corresponding operation (e.g., a mouse click, voice, etc.) to interact, selecting one of the preset objects in order to obtain the subject matter of the selected object. Because the second user may execute the interaction in several ways, once the anchor has selected a preset object based on the viewed animation, the indication information of the second user can be obtained in corresponding ways, such as from the second user's input operation or the second user's voice. The preset object selected by the second user is then determined, according to the indication information, from the preset objects corresponding to the preset object animation, and that preset object is taken as the target object.
Step 106: determining the subject matter corresponding to the target object, and sending the subject matter to the second user account.
The subject matter corresponding to the target object can then be looked up from the interaction instruction and sent to the second user account; that is, the second user successfully receives the subject matter, e.g., the corresponding amount is placed into the anchor's account. For the first user, the reward to the second user has succeeded, and the interaction with the second user is realized.
Steps 102 to 106 may be executed by the terminal device, or by the server and the terminal device in combination, as required.
In summary, in the live broadcast interaction process, the preset object animation can be generated and displayed according to the first user operation, so that the second user can view the preset object animation and select one preset object from the preset objects in it; the indication information of the second user is then acquired, and a target object is selected from the preset objects corresponding to the preset object animation according to the indication information, i.e., the preset object selected by the second user is determined; the subject matter corresponding to that preset object is then sent to the second user account, so that the second user obtains the subject matter of the selected preset object. This solves the problem of poor live broadcast interactivity and improves interactivity between the user watching the live broadcast and the user conducting it.
After watching the interactive animation, the anchor can interact in various ways. For example, the anchor can move a mouse to select the corresponding preset object, in which case the indication information of the second user is received directly; or the anchor can select the corresponding preset object by voice, in which case the indication information can be determined by extracting the audio data; or the indication information may be determined by analyzing the second user's motion data, e.g., performing a preset motion such as a grasping gesture to select the corresponding preset object.
Referring to FIG. 2, a flow chart of the steps of a live broadcast-based data analysis method according to another embodiment of the present invention is shown.
Step 202: acquiring a first user operation, and determining preset object information according to the operation, where the preset object information includes the preset objects, the number of the preset objects, and the subject matter corresponding to each preset object.
Step 204: generating the preset object animation according to the preset object information.
While watching the live video, if the first user finds the anchor interesting, the live broadcast good, and so on, the first user can reward the anchor and can also interact with the anchor. The embodiment of the invention provides an interaction mode in which the first user presents a plurality of items to the second user through an animation in the live video, and the anchor, correspondingly, can select any one of the items, such as the second user smashing a golden egg, grabbing a red envelope, or spinning a turntable.
Therefore, after the first user chooses to perform the interaction, the corresponding terminal device can provide various objects (such as red envelopes, turntables, golden eggs, etc.) to the first user, and the first user can select one kind of object and its number as required, and also select the items to be rewarded, such as money, flowers, or cars. The terminal corresponding to the first user then generates a corresponding instruction according to this interaction configuration and sends the instruction to the server.
Thus, after the first user selects the interactive objects, the reward items, and so on as required, the interaction instruction can be obtained from the first user operation, and the configuration information obtained from it, where the configuration information at least includes the preset objects, the number of the preset objects, and the total of the subject matter being rewarded. Corresponding subject matter is then allocated to each preset object, according to preset rules or at random, based on the subject matter and the number of objects, and the corresponding preset object animation is generated from the preset objects with their configured subject matter. The preset object may include at least one of: red envelope data, golden egg data, and turntable data; other types may of course also be included and are not enumerated here. The corresponding preset object animation accordingly includes at least one of: a red envelope animation, a golden egg animation, and a turntable animation. The embodiment of the invention can generate each frame of the animation with an animation engine, which may include a 3D animation engine and a 2D animation engine. When each frame is generated with the 3D animation engine, a model and a scene are built according to the shape, size, and number of the preset objects; animation parameters such as the motion trail of the model and the motion of the virtual camera are set as required; and finally specific materials are applied to the model and lighting is added as required. When all of this is done, each frame of animation data is generated, making each object in the animation more three-dimensional and improving the user's visual experience.
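For the allocation "according to preset rules or at random", one plausible rule is a red-envelope-style random split. The sketch below is a minimal example of such a rule, assuming amounts in yuan rounded to 0.01; neither the function name nor the exact splitting rule is prescribed by the invention.

```python
import random

def allocate_reward(total: float, count: int) -> list[float]:
    """Randomly split `total` yuan across `count` preset objects."""
    cents = int(round(total * 100))           # work in fen to avoid float drift
    weights = [random.random() for _ in range(count)]
    scale = sum(weights)
    parts = [int(cents * w / scale) for w in weights[:-1]]
    parts.append(cents - sum(parts))          # remainder keeps the sum exact
    return [p / 100 for p in parts]

# allocate_reward(50.0, 4) might yield, say, [32.4, 0.4, 11.7, 5.5]
```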
After the preset object animation is generated, it can be displayed on the terminal devices of the first user and the second user. The display area can be determined in advance as required; for example, the upper-left quarter of the display interface can be designated as the display area.
In addition, after selecting the preset objects and the subject matter, the first user may also configure the subject matter corresponding to each preset object directly, in which case the preset object information may include the number of preset objects and the preset objects with their configured subject matter.
Step 206: acquiring the indication information of the second user.
After watching the preset object animation, the anchor can execute the corresponding interaction to select a target object from the preset objects and obtain the subject matter corresponding to that target object. In the embodiment of the invention, the anchor may interact in several ways after watching the animation, and correspondingly there are several ways of determining the indication information. Specifically, determining the indication information corresponding to the second user may include at least one of the following:
1. Receiving the indication information corresponding to the second user.
One mode of anchor interaction is to move the mouse to the position of the desired preset object in the preset object animation and click, to click the position of the desired preset object on the screen, or to input the number of the preset object. Thus, after a click operation matching the position of any preset object in the preset object animation is received, or after input information matching the number of any preset object in the animation is received, the indication information of the second user is received; a minimal sketch of the corresponding hit test follows.
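The following Python sketch illustrates one way such a click could be mapped to an object number, assuming the animation layout exposes each preset object's on-screen rectangle; the names and the rectangle format are assumptions for illustration.

```python
from typing import Optional, Tuple

Rect = Tuple[int, int, int, int]  # assumed layout format: (x, y, width, height)

def object_number_at(click: Tuple[int, int],
                     regions: dict[int, Rect]) -> Optional[int]:
    """Return the number of the preset object whose region contains the click."""
    cx, cy = click
    for number, (x, y, w, h) in regions.items():
        if x <= cx <= x + w and y <= cy <= y + h:
            return number        # this number becomes the indication information
    return None                  # the click missed every preset object
```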
2. Extracting audio data from the live video data of the second user, and identifying the indication information from the audio data.
Another mode of anchor interaction is to select a preset object by voice, e.g., the second user says "smash the second golden egg". When the anchor selects the preset object by voice, the audio of that selection is contained in the second user's live video data; the audio data can therefore be extracted from the live video data and analyzed, the speech selecting the preset object recognized, and the indication information determined. A transcript-parsing sketch follows.
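As a sketch of this path, assume the audio track has already been extracted from the stream and transcribed by a speech recognizer; both of those steps are outside the snippet, which only parses the resulting transcript. The phrase patterns and names are illustrative assumptions.

```python
import re
from typing import Optional

ORDINALS = {"first": 1, "second": 2, "third": 3, "fourth": 4}

def indication_from_transcript(transcript: str) -> Optional[int]:
    """Parse an object number from an utterance such as
    'smash the second golden egg' or 'open number 3'."""
    text = transcript.lower()
    m = re.search(r"number\s+(\d+)", text)
    if m:
        return int(m.group(1))
    for word, number in ORDINALS.items():
        if word in text:
            return number
    return None                  # no recognizable indication in this utterance

assert indication_from_transcript("smash the second golden egg") == 2
```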
3. Identifying user features from the live video data of the second user, and determining the corresponding indication information according to the user features.
Yet another mode of anchor interaction is to select a preset object by motion, for example by pointing a hand at the position of the preset object to be selected; when the position of the second user's hand matches the position of a preset object, that preset object is determined to be the user's selection. The position of each preset object can be determined from the preset object animation, and the user features of the second user, including position features, are determined from the second user's live video data; the preset object selected by the second user is then determined from the object positions and the second user's position features, giving the indication information corresponding to the second user. Analyzing the live video data and identifying the user features of the second user may specifically include the following sub-steps:
Sub-step 31: extracting corresponding frames of image data from the live video data of the second user.
Sub-step 32: performing image recognition on each frame of image data to recognize the user features in the corresponding image data, the user features including the position feature of the hand.
Sub-step 33: determining the preset object matched with the position feature of the hand according to the preset object animation.
Sub-step 34: generating the indication information of the second user from the number of the matched preset object.
Specifically, image data can be extracted from the second user's live video data either frame by frame or at a preset interval. Image recognition is then performed on each extracted frame to recognize the second user's user features in it; the user features include the position feature of the hand and may, of course, also include other features such as mouth or foot features. The position feature of the hand in the extracted image data is then matched against the positions of the preset objects in the preset object animation, and the preset object matching the hand position feature is determined; if such a preset object is found, its number is determined and taken as the indication information.
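The sub-steps above can be condensed into a short sketch. The hand detector is passed in as a placeholder (any image-recognition routine returning the hand's position, or None, for a frame), and the hit test reuses the object_number_at function sketched earlier; all names are illustrative.

```python
def indication_from_frames(frames, regions, detect_hand_center):
    """Scan sampled frames for a hand position matching a preset object."""
    for frame in frames:                          # sampled at a preset interval
        hand = detect_hand_center(frame)          # (x, y) or None when no hand
        if hand is None:
            continue
        number = object_number_at(hand, regions)  # hit test from the sketch above
        if number is not None:
            return number                         # matched number = indication
    return None                                   # no match in any sampled frame
```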
Of course, if the second user does not perform a corresponding interaction, the indication information of the second user cannot be acquired. A preset time can therefore be set in advance as required: if the indication information is not acquired within the preset time interval, the interaction between the first user and the second user is determined to have failed, and interaction-failure information can be returned to the first user and the second user.
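A minimal sketch of that timeout rule, with poll_indication and notify standing in for the real transport (both hypothetical):

```python
import time

def await_indication(poll_indication, notify, timeout_s: float = 30.0):
    """Poll for indication information until a preset deadline."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        indication = poll_indication()   # returns an object number or None
        if indication is not None:
            return indication
        time.sleep(0.2)                  # modest poll interval
    notify("interaction failed")         # reported to both first and second user
    return None
```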
Step 208: acquiring a number from the indication information, and selecting the preset object corresponding to the number as the target object.
That is, a target object is selected from the preset objects corresponding to the preset object animation according to the indication information: a number is obtained from the determined indication information, and the preset object corresponding to that number is selected as the target object.
Step 210: determining the corresponding preset object animation according to the target object, and determining the corresponding subject matter according to the preset object animation.
In the embodiment of the invention, several first users may send rewards at the same time, so the corresponding preset object animation is first determined from the target object, and the subject matter corresponding to the target object is then determined from that preset object animation.
Step 212: sending the subject matter to the second user account.
The subject matter is then sent to the second user, i.e., stored into the account corresponding to the second user, and the subject matter corresponding to the preset objects not selected by the second user is returned to the first user's account.
In one example of the invention, the first user chooses to reward the second user through golden-egg smashing with a total of 50 yuan. There are 4 golden eggs, numbered 1, 2, 3, and 4, and the amount corresponding to each golden egg is allocated at random: 32.4 yuan, 0.4 yuan, 11.7 yuan, and 5.5 yuan respectively. It is then determined that the second user smashes golden egg No. 4, so the 5.5 yuan is stored into the second user's account and the remaining 44.5 yuan is returned to the first user's account; that is, the amount the first user actually rewards this time is 5.5 yuan.
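The settlement in this example reduces to simple arithmetic; the sketch below reproduces the numbers above (the function name is an illustrative assumption):

```python
def settle(amounts: dict[int, float], selected: int) -> tuple[float, float]:
    """Return (amount credited to the anchor, amount refunded to the viewer)."""
    won = amounts[selected]
    refund = round(sum(amounts.values()) - won, 2)
    return won, refund

amounts = {1: 32.4, 2: 0.4, 3: 11.7, 4: 5.5}   # the 50-yuan allocation above
assert settle(amounts, 4) == (5.5, 44.5)
```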
In the live broadcast interaction process, a preset object animation is generated and displayed according to the first user's operation, so that the second user can watch it and select one preset object from the preset objects in the animation; the indication information of the second user is then acquired, and the target object is selected from the preset objects corresponding to the preset object animation according to the indication information, i.e., the preset object selected by the second user is determined; the subject matter corresponding to that preset object is then sent to the second user account. Compared with sending the first user's reward value to the second user all at once, this makes the interaction more interesting.
Secondly, the indication information corresponding to the second user can be determined in at least one of the following ways: receiving the indication information corresponding to the second user; extracting audio data from the live video data of the second user and identifying the indication information from the audio data; or identifying the indication information of the second user from the preset object animation and the live video data of the second user. The second user can thus interact with the first user in multiple ways, which increases the diversity of interaction during the live broadcast and improves the user experience.
For simplicity of explanation, the method embodiments are described as a series of acts or combinations, but those skilled in the art will appreciate that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the embodiments of the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
The embodiment of the invention also provides a live broadcast-based data analysis apparatus, applied to a terminal device, as described below.
referring to fig. 3, a block diagram of a live broadcast-based data analysis apparatus according to an embodiment of the present invention is shown. The device comprises: an animation generation module 302, an object selection module 304, and a data transmission module 304, wherein,
the data determining module 302 is configured to generate and display a preset object animation according to a first user operation;
the object selection module 304 is configured to acquire indication information of the second user and select a target object from the preset objects corresponding to the preset object animation according to the indication information;
the data sending module 306 determines a target object corresponding to the target object, and sends the target object to the second user account.
According to the embodiment of the invention, in the live broadcast interaction process, the preset object animation can be generated and displayed according to the first user's operation, so that the second user can watch the preset object animation and select one preset object from the preset objects in it; the indication information of the second user is then acquired, and a target object is selected from the preset objects corresponding to the preset object animation according to the indication information, i.e., the preset object selected by the second user is determined; the subject matter corresponding to that preset object is then sent to the second user account, so that the second user obtains the subject matter of the selected preset object. This solves the problem of poor live broadcast interactivity and improves interactivity between the user watching the live broadcast and the user conducting it.
Referring to FIG. 4, a block diagram of another live broadcast-based data analysis apparatus according to another embodiment of the present invention is shown.
In another embodiment of the present invention, the animation generation module 302 is specifically configured to acquire a first user operation and determine preset object information according to the operation, where the preset object information includes the preset objects, the number of the preset objects, and the subject matter corresponding to each preset object; and to generate the preset object animation according to the preset object information.
In another embodiment of the present invention, the object selection module 304 includes a receiving submodule 3042, an extracting submodule 3044, and an identifying submodule 3046, wherein:
a receiving submodule 3042, configured to receive indication information corresponding to a second user;
an extracting submodule 3044, configured to extract audio data from the live video data of the second user, and identify indication information according to the audio data;
the identifying submodule 3046 is configured to identify a user characteristic from the live video data of the second user, and determine corresponding indication information according to the user characteristic.
In another embodiment of the present invention, the identifying submodule 3046 is specifically configured to extract corresponding frames of image data from the live video data of the second user; perform image recognition on each frame of image data to recognize the user features in the corresponding image data, the user features including a position feature of the hand; determine the preset object matched with the position feature of the hand according to the preset object animation; and generate the indication information of the second user from the number of the matched preset object.
In another embodiment of the present invention, the object selection module 304 is configured to acquire a number from the indication information and select the preset object corresponding to that number as the target object.
In another embodiment of the present invention, the data sending module 306 is configured to determine the corresponding preset object animation according to the target object, and determine the corresponding subject matter according to the preset object animation.
In another embodiment of the present invention, the preset object includes at least one of: red envelope data, golden egg data and turntable data.
In another embodiment of the present invention, the preset object animation is implemented based on an animation engine, the animation engine including: a 3D animation engine and a 2D animation engine.
In the live broadcast interaction process, a preset object animation is generated and displayed according to the first user's operation, so that the second user can watch it and select one preset object from the preset objects in the animation; the indication information of the second user is then acquired, and the target object is selected from the preset objects corresponding to the preset object animation according to the indication information, i.e., the preset object selected by the second user is determined; the subject matter corresponding to that preset object is then sent to the second user account. Compared with sending the first user's reward value to the second user all at once, this makes the interaction more interesting.
Secondly, the indication information corresponding to the second user can be determined in at least one of the following ways: receiving the indication information corresponding to the second user; extracting audio data from the live video data of the second user and identifying the indication information from the audio data; or identifying user features from the live video data of the second user and determining the corresponding indication information according to the user features. The second user can thus interact with the first user in multiple ways, which increases the diversity of interaction during the live broadcast and improves the user experience.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in a terminal device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
As shown in FIG. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for details of the specific technology not disclosed here, please refer to the method part of the embodiment of the present invention. The terminal device may be any device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like.
Fig. 5 is a block diagram illustrating a partial structure related to a terminal device provided in an embodiment of the present invention. Referring to fig. 5, the terminal device includes: a Radio Frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (WiFi) module 570, a processor 580, a power supply 590, and a camera 5110. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each constituent component of the terminal device with reference to fig. 5:
The RF circuit 510 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, downlink information from a base station is passed to the processor 580 for processing after being received, and uplink data is transmitted to the base station. In general, the RF circuit 510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 520 may be used to store software programs and modules, and the processor 580 executes the various functional applications and data processing of the terminal device by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data or a phonebook) created according to the use of the terminal device, and the like. Further, the memory 520 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect touch operations of a user on or near the touch panel 531 (for example, operations of the user on or near the touch panel 531 by using any suitable object or accessory such as a finger or a stylus pen), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 580, and can receive and execute commands sent by the processor 580. In addition, the touch panel 531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 530 may include other input devices 532 in addition to the touch panel 531. In particular, other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the user or provided to the user and the various menus of the terminal device. The display unit 540 may include a display panel 541, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, the operation is transmitted to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in FIG. 5 the touch panel 531 and the display panel 541 are implemented as two separate components to realize the input and output functions of the terminal device, in some embodiments the touch panel 531 and the display panel 541 may be integrated to realize those functions.
The terminal device may also include at least one sensor 550, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 541 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 541 and/or a backlight when the terminal device is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration) for recognizing the attitude of the terminal device, and related functions (such as pedometer and tapping) for vibration recognition; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal device, detailed description is omitted here.
The audio circuit 560, speaker 561, and microphone 562 may provide an audio interface between the user and the terminal device. The audio circuit 560 may transmit the electrical signal converted from received audio data to the speaker 561, which converts it into a sound signal for output; conversely, the microphone 562 converts a collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; the audio data is then output to the processor 580 for processing and, for example, transmitted via the RF circuit 510 to another terminal device, or output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 570, the terminal device can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband internet access. Although FIG. 5 shows the WiFi module 570, it is understood that it is not an essential part of the terminal device and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 580 is a control center of the terminal device, connects various parts of the entire terminal device by various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 520 and calling data stored in the memory 520, thereby performing overall monitoring of the terminal device. Alternatively, processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 580.
The terminal device also includes a power supply 590 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 580 via a power management system to manage charging, discharging, and power consumption via the power management system.
The camera 5110 may perform a photographing function.
Although not shown, the terminal device may further include a bluetooth module or the like, which is not described in detail herein.
An embodiment of the present invention further provides a terminal device, including: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the terminal device to perform a live based data analytics method as described in one or more of the embodiments of the present invention.
An embodiment of the present invention further provides one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a terminal device to perform a live broadcast-based data analysis method as described in one or more of the embodiments of the present invention.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and furthermore may be divided into a plurality of sub-modules, sub-units, or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
An embodiment of the invention discloses A1, a live broadcast-based data analysis method, applied to a live broadcast system, the method comprising:
generating and displaying a preset object animation according to a first user operation;
acquiring indication information of a second user, and selecting a target object from preset objects corresponding to the preset object animation according to the indication information;
and determining an object corresponding to the target object, and sending the object to the second user account.
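To make the three A1 steps concrete, here is a minimal, self-contained sketch; all names (PresetObject, run_interaction, the send callback) are illustrative assumptions, not APIs from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class PresetObject:
    number: int         # the number shown on this object in the animation
    bound_object: str   # the object awarded if this preset object is chosen

def run_interaction(preset_objects, indication_number, second_user_account, send):
    """A1 sketch: the animation has been generated and displayed, the second
    user's indication information carries a number, and the object bound to
    the matching preset object is sent to the second user's account."""
    by_number = {p.number: p for p in preset_objects}
    target = by_number.get(indication_number)
    if target is None:
        return None  # the indication matched no preset object
    send(target.bound_object, second_user_account)
    return target.bound_object

# Usage: three golden eggs; the second user indicates number 2.
eggs = [PresetObject(1, "coupon"), PresetObject(2, "888 coins"), PresetObject(3, "sticker")]
run_interaction(eggs, 2, "user_b", lambda obj, acct: print(f"send {obj} to {acct}"))
```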
A2, the method of A1, wherein the generating a preset object animation according to the first user operation comprises:
acquiring a first user operation, and determining preset object information according to the operation, wherein the preset object information comprises: the preset objects, the number of the preset objects, and an object corresponding to each preset object;
and generating the preset object animation according to the preset object information.
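A hypothetical data layout for the A2 preset object information (field names are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PresetObjectInfo:
    """Preset object information derived from the first user's operation."""
    preset_kind: str   # e.g. "red_envelope", "golden_egg", "turntable"
    count: int         # number of preset objects to animate
    objects_by_number: Dict[int, str] = field(default_factory=dict)  # number -> bound object

def build_animation_spec(info: PresetObjectInfo) -> List[dict]:
    """Turn the preset object information into one animation spec per object."""
    return [{"number": n, "kind": info.preset_kind, "object": obj}
            for n, obj in sorted(info.objects_by_number.items())]

info = PresetObjectInfo("golden_egg", 3, {1: "coupon", 2: "888 coins", 3: "sticker"})
specs = build_animation_spec(info)  # one spec per egg, ordered by number
```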
A3, the method as in A1, wherein the acquiring indication information of the second user comprises at least one of:
receiving indication information corresponding to a second user;
extracting audio data from the live video data of the second user, and identifying indication information according to the audio data;
and identifying user characteristics from the live video data of the second user, and determining corresponding indication information according to the user characteristics.
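The three A3 channels might be tried in order of reliability; in this sketch each argument stands in for the output of an upstream component (chat message, speech recognizer, hand matcher), all of which are assumptions:

```python
import re

def parse_number(text: str):
    """Pull the first number out of a chat message or recognized speech,
    e.g. 'I pick 2' -> 2."""
    m = re.search(r"\d+", text)
    return int(m.group()) if m else None

def get_indication(message=None, spoken_text=None, hand_number=None):
    """Try the three channels in turn and return an indication number, or None."""
    if message is not None:          # channel 1: indication sent directly
        return parse_number(message)
    if spoken_text is not None:      # channel 2: text from audio recognition
        return parse_number(spoken_text)
    return hand_number               # channel 3: number from the hand matcher (see A4)
```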
A4, the method as in A3, wherein the identifying user characteristics from the live video data of the second user and determining corresponding indication information according to the user characteristics comprises:
extracting corresponding frame image data from the live video data of the second user;
performing image recognition on each frame of image data to recognize user characteristics in the corresponding image data, wherein the user characteristics comprise: a position characteristic of a hand;
determining a preset object matched with the position characteristics of the hand according to the preset object animation;
and generating the indication information of the second user from the number corresponding to the matched preset object.
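One plausible, purely illustrative realization of A4: detect a hand position per sampled frame and test it against the on-screen regions occupied by the animated preset objects. The hand detector itself is assumed to exist upstream and is passed in as a function:

```python
from typing import Callable, Dict, Iterable, Optional, Tuple

Rect = Tuple[float, float, float, float]  # left, top, right, bottom in screen coordinates

def match_hand_to_preset(frames: Iterable,
                         detect_hand: Callable[[object], Optional[Tuple[float, float]]],
                         regions_by_number: Dict[int, Rect]) -> Optional[int]:
    """Return the number of the first preset object whose on-screen region
    contains a detected hand position; that number becomes the indication."""
    for frame in frames:
        pos = detect_hand(frame)   # (x, y) or None if no hand in this frame
        if pos is None:
            continue
        x, y = pos
        for number, (left, top, right, bottom) in regions_by_number.items():
            if left <= x <= right and top <= y <= bottom:
                return number
    return None
```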
A5, the method according to A4, wherein the selecting the target object from the preset objects corresponding to the preset object animation according to the indication information includes:
and acquiring a number from the indication information, and selecting a preset object corresponding to the number as a target object.
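A5 then reduces to a dictionary lookup; a minimal sketch, assuming the indication information is a dict carrying a "number" key:

```python
def select_target(indication: dict, preset_objects_by_number: dict):
    """A5: the number carried in the indication information selects the
    preset object registered under that number (None if no match)."""
    return preset_objects_by_number.get(indication.get("number"))
```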
A6, the method of A1, wherein the determining an object corresponding to the target object comprises:
determining a corresponding preset object animation according to the target object, and determining the corresponding object according to the preset object animation.
A7, the method according to any one of A1-A6, wherein the preset objects comprise at least one of: red envelope data, golden egg data, and turntable data.
A8, the method according to any one of A1-A6, wherein the preset object animation is implemented based on an animation engine, and the animation engine comprises: a 3D animation engine and a 2D animation engine.
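A8 leaves the engine choice open; a hypothetical dispatch between the two engine kinds (the classes and the capability flag are assumptions, not engines named by the disclosure):

```python
class Engine2D:
    def play(self, spec: dict) -> None:
        print("2D tween:", spec)   # stand-in for a sprite/tween animation

class Engine3D:
    def play(self, spec: dict) -> None:
        print("3D scene:", spec)   # stand-in for a 3D-rendered animation

def play_preset_animation(spec: dict, device_supports_3d: bool) -> None:
    """Drive the preset object animation with a 3D engine where the device
    can handle it, falling back to a 2D engine otherwise."""
    engine = Engine3D() if device_supports_3d else Engine2D()
    engine.play(spec)
```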
An embodiment of the invention also discloses B9, a live broadcast-based data analysis apparatus, comprising:
the animation generation module is used for generating and displaying a preset object animation according to the first user operation;
the object selection module is used for acquiring the indication information of the second user and selecting a target object from preset objects corresponding to the preset object animation according to the indication information;
and the data sending module is used for determining an object corresponding to the target object and sending the object to the second user account.
B10, the apparatus of B9, wherein the animation generation module is specifically configured to acquire a first user operation and determine preset object information according to the operation, wherein the preset object information comprises: the preset objects, the number of the preset objects, and an object corresponding to each preset object; and to generate the preset object animation according to the preset object information.
B11, the apparatus as described in B9, the object selection module comprising:
the receiving submodule is used for receiving the indication information corresponding to the second user;
the extraction submodule is used for extracting audio data from the live video data of the second user and identifying indication information according to the audio data;
and the identification submodule is used for identifying user characteristics from the live video data of the second user and determining corresponding indication information according to the user characteristics.
B12, the apparatus of B11, wherein the recognition submodule is specifically configured to extract corresponding frames of image data from the live video data of the second user; perform image recognition on each frame of image data to recognize user characteristics in the corresponding image data, wherein the user characteristics comprise: a position characteristic of a hand; determine a preset object matched with the position characteristic of the hand according to the preset object animation; and generate the indication information of the second user from the number corresponding to the matched preset object.
B13, the apparatus of B12, wherein the object selection module is configured to acquire a number from the indication information and select the preset object corresponding to the number as the target object.
B14, the apparatus of B9, wherein the data sending module is configured to determine a corresponding preset object animation according to the target object, and determine the corresponding object according to the preset object animation.
B15, the device according to any one of B9-B14, wherein the preset objects include at least one of: red envelope data, golden egg data and turntable data.
B16, the apparatus of any one of B9-B14, wherein the preset object animation is implemented based on an animation engine, and the animation engine comprises: a 3D animation engine and a 2D animation engine.
An embodiment of the invention also discloses C17, a terminal device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform a live broadcast-based data analysis method as described in one or more of A1-A8.
An embodiment of the present invention also discloses D18, one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a terminal device to perform a live broadcast-based data analysis method as described in one or more of A1-A8.

Claims (18)

1. A live broadcast-based data analysis method, applied to a live broadcast system, the method comprising:
generating and displaying a preset object animation according to a first user operation;
acquiring indication information of a second user, and selecting a target object from preset objects corresponding to the preset object animation according to the indication information;
determining an object corresponding to the target object, and sending the object to the second user account;
returning objects corresponding to the preset objects not selected by the second user to the account of the first user;
and if the indication information of the second user is not acquired within a preset time interval, determining that the interaction between the first user and the second user fails, and returning the information of the interaction failure to the first user and the second user.
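For illustration only (not claim language): claim 1 extends A1 with a refund of unselected objects and a timeout; a minimal sketch, assuming a polling loop and callback-style award/refund helpers:

```python
import time

def await_indication(get_indication, timeout_s: float, poll_s: float = 0.5):
    """Poll for the second user's indication; None after timeout_s means
    the interaction failed (interval and polling scheme are assumptions)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        indication = get_indication()
        if indication is not None:
            return indication
        time.sleep(poll_s)
    return None

def settle(objects_by_number: dict, chosen: int, award, refund) -> None:
    """Send the chosen object to the second user's account and return the
    objects bound to unselected preset objects to the first user's account."""
    for number, obj in objects_by_number.items():
        (award if number == chosen else refund)(obj)
```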
2. The method of claim 1, wherein generating a preset object animation according to the first user operation comprises:
acquiring a first user operation, and determining preset object information according to the operation, wherein the preset object information comprises: the preset objects, the number of the preset objects, and an object corresponding to each preset object;
and generating the preset object animation according to the preset object information.
3. The method of claim 1, wherein the acquiring indication information of the second user comprises at least one of:
receiving indication information corresponding to a second user;
extracting audio data from the live video data of the second user, and identifying indication information according to the audio data;
and identifying user characteristics from the live video data of the second user, and determining corresponding indication information according to the user characteristics.
4. The method of claim 3, wherein the identifying user characteristics from the live video data of the second user and determining corresponding indication information according to the user characteristics comprises:
extracting corresponding frame image data from the live video data of the second user;
performing image recognition on each frame of image data to recognize user characteristics in the corresponding image data, wherein the user characteristics comprise: a position characteristic of a hand;
determining a preset object matched with the position characteristics of the hand according to the preset object animation;
and generating the indication information of the second user from the number corresponding to the matched preset object.
5. The method of claim 4, wherein the selecting the target object from the preset objects corresponding to the preset object animation according to the indication information comprises:
and acquiring a number from the indication information, and selecting a preset object corresponding to the number as a target object.
6. The method of claim 1, wherein the determining an object corresponding to the target object comprises:
determining a corresponding preset object animation according to the target object, and determining the corresponding object according to the preset object animation.
7. The method of any one of claims 1-6, wherein the preset object comprises at least one of: red envelope data, golden egg data and turntable data.
8. The method of any of claims 1-6, wherein the preset object animation is implemented based on an animation engine comprising: a 3D animation engine and a 2D animation engine.
9. A live broadcast-based data analysis apparatus, comprising:
the animation generation module is used for generating and displaying a preset object animation according to the first user operation;
the object selection module is used for acquiring the indication information of a second user and selecting a target object from preset objects corresponding to the preset object animation according to the indication information;
the data sending module is used for determining an object corresponding to the target object and sending the object to the second user account;
the device is also used for returning objects corresponding to the preset objects not selected by the second user to the account of the first user;
the device is further used for determining that the interaction between the first user and the second user fails if the indication information of the second user is not acquired within a preset time interval, and returning the information of the interaction failure to the first user and the second user.
10. The apparatus of claim 9,
the animation generation module is specifically configured to acquire a first user operation and determine preset object information according to the operation, wherein the preset object information comprises: the preset objects, the number of the preset objects, and an object corresponding to each preset object; and to generate the preset object animation according to the preset object information.
11. The apparatus of claim 9, wherein the object selection module comprises:
the receiving submodule is used for receiving the indication information corresponding to the second user;
the extraction submodule is used for extracting audio data from the live video data of the second user and identifying indication information according to the audio data;
and the identification submodule is used for identifying user characteristics from the live video data of the second user and determining corresponding indication information according to the user characteristics.
12. The apparatus of claim 11,
the identification submodule is specifically used for extracting corresponding frames of image data from the live video data of the second user; performing image recognition on each frame of image data to recognize user characteristics in the corresponding image data, wherein the user characteristics comprise: a position characteristic of a hand; determining a preset object matched with the position characteristic of the hand according to the preset object animation; and generating the indication information of the second user from the number corresponding to the matched preset object.
13. The apparatus of claim 12,
and the object selection module is used for acquiring a number from the indication information and selecting a preset object corresponding to the number as a target object.
14. The apparatus of claim 9,
and the data sending module is used for determining a corresponding preset object animation according to the target object and determining the corresponding object according to the preset object animation.
15. The apparatus of any one of claims 9-14, wherein the preset object comprises at least one of: red envelope data, golden egg data and turntable data.
16. The apparatus of any of claims 9-14, wherein the preset object animation is implemented based on an animation engine, the animation engine comprising: a 3D animation engine and a 2D animation engine.
17. A terminal device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the live broadcast-based data analysis method of any of claims 1-8.
18. A computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause a terminal device to perform the live broadcast-based data analysis method of any of claims 1-8.
CN201711093043.9A 2017-11-08 2017-11-08 Live broadcast-based data analysis method and device and terminal equipment Active CN108012179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711093043.9A CN108012179B (en) 2017-11-08 2017-11-08 Live broadcast-based data analysis method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108012179A CN108012179A (en) 2018-05-08
CN108012179B (en) 2020-08-21

Family

ID=62051294


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110166842B (en) * 2018-11-19 2020-10-16 深圳市腾讯信息技术有限公司 Video file operation method and device and storage medium
CN111385663B (en) * 2018-12-28 2021-06-15 广州市百果园信息技术有限公司 Live broadcast interaction method, device, equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2884407C (en) * 2012-09-06 2017-11-21 Decision-Plus M.C. Inc. System and method for broadcasting interactive content
CN104243463B (en) * 2014-09-09 2017-09-15 广州华多网络科技有限公司 A kind of method and apparatus for showing virtual objects
CN105654354A (en) * 2016-03-11 2016-06-08 武汉斗鱼网络科技有限公司 User interaction optimization method and system in live video
CN106028170A (en) * 2016-07-04 2016-10-12 天脉聚源(北京)传媒科技有限公司 Interaction method and apparatus in video live broadcasting process
CN106231435B (en) * 2016-07-26 2019-08-02 广州华多网络科技有限公司 The method, apparatus and terminal device of electronics present are given in network direct broadcasting
CN106878822A (en) * 2016-12-31 2017-06-20 天脉聚源(北京)科技有限公司 Reward Interaction View many method and apparatus in TV programme
CN106981015A (en) * 2017-03-29 2017-07-25 武汉斗鱼网络科技有限公司 The implementation method of interactive present
CN107197319B (en) * 2017-05-19 2019-06-18 武汉斗鱼网络科技有限公司 Turntable interactive approach and device
CN107277559B (en) * 2017-06-20 2020-02-07 武汉斗鱼网络科技有限公司 Turntable interaction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant