CN110119700B - Avatar control method, avatar control device and electronic equipment - Google Patents


Info

Publication number
CN110119700B
CN110119700B (application CN201910358491.XA)
Authority
CN
China
Prior art keywords
avatar
control instruction
anchor
virtual machine
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910358491.XA
Other languages
Chinese (zh)
Other versions
CN110119700A (en)
Inventor
吴施祈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd
Priority to CN201910358491.XA
Publication of CN110119700A
Priority to US17/605,476
Priority to PCT/CN2020/087139
Priority to SG11202111640RA
Application granted
Publication of CN110119700B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed

Abstract

The application provides an avatar control method, an avatar control apparatus, and an electronic device, relating to the technical field of live streaming. The avatar control method includes the following steps: analyzing a video stream captured of an anchor to generate a motion control instruction; determining whether a virtual machine position control instruction generated based on the anchor has been obtained; and, if the virtual machine position control instruction has been obtained, controlling the avatar according to the virtual machine position control instruction and the motion control instruction. This method makes the display of the avatar more engaging.

Description

Avatar control method, avatar control device and electronic equipment
Technical Field
The present application relates to the technical field of live streaming, and in particular to an avatar control method, an avatar control apparatus, and an electronic device.
Background
In the prior art, to make live streaming more engaging, an avatar may be displayed in the live frame in place of the anchor's actual image. However, existing live streaming technology generally controls the avatar with low precision, so the avatar display is not as engaging as it could be.
Disclosure of Invention
In view of the above, an object of the present application is to provide an avatar control method, an avatar control apparatus, and an electronic device that make the display of an avatar more engaging.
To achieve the above object, the embodiments of the present application adopt the following technical solutions:
An avatar control method, applied to a live streaming device and used to control an avatar displayed in a live frame, the method comprising:
analyzing a video stream captured of an anchor to generate a motion control instruction;
determining whether a virtual machine position control instruction generated based on the anchor has been obtained; and
if the virtual machine position control instruction has been obtained, controlling the avatar according to the virtual machine position control instruction and the motion control instruction.
In a preferred option of the embodiments of the present application, in the avatar control method above, the step of controlling the avatar according to the virtual machine position control instruction and the motion control instruction includes:
controlling the display posture of the avatar in the live frame according to the motion control instruction; and
controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction.
In a preferred option of the embodiments of the present application, in the avatar control method above, the virtual machine position control instruction includes angle information, and the step of controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction includes:
determining the display angle of the avatar in the live frame according to the angle information, and obtaining new three-dimensional image data for the avatar at that display angle based on the motion control instruction and pre-constructed three-dimensional image data.
In a preferred option of the embodiments of the present application, in the avatar control method above, the virtual machine position control instruction includes proportion information, and the step of controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction includes:
determining the display size of the avatar in the live frame according to the proportion information and the initial size of the avatar.
In a preferred option of the embodiments of the present application, in the avatar control method above, the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained includes:
determining whether a virtual machine position control instruction generated based on an operation of the anchor has been obtained.
In a preferred option of the embodiments of the present application, in the avatar control method, the step of determining whether a virtual machine position control instruction generated based on an operation of the anchor has been obtained includes:
when voice information generated based on an operation of the anchor is received, determining whether preset information is present in the voice information, and, when the preset information is present, determining that a virtual machine position control instruction generated based on an operation of the anchor has been obtained.
In a preferred option of the embodiments of the present application, in the avatar control method above, the preset information includes keyword information and/or melody feature information.
In a preferred option of the embodiments of the present application, in the avatar control method above, the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained includes:
determining, based on the result of analyzing a video frame captured of the anchor, whether a virtual machine position control instruction generated based on the anchor has been obtained.
In a preferred option of the embodiments of the present application, in the avatar control method, the step of determining, based on the result of analyzing a video frame captured of the anchor, whether a virtual machine position control instruction generated based on the anchor has been obtained includes:
determining, based on image information extracted from a video frame captured of the anchor, whether preset information is present in the image information, and, when the preset information is present, determining that a virtual machine position control instruction generated based on the anchor has been obtained.
In a preferred option of the embodiments of the present application, in the avatar control method above, the preset information includes motion information, depth information, identification object information, and/or identification color information.
In a preferred option of the embodiments of the present application, in the avatar control method above, the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained includes:
determining, based on a preset condition, whether a virtual machine position control instruction generated based on the anchor has been obtained, wherein the preset condition is determined based on historical live streaming data of the anchor.
In a preferred option of the embodiments of the present application, in the avatar control method above, the step of analyzing a video stream captured of the anchor to generate a motion control instruction includes:
performing image analysis on each video frame in the video stream captured of the anchor, and generating a motion control instruction according to the image analysis result of each video frame; or
extracting a current video frame from the video stream captured of the anchor at a preset period, performing image analysis on the current video frame, and generating a motion control instruction according to the image analysis result of the current video frame.
An embodiment of the present application further provides an avatar control apparatus, applied to a live streaming device and used to control an avatar displayed in a live frame. The apparatus includes:
a control instruction generation module, configured to analyze a video stream captured of the anchor to generate a motion control instruction;
a control instruction judgment module, configured to determine whether a virtual machine position control instruction generated based on the anchor has been obtained; and
an avatar control module, configured to control the avatar according to the virtual machine position control instruction and the motion control instruction when the virtual machine position control instruction has been obtained.
On this basis, an embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when run on the processor, the computer program implements the avatar control method described above.
On this basis, an embodiment of the present application further provides a computer-readable storage medium storing a program which, when executed, implements the avatar control method described above.
The present application provides an avatar control method, an avatar control apparatus, and an electronic device. On the basis of controlling the avatar from a video stream captured of the anchor, if a virtual machine position control instruction generated based on the anchor is also obtained, the avatar can be controlled jointly with that instruction so that it is shown from different camera positions. This creates the effect of a stage performance, makes the avatar display more engaging, and improves the user experience during an avatar live stream.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of an avatar control method according to an embodiment of the present application.
Fig. 3 is a system block diagram of a live broadcast system provided in an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating an effect of controlling an avatar based on scale information according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating another effect of controlling an avatar based on scale information according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating an effect of controlling an avatar based on angle information according to an embodiment of the present application.
Fig. 7 is a block diagram illustrating functional modules included in an avatar control apparatus according to an embodiment of the present disclosure.
Reference numerals: 10: electronic device; 12: memory; 14: processor; 100: avatar control apparatus; 110: control instruction generation module; 130: control instruction judgment module; 150: avatar control module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. In the description of the present application, the terms "first," "second," and the like are used solely to distinguish one item from another and are not to be construed as indicating or implying relative importance.
As shown in fig. 1, an embodiment of the present application provides an electronic device 10. The electronic device 10 may be a live streaming device, for example, a background server communicatively connected to a terminal device used by an anchor during a live stream.
In detail, the electronic device 10 may include a memory 12, a processor 14, and an avatar control apparatus 100. The memory 12 and the processor 14 are electrically connected, directly or indirectly, to enable the transfer or interaction of data; for example, they may be electrically connected via one or more communication buses or signal lines. The avatar control apparatus 100 includes at least one software function module, which may be stored in the memory 12 in the form of software or firmware. The processor 14 is configured to execute executable computer programs stored in the memory 12, such as the software function modules and computer programs included in the avatar control apparatus 100, so as to perform high-precision control of an avatar in a live frame.
The Memory 12 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 14 may be an integrated circuit chip having signal processing capabilities. The Processor 14 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It is understood that the structure shown in fig. 1 is only illustrative; the electronic device 10 may include more or fewer components than shown in fig. 1, or have a configuration different from that shown in fig. 1. For example, it may further include a communication unit for information interaction with other live streaming devices (e.g., a terminal device used by the anchor or a terminal device used by a viewer).
With reference to fig. 2, an embodiment of the present application further provides an avatar control method applicable to the electronic device 10, where the method steps of the avatar control method may be implemented by the electronic device 10. The specific flow shown in fig. 2 is described in detail below.
Step S110: analyze the video stream captured of the anchor to generate a motion control instruction.
In this embodiment, a video stream captured of the anchor may be acquired first; the video stream may then be analyzed (for example, by image analysis), and a motion control instruction may be generated based on the analysis result.
Step S130: determine whether a virtual machine position control instruction generated based on the anchor has been obtained. (A "machine position" here is a camera position; a virtual machine position control instruction therefore specifies the virtual camera from which the avatar is presented.)
In this embodiment, after the motion control instruction is generated in step S110, it may be determined whether a virtual machine position control instruction generated based on the anchor has been obtained; when it is determined that such an instruction has been obtained, step S150 may be performed.
Step S150: control the avatar according to the virtual machine position control instruction and the motion control instruction.
In this embodiment, when it is determined in step S130 that the virtual machine position control instruction has been obtained, the avatar may be controlled based on both the virtual machine position control instruction and the motion control instruction. That is, on the basis of controlling the avatar with the motion control instruction, the avatar may additionally be controlled with the virtual machine position control instruction, thereby improving control precision.
Moreover, because a virtual machine position control instruction is used, the avatar can be shown as it would appear from different camera positions, creating the effect of a stage performance in the live room. This makes the live stream more immersive, makes the avatar display more engaging, and improves the user experience.
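To make the three-step flow concrete, the following minimal Python sketch (not part of the original disclosure; every function and field name in it is hypothetical) shows how steps S110, S130, and S150 might fit together:

```python
# Illustrative sketch only; all names and values are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraInstruction:
    """A stand-in for the 'virtual machine position control instruction'."""
    scale: Optional[float] = None   # proportion information
    angle: Optional[float] = None   # angle information, in degrees

def analyze_pose(frame) -> str:
    """Stand-in for the image analysis / pose recognition of step S110."""
    return "dancing"

def control_avatar(frame, camera_cmd: Optional[CameraInstruction]) -> dict:
    pose = analyze_pose(frame)                          # step S110: motion control instruction
    state = {"pose": pose, "scale": 1.0, "angle": 0.0}  # default framing
    if camera_cmd is not None:                          # step S130: camera command obtained?
        if camera_cmd.scale is not None:                # step S150: apply both instructions
            state["scale"] = camera_cmd.scale
        if camera_cmd.angle is not None:
            state["angle"] = camera_cmd.angle
    return state                                        # otherwise, motion control alone

print(control_avatar(frame=None, camera_cmd=CameraInstruction(scale=2.0, angle=90.0)))
# {'pose': 'dancing', 'scale': 2.0, 'angle': 90.0}
```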
Optionally, the manner of acquiring the video stream analyzed in step S110 is not limited.
For example, in an alternative example, with reference to fig. 3, the electronic device 10 may be a background server communicatively connected to a first terminal, which is in turn communicatively connected to an image acquisition device (such as a camera). The first terminal is a terminal device (such as a mobile phone, tablet computer, or computer) used by the anchor during the live stream; the image acquisition device captures images of the anchor during the live stream to obtain a video stream and sends the video stream to the background server through the first terminal.
It should be noted that the image acquisition device may be a standalone device or may be integrated with the first terminal; for example, it may be the built-in camera of a terminal device such as a mobile phone, tablet computer, or computer.
The manner of analyzing the video stream in step S110 is likewise not limited. For example, in an alternative example, video frames may be randomly extracted from the video stream, and corresponding motion control instructions may be generated based on the extracted frames.
As another example, step S110 may include the following steps: extracting a current video frame from the video stream captured of the anchor at a preset period, performing image analysis on the current video frame, and generating a motion control instruction according to the image analysis result of the current video frame.
That is, after the video stream captured of the anchor is acquired, a video frame (the current video frame) may be extracted from the video stream once per preset period; image analysis (such as feature extraction) is then performed on the extracted frame, and a corresponding motion control instruction is generated based on the analysis result.
Because video frames are extracted at a fixed period, a motion control instruction generated from the extracted frames can still largely reflect the anchor's real motion while reducing the amount of data to process, which relieves pressure on the processor 14 and improves the real-time performance of the live stream.
It should be noted that the preset period is not limited. For example, it may be a preset duration (e.g., 0.1 s, 0.2 s, or 0.3 s), meaning a video frame is extracted once per elapsed duration; or it may be a preset number of frames (e.g., 1, 2, or 3 frames), meaning a video frame is extracted once per that many frames.
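A minimal sketch of this periodic extraction (illustrative only; the generator name and interval are assumptions, not from the patent):

```python
# Illustrative sketch only: extract one frame per preset period (here, every Nth frame).
def sample_frames(video_stream, frame_interval=3):
    """Yield the 'current video frame' once per preset period of `frame_interval` frames."""
    for index, frame in enumerate(video_stream):
        if index % frame_interval == 0:
            yield frame  # only this frame undergoes image analysis

# Dummy usage: frames 0..9 sampled every 3rd frame -> 0, 3, 6, 9.
print(list(sample_frames(range(10), frame_interval=3)))
```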
As another example, step S110 may include the following steps: performing image analysis on each video frame in the video stream captured of the anchor, and generating a motion control instruction according to the image analysis result of each video frame.
That is, after the video stream captured of the anchor is acquired, every video frame in the video stream may be extracted; image analysis (such as feature extraction) is then performed on each extracted frame, and a corresponding motion control instruction is generated based on the image analysis result of each frame.
Because a corresponding motion control instruction is generated for every video frame in the video stream, the avatar's motion can fully reflect the anchor's real motion when the avatar is controlled with these instructions, so the avatar display is more flexible, transitions between motions are smoother, and the viewer experience is further improved.
In step S110, when image analysis, feature extraction, or similar processing is performed, a trained neural network may be used to recognize a video frame in the video stream, obtain the anchor's motion posture in that frame, and generate the motion control instruction based on the motion posture.
Optionally, when step S130 is executed to determine whether a virtual machine position control instruction generated based on the anchor has been obtained, the manner of determination may differ according to how the virtual machine position control instruction is generated.
For example, in an alternative example (example one), the virtual machine position control instruction may be generated based on an operation of the anchor. In detail, the first terminal may generate a corresponding virtual machine position control instruction in response to an operation of the anchor and send it to the background server; the background server then determines, upon receiving the instruction, that the virtual machine position control instruction has been obtained.
The manner in which the anchor operates the first terminal is not limited and may include, but is not limited to, operations on input devices such as keys (physical keys or on-screen virtual keys), a keyboard, a mouse, or a microphone. For example, the anchor may enter text through a keyboard or speak through a microphone (e.g., "amplify 2 times" or "show the back", or simple pre-agreed numbers or words such as "1" for 1x magnification and "2" for 2x magnification, provided the correspondence is established in advance), or perform a specific action with a mouse (e.g., click the avatar shown on the first terminal and drag the mouse left or right; once the first terminal recognizes the action, it can generate the corresponding virtual machine position control instruction from a pre-established correspondence).
That is, upon receiving voice information generated based on an operation of the anchor (operating the first terminal through a microphone), it may be determined whether preset information is present in the voice information; when the preset information is present, it is determined that a virtual machine position control instruction generated based on an operation of the anchor has been obtained.
The preset information may be keyword information or other information. For example, when the voice information is a song (whether played by a device or sung by the anchor), the preset information may be melody feature information. That is, a trained neural network may be used to recognize the melody features of the voice information sent by the first terminal, and the virtual machine position control instruction may be determined according to the recognized melody features. For example, during a gentle melody, a control instruction that gradually pulls the camera away may be generated; during a climax or chorus, a control instruction for a close-up of the face may be generated.
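As an illustration of the keyword branch, a simple table lookup could map recognized phrases to camera commands (the phrase-to-command table below is invented for this sketch and is not part of the disclosure):

```python
# Illustrative sketch only; the keyword table is hypothetical.
KEYWORD_COMMANDS = {
    "amplify 2 times": {"scale": 2.0},
    "show the back":   {"angle": 180.0},
    "1":               {"scale": 1.0},  # simple pre-agreed shorthand
    "2":               {"scale": 2.0},
}

def camera_command_from_speech(transcript):
    """Return a virtual machine position control instruction if the recognized
    speech contains preset keyword information; otherwise return None."""
    for keyword, command in KEYWORD_COMMANDS.items():
        if keyword in transcript:
            return command
    return None

print(camera_command_from_speech("please show the back"))  # {'angle': 180.0}
print(camera_command_from_speech("hello everyone"))        # None
```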
As another example (example two), the virtual machine position control instruction may be generated based on the result of analyzing video frames of the video stream in step S110. That is, whether a virtual machine position control instruction generated based on the anchor has been obtained can be determined from the result of analyzing a video frame captured of the anchor.
In detail, information may be extracted from a video frame in the video stream to determine whether the resulting image information contains preset information; when it does, a corresponding virtual machine position control instruction may be generated based on the preset information, and it is determined that the virtual machine position control instruction has been obtained.
The specific content of the preset information is not limited; for example, it may include, but is not limited to, motion information, depth information, or other information. In detail, in an alternative example, the preset information may be motion information.
That is, a corresponding virtual machine position control instruction may be generated based on a specific action of the anchor. For example, when the anchor extends the left hand, a control instruction showing the avatar's left side may be generated; when the anchor extends the right hand, a control instruction showing the avatar's right side may be generated; when the anchor brings the left and right hands together, a control instruction showing the avatar's back may be generated; and when the anchor squats, a control instruction showing the top of the avatar may be generated.
In another alternative example, the other information may be identification object information or identification color information. That is, the anchor may carry an identification object, or wear clothing or accessories in an identification color, and the virtual machine position control instruction is obtained by recognizing the identification object or identification color.
For example, a gradual zoom-in control instruction may be generated according to the size of the recognized identification object, or according to whether its color is red, orange, yellow, green, cyan, blue, or purple. In other words, when identification objects of different sizes, or clothing or accessories of different colors, are placed on different parts of the anchor, different identification objects or colors are recognized as the anchor moves over time, and the avatar can thus present a stage effect moving from a long shot to a close-up, or from a close-up to a long shot.
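The gesture and color mappings described above might be tabulated as follows (illustrative only; the specific gesture labels, colors, and scale values are assumptions for this sketch):

```python
# Illustrative sketch only; gesture labels, colors, and scale values are hypothetical.
GESTURE_COMMANDS = {
    "left_hand_out":  {"angle": 90.0},   # show the avatar's left side
    "right_hand_out": {"angle": 270.0},  # show the avatar's right side
    "hands_together": {"angle": 180.0},  # show the avatar's back
    "squat":          {"view": "top"},   # show the top of the avatar
}

# Warmer marker colors pull the virtual camera closer, so moving between
# differently colored markers yields a long-shot-to-close-up stage effect.
COLOR_SCALE = {"red": 2.0, "orange": 1.75, "yellow": 1.5, "green": 1.25,
               "cyan": 1.0, "blue": 0.75, "purple": 0.5}

def camera_command_from_image(image_info):
    """Map preset information recognized in a video frame to a camera command."""
    if image_info.get("gesture") in GESTURE_COMMANDS:
        return GESTURE_COMMANDS[image_info["gesture"]]
    if image_info.get("marker_color") in COLOR_SCALE:
        return {"scale": COLOR_SCALE[image_info["marker_color"]]}
    return None  # no preset information in this frame

print(camera_command_from_image({"gesture": "hands_together"}))  # {'angle': 180.0}
print(camera_command_from_image({"marker_color": "red"}))        # {'scale': 2.0}
```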
Further, to encourage the anchor's enthusiasm for live streaming, in this embodiment step S130 may additionally determine whether to obtain the virtual machine position control instruction based on the anchor's historical live streaming data.
In detail, in example one, after the virtual machine position control instruction sent by the first terminal is received, or, in example two, after the corresponding virtual machine position control instruction is generated based on the preset information, it may be further determined whether the instruction satisfies a preset condition determined from the anchor's historical live streaming data; only when the instruction satisfies that preset condition is it determined that the virtual machine position control instruction has been obtained.
In an alternative example, the historical live streaming data may be the anchor's level: the higher the level, the more virtual machine position control instructions may be obtained. For example, if the anchor's level is below 5, it may be determined that no virtual machine position control instruction can be obtained; if the level is between 5 and 10 inclusive, some of the virtual machine position control instructions can be obtained; and if the level is above 10, any virtual machine position control instruction can be obtained.
In the above example, availability is decided by level range; in other examples, each level may unlock a different virtual machine position control instruction.
The historical live streaming data may further include the number or value of gifts the anchor has received, the volume of viewer comments (bullet screens) during the anchor's streams, the anchor's maximum concurrent viewer count, and so on. For example, the larger the number or value of gifts received, the larger the comment volume, or the larger the maximum viewer count, the more virtual machine position control instructions may be determined to be available.
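A sketch of such level gating, following the ranges given above (the thresholds mirror the example; which command types each range unlocks is an assumption for illustration):

```python
# Illustrative sketch only; the unlock rules follow the level ranges above.
def allowed_command_types(anchor_level):
    if anchor_level < 5:
        return set()               # no virtual machine position control instructions
    if anchor_level <= 10:
        return {"scale"}           # only part of the instructions
    return {"scale", "angle"}      # any instruction

def accept_command(command, anchor_level):
    """Accept the instruction only if every field it carries is unlocked."""
    return set(command) <= allowed_command_types(anchor_level)

print(accept_command({"angle": 180.0}, anchor_level=7))  # False: angle locked at this level
print(accept_command({"scale": 2.0}, anchor_level=12))   # True
```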
After step S130 determines whether the virtual machine position control instruction has been obtained, step S150 may be executed when the instruction has been obtained. When it is determined that the instruction has not been obtained, the handling is not limited; in this embodiment, the following step may be performed: controlling the avatar according to the motion control instruction alone.
That is, while the anchor is streaming, if the virtual machine position control instruction is obtained, the avatar is controlled according to both the virtual machine position control instruction and the motion control instruction; if not, the avatar is controlled according to the motion control instruction only.
Optionally, the manner of executing step S150 to control the avatar according to the virtual machine position control instruction and the motion control instruction is not limited and may be chosen according to the actual application requirements, such as the performance of the processor 14 and the required control precision of the avatar.
For example, in an alternative example, step S150 may include the following steps: controlling the display posture of the avatar in the live frame according to the motion control instruction; and controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction.
That is, on the one hand, the display posture of the avatar may be controlled according to the motion control instruction; on the other hand, on the basis of that display posture, the display size and/or display angle of the avatar in that posture may be controlled based on the obtained virtual machine position control instruction.
For example, if the anchor is currently dancing, the avatar may be controlled to dance based on the motion control instruction; if the virtual machine position control instruction is also obtained, different display sizes and/or display angles of the dancing avatar can be controlled according to that instruction.
The display postures may include, but are not limited to, motions such as kicking, clapping, bending, shaking the shoulders, and shaking the head, and expressions such as frowning, laughing, smiling, and anger. The manner of controlling the avatar is likewise not limited; in an alternative example, control may be performed based on predetermined feature points.
In detail, a preset number of feature points (e.g., 500) may be determined in advance on the avatar's three-dimensional model, together with the coordinates of each feature point. Then, after the anchor's video stream is acquired, the coordinates of the feature points on the three-dimensional model are adjusted according to the position changes of the corresponding feature points (also 500) in the extracted video frames; for example, the coordinates of the avatar's hand feature points may be adjusted according to the position changes of the anchor's hand feature points, and the coordinates of the avatar's face feature points according to the position changes of the anchor's face feature points. Finally, a new three-dimensional model is obtained from the adjusted coordinates, thereby realizing control of the avatar.
As time passes, new three-dimensional models are generated continuously, producing a live stream of the avatar; this live stream then only needs to be pushed to the communicatively connected first terminal and second terminal for display. The second terminal may be a terminal device (such as a mobile phone, tablet computer, or computer) used by a viewer to watch the avatar live stream (i.e., to play the live stream).
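As a simplified illustration of this feature-point pipeline (the displacement-transfer rule below is an assumption for the sketch; a production system would also map 2D image coordinates into model space):

```python
# Illustrative sketch only; real tracking output would replace the random arrays.
import numpy as np

def update_model(model_points, prev_frame_points, curr_frame_points):
    """Shift each 3D model feature point by the displacement observed for the
    corresponding anchor feature point between two video frames."""
    displacement = curr_frame_points - prev_frame_points  # (N, 3) per-point motion
    return model_points + displacement                    # new three-dimensional model

rng = np.random.default_rng(0)
model = rng.random((500, 3))                  # 500 predetermined feature points
prev, curr = rng.random((500, 3)), rng.random((500, 3))
print(update_model(model, prev, curr).shape)  # (500, 3)
```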
Furthermore, when the display size of the avatar in the live frame is controlled, the specific control manner is not limited and may be chosen according to the actual application requirements.
For example, in an alternative example, whenever the virtual machine position control instruction is obtained, the avatar's display size is enlarged or reduced by a fixed factor, such as 0.5, 1.5, 2, or another factor. Alternatively, the display size is scaled by a fixed factor whenever the obtained virtual machine position control instruction contains control information about the display size.
As another example, after the virtual machine position control instruction is obtained, the avatar's display size may be controlled based on the proportion information in the instruction.
In detail, when the virtual machine position control instruction includes proportion information, the step of controlling the display size and/or display angle of the avatar in the live frame according to the instruction includes: determining the display size of the avatar in the live frame according to the proportion information and the initial size of the avatar.
For example, if the proportion information is 2, the avatar is scaled to 2 times its initial size (as shown in fig. 4), creating a close-up effect; if the proportion information is 0.5, the avatar is scaled to 0.5 times its initial size (as shown in fig. 5), creating a long-shot effect.
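The proportion rule reduces to a single multiplication (sketch only; the function name and sizes are illustrative):

```python
# Illustrative sketch only: display size = initial size x proportion information.
def display_size(initial_size, proportion):
    return initial_size * proportion

print(display_size(100.0, 2.0))  # 200.0 -> close-up effect (fig. 4)
print(display_size(100.0, 0.5))  # 50.0  -> long-shot effect (fig. 5)
```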
Likewise, when the display angle of the avatar in the live frame is controlled, the specific control manner is not limited and may be chosen according to the actual application requirements.
For example, in an alternative example, whenever the virtual machine position control instruction is obtained, the avatar is controlled to be displayed at a specific display angle, such as the left side, right side, or back. Alternatively, the avatar is displayed at a specific angle whenever the obtained virtual machine position control instruction contains control information about the display angle.
As another example, after the virtual machine position control instruction is obtained, the avatar's display angle may be controlled based on the angle information in the instruction.
In detail, when the virtual machine position control instruction includes angle information, the step of controlling the display size and/or display angle of the avatar in the live frame according to the instruction includes: determining the display angle of the avatar in the live frame according to the angle information, and obtaining new three-dimensional image data for the avatar at that display angle based on the motion control instruction and the pre-constructed three-dimensional image data.
That is, after the angle information is acquired, feature information for the anchor's back, sides, and other angles can be computed with an inverse kinematics algorithm from the front-view feature information of the anchor in the current video frame of the video stream. The three-dimensional image data (three-dimensional model) pre-constructed for the avatar may then be adjusted based on the feature information obtained for each angle, yielding new three-dimensional image data. Finally, the portion of the new three-dimensional image data corresponding to the angle information is extracted and sent to the second terminal for rendering and display.
For example, if the angle information is 90°, the avatar is controlled to display its left side; if it is 180°, the avatar displays its back (as shown in fig. 6); and if it is 270°, the avatar displays its right side.
It should be noted that when switching from the current display angle to another display angle, the result may be shown directly, or the switching process itself may be shown. For example, if the angle information of the avatar's current video frame is 0° (front) and angle information of 90° is received, the avatar's left side may be shown in the very next video frame; alternatively, subsequent video frames may show the views at 10°, 20°, 30°, and so on up to 90°, then hold at 90°, creating the appearance of the virtual camera position rotating and making the stage performance more realistic.
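The gradual rotation could be produced by interpolating intermediate angles across successive frames, as in this sketch (the 10° step size and function names are assumptions for illustration):

```python
# Illustrative sketch only: intermediate display angles for a smooth camera sweep.
def rotation_steps(current, target, step=10.0):
    """Yield the display angle for each successive video frame until the target
    angle is reached, so the virtual machine position appears to rotate."""
    angle = current
    while angle != target:
        # advance toward the target, clamping so we never overshoot
        angle = min(angle + step, target) if target > angle else max(angle - step, target)
        yield angle

print(list(rotation_steps(0.0, 90.0)))
# [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0]
```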
With reference to fig. 7, an embodiment of the present application further provides an avatar control apparatus 100 applicable to the electronic device 10. The avatar control apparatus 100 may include a control instruction generation module 110, a control instruction judgment module 130, and an avatar control module 150.
The control instruction generation module 110 is configured to analyze a video stream captured of the anchor to generate a motion control instruction. In this embodiment, the control instruction generation module 110 may be configured to execute step S110 shown in fig. 2; for details, refer to the description of step S110 above.
The control instruction judgment module 130 is configured to determine whether a virtual machine position control instruction generated based on the anchor has been obtained. In this embodiment, the control instruction judgment module 130 may be configured to execute step S130 shown in fig. 2; for details, refer to the description of step S130 above.
The avatar control module 150 is configured to control the avatar according to the virtual machine position control instruction and the motion control instruction when the virtual machine position control instruction has been obtained. In this embodiment, the avatar control module 150 may be configured to execute step S150 shown in fig. 2; for details, refer to the description of step S150 above.
When the control instruction judgment module 130 determines that the virtual machine position control instruction has not been obtained, the avatar control module 150 is further configured to control the avatar according to the motion control instruction alone.
Corresponding to the avatar control method described above, an embodiment of the present application also provides a computer-readable storage medium in which a computer program is stored; when run, the computer program executes the steps of the avatar control method.
The steps executed when the computer program runs are not described again here; refer to the explanation of the avatar control method above.
In summary, with the avatar control method, the avatar control apparatus 100, and the electronic device 10 provided by the present application, on the basis of controlling the avatar from a video stream captured of the anchor, if a virtual machine position control instruction generated based on the anchor is also obtained, the avatar can be controlled jointly with that instruction so that it is shown from different camera positions. This creates the effect of a stage performance, makes the avatar display more engaging, and improves the user experience during an avatar live stream.
The above description covers only preferred embodiments of the present application and is not intended to limit the present application; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (15)

1. An avatar control method, applied to a live streaming device and used to control an avatar displayed in a live frame, the method comprising:
analyzing a video stream captured of an anchor to generate a motion control instruction;
determining whether a virtual machine position control instruction generated based on the anchor has been obtained; and
if the virtual machine position control instruction has been obtained, controlling the avatar according to the virtual machine position control instruction and the motion control instruction;
wherein the virtual machine position control instruction is used to control the display size and/or display angle of the avatar, and the motion control instruction is used to control the motion of the avatar.
2. The avatar control method according to claim 1, wherein the step of controlling the avatar according to the virtual machine position control instruction and the motion control instruction comprises:
controlling the display posture of the avatar in the live frame according to the motion control instruction.
3. The avatar control method according to claim 2, wherein the virtual machine position control instruction includes angle information, and the step of controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction comprises:
determining the display angle of the avatar in the live frame according to the angle information, and obtaining new three-dimensional image data for the avatar at that display angle based on the motion control instruction and pre-constructed three-dimensional image data.
4. The avatar control method according to claim 2, wherein the virtual machine position control instruction includes proportion information, and the step of controlling the display size and/or display angle of the avatar in the live frame according to the virtual machine position control instruction comprises:
determining the display size of the avatar in the live frame according to the proportion information and the initial size of the avatar.
5. The avatar control method according to claim 1, wherein the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained comprises:
determining whether a virtual machine position control instruction generated based on an operation of the anchor has been obtained.
6. The avatar control method according to claim 5, wherein the step of determining whether a virtual machine position control instruction generated based on an operation of the anchor has been obtained comprises:
when voice information generated based on an operation of the anchor is received, determining whether preset information is present in the voice information, and, when the preset information is present, determining that a virtual machine position control instruction generated based on an operation of the anchor has been obtained.
7. The avatar control method according to claim 6, wherein the preset information includes keyword information and/or melody feature information.
8. The avatar control method according to claim 1, wherein the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained comprises:
determining, based on the result of analyzing a video frame captured of the anchor, whether a virtual machine position control instruction generated based on the anchor has been obtained.
9. The avatar control method according to claim 8, wherein the step of determining, based on the result of analyzing a video frame captured of the anchor, whether a virtual machine position control instruction generated based on the anchor has been obtained comprises:
determining, based on image information extracted from the video frame captured of the anchor, whether preset information is present in the image information, and, when the preset information is present, determining that a virtual machine position control instruction generated based on the anchor has been obtained.
10. The avatar control method according to claim 9, wherein the preset information includes motion information, depth information, identification object information, and/or identification color information.
11. The avatar control method according to any one of claims 1-10, wherein the step of determining whether a virtual machine position control instruction generated based on the anchor has been obtained comprises:
determining, based on a preset condition, whether a virtual machine position control instruction generated based on the anchor has been obtained, wherein the preset condition is determined based on historical live streaming data of the anchor.
12. The avatar control method according to any one of claims 1-10, wherein the step of analyzing a video stream captured of the anchor to generate a motion control instruction comprises:
performing image analysis on each video frame in the video stream captured of the anchor, and generating a motion control instruction according to the image analysis result of each video frame; or
extracting a current video frame from the video stream captured of the anchor at a preset period, performing image analysis on the current video frame, and generating a motion control instruction according to the image analysis result of the current video frame.
13. An avatar control apparatus, applied to a live streaming device and used to control an avatar displayed in a live frame, the apparatus comprising:
a control instruction generation module, configured to analyze a video stream captured of an anchor to generate a motion control instruction;
a control instruction judgment module, configured to determine whether a virtual machine position control instruction generated based on the anchor has been obtained; and
an avatar control module, configured to control the avatar according to the virtual machine position control instruction and the motion control instruction when the virtual machine position control instruction has been obtained;
wherein the virtual machine position control instruction is used to control the display size and/or display angle of the avatar, and the motion control instruction is used to control the motion of the avatar.
14. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when run on the processor, implements the avatar control method according to any one of claims 1-12.
15. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed, implements the avatar control method according to any one of claims 1-12.
CN201910358491.XA 2019-04-30 2019-04-30 Avatar control method, avatar control device and electronic equipment Active CN110119700B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910358491.XA CN110119700B (en) 2019-04-30 2019-04-30 Avatar control method, avatar control device and electronic equipment
US17/605,476 US20220214797A1 (en) 2019-04-30 2020-04-27 Virtual image control method, apparatus, electronic device and storage medium
PCT/CN2020/087139 WO2020221186A1 (en) 2019-04-30 2020-04-27 Virtual image control method, apparatus, electronic device and storage medium
SG11202111640RA SG11202111640RA (en) 2019-04-30 2020-04-27 Virtual image control method, apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910358491.XA CN110119700B (en) 2019-04-30 2019-04-30 Avatar control method, avatar control device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110119700A CN110119700A (en) 2019-08-13
CN110119700B (en) 2020-05-15

Family

ID=67521670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910358491.XA Active CN110119700B (en) 2019-04-30 2019-04-30 Avatar control method, avatar control device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110119700B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110662083B (en) 2019-09-30 2022-04-22 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN110850983B (en) * 2019-11-13 2020-11-24 腾讯科技(深圳)有限公司 Virtual object control method and device in video live broadcast and storage medium
CN111246225B (en) * 2019-12-25 2022-02-08 北京达佳互联信息技术有限公司 Information interaction method and device, electronic equipment and computer readable storage medium
CN111265879B (en) * 2020-01-19 2023-08-08 百度在线网络技术(北京)有限公司 Avatar generation method, apparatus, device and storage medium
CN111312240A (en) * 2020-02-10 2020-06-19 北京达佳互联信息技术有限公司 Data control method and device, electronic equipment and storage medium
CN112637622A (en) * 2020-12-11 2021-04-09 北京字跳网络技术有限公司 Live broadcasting singing method, device, equipment and medium
CN113099298B (en) * 2021-04-08 2022-07-12 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN117221465B (en) * 2023-09-20 2024-04-16 北京约来健康科技有限公司 Digital video content synthesis method and system
CN117395510B (en) * 2023-12-12 2024-02-06 湖南快乐阳光互动娱乐传媒有限公司 Virtual machine position control method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103961869A (en) * 2014-04-14 2014-08-06 林云帆 Device control method
CN106227417A (en) * 2015-09-01 2016-12-14 深圳创锐思科技有限公司 A kind of three-dimensional user interface exchange method, device, display box and system thereof
CN106569771A (en) * 2015-10-09 2017-04-19 百度在线网络技术(北京)有限公司 Object control method and apparatus
CN108197589A (en) * 2018-01-19 2018-06-22 北京智能管家科技有限公司 Semantic understanding method, apparatus, equipment and the storage medium of dynamic human body posture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6516642B2 (en) * 2015-09-17 2019-05-22 アルパイン株式会社 Electronic device, image display method and image display program
CN106445131B (en) * 2016-09-18 2018-10-02 腾讯科技(深圳)有限公司 Virtual target operating method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103961869A (en) * 2014-04-14 2014-08-06 林云帆 Device control method
CN106227417A (en) * 2015-09-01 2016-12-14 深圳创锐思科技有限公司 A kind of three-dimensional user interface exchange method, device, display box and system thereof
CN106569771A (en) * 2015-10-09 2017-04-19 百度在线网络技术(北京)有限公司 Object control method and apparatus
CN108197589A (en) * 2018-01-19 2018-06-22 北京智能管家科技有限公司 Semantic understanding method, apparatus, equipment and the storage medium of dynamic human body posture

Also Published As

Publication number Publication date
CN110119700A (en) 2019-08-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant