CN114049418A - Live broadcasting method and system based on virtual anchor - Google Patents

Live broadcasting method and system based on virtual anchor

Info

Publication number
CN114049418A
Authority
CN
China
Prior art keywords
data
virtual anchor
accelerometer
acceleration
attitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111346631.5A
Other languages
Chinese (zh)
Inventor
段保莉
王鹏
徐业
李小超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toswim Beijing Technology Development Co ltd
Original Assignee
Toswim Beijing Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toswim Beijing Technology Development Co ltd
Priority to CN202111346631.5A
Publication of CN114049418A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441: Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415: Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a live broadcasting method and system based on a virtual anchor. The method comprises: acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model; filtering and fusing the data collected by an accelerometer, a gyroscope and a magnetometer to obtain an attitude quaternion for each joint of the real-person model; generating joint-point coordinate information from the attitude quaternions; performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor; and broadcasting live in real time with the virtual anchor. By acquiring motion-capture data with dedicated motion-capture equipment, accurate and stable motion capture is achieved, and by motion-matching the captured data to the digital-human model, a virtual anchor is obtained that reproduces the anchor's real expressions and movements with low delay.

Description

Live broadcasting method and system based on virtual anchor
Technical Field
The invention relates to the technical field of network live broadcast, in particular to a live broadcast method and system based on a virtual anchor.
Background
With the continuous development of the mobile internet, network live broadcasting technology has advanced rapidly. Among today's many live broadcast formats, broadcasting through a virtual character has become popular. In the current virtual live broadcast mode, a real person interacts with the audience and the virtual character performs the corresponding actions. During this interaction, however, limitations of the virtual character or of the motion-capture equipment prevent the anchor's movements from blending seamlessly into the virtual scene: the picture lacks harmony, the audience's sense of immersion suffers, and the overall interactive effect of the broadcast is poor.
Disclosure of Invention
To solve the above problem, embodiments of the present invention provide a live broadcasting method and system based on a virtual anchor, aimed at the poor harmony of virtual live broadcast pictures.
A live broadcasting method based on a virtual anchor comprises the following steps:
Step 1: acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model, the motion-capture data comprising data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer, and face-capture data;
Step 2: filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
Step 3: generating joint-point coordinate information from the attitude quaternions;
Step 4: performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor;
Step 5: broadcasting live in real time with the virtual anchor.
Preferably, step 4, performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor, comprises:
Step 4.1: performing expression matching of the face-capture data against the preset digital-human model to obtain a digital human with matched expressions;
Step 4.2: matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
Preferably, step 2, filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model, comprises:
Step 2.1: deriving the transfer formula of the rotation matrix from the data collected by the accelerometer, the gyroscope and the magnetometer;
Step 2.2: constructing a fused attitude matrix from the filter characteristics;
Step 2.3: substituting the transfer formula of the rotation matrix into the fused attitude matrix to obtain the filter fusion model;
Step 2.4: performing attitude calculation with the filter fusion model to obtain the calibrated angular velocity vector;
Step 2.5: obtaining the attitude quaternion of each joint of the real-person model from the calibrated angular velocity vector.
Preferably, step 2.1, deriving the transfer formula of the rotation matrix from the data collected by the accelerometer, the gyroscope and the magnetometer, comprises adopting the formula:

$$\dot{R} = R \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

to obtain the transfer formula of the rotation matrix, where $R$ denotes the rotation matrix and $\omega_x$, $\omega_y$ and $\omega_z$ denote the angular velocities about the x, y and z axes.
Preferably, step 2.2, constructing a fused attitude matrix from the filter characteristics, comprises adopting the complementary-filter formula:

$$\hat{R}(s) = \frac{C(s)}{s + C(s)}\, R_{am}(s) + \frac{s}{s + C(s)}\, R_{\omega}(s), \qquad R_{am}(s) = R(s) + \mu_H(s), \quad R_{\omega}(s) = R(s) + \mu_L(s)$$

to construct the fused attitude matrix, where $\hat{R}(s)$ denotes the fused attitude rotation matrix, $R_{am}(s)$ the attitude rotation matrix observed by the accelerometer and magnetometer, $R_{\omega}(s)$ the attitude rotation matrix obtained from the gyroscope, $R(s)$ the actual attitude rotation matrix, $\mu_H$ the high-frequency noise, $\mu_L$ the low-frequency accumulated error, and $C(s)$ the ideal PID controller transfer function.
Preferably, step 2.4, performing attitude calculation with the filter fusion model to obtain the calibrated angular velocity vector, comprises:
Step 2.4.1: obtaining the acceleration error from the data collected by the accelerometer and the gyroscope;
Step 2.4.2: obtaining the magnetic-vector error from the magnetic vector of the actual geographic position and the magnetic vector output by the magnetometer;
Step 2.4.3: acquiring the acceleration filtering thresholds and the magnetic-vector filtering thresholds;
Step 2.4.4: denoising the acceleration error and the magnetic-vector error with the acceleration and magnetic-vector filtering thresholds to obtain the calibrated angular velocity vector.
Preferably, step 2.4.1, obtaining the acceleration error from the data collected by the accelerometer and the gyroscope, comprises adopting the cross-product formula:

$$e_a = a \times v = \begin{bmatrix} a_y v_z - a_z v_y \\ a_z v_x - a_x v_z \\ a_x v_y - a_y v_x \end{bmatrix}$$

to obtain the acceleration error, where $a_x$, $a_y$ and $a_z$ denote the accelerations collected by the accelerometer along the x, y and z directions, and $v_x$, $v_y$ and $v_z$ denote the components of the gravity direction predicted from the gyroscope-derived attitude.
Preferably, step 2.4.4, denoising the acceleration error and the magnetic-vector error with the acceleration and magnetic-vector filtering thresholds to obtain the calibrated angular velocity vector, comprises adopting the formula:

$$\omega' = \omega + \left(k_{p1} + \frac{k_{L1}}{s}\right) e_a + \left(k_{p2} + \frac{k_{L2}}{s}\right) e_{\psi}$$

to denoise the acceleration error and the magnetic-vector error, where $k_{p1}$ denotes the first acceleration filtering threshold, $k_{L1}$ the second acceleration filtering threshold, $k_{p2}$ the first magnetic-vector filtering threshold, $k_{L2}$ the second magnetic-vector filtering threshold, and $e_{\psi}$ the magnetic-vector error.
The invention also provides a live broadcasting system based on a virtual anchor, comprising:
a motion-capture data acquisition module for acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model, the motion-capture data comprising data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer, and face-capture data;
a filtering fusion module for filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
a joint-point coordinate calculation module for generating joint-point coordinate information from the attitude quaternions;
a virtual anchor generation module for performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor;
and a real-time live broadcast module for broadcasting live in real time with the virtual anchor.
Preferably, the virtual anchor generation module comprises:
an expression matching unit for performing expression matching of the face-capture data against the preset digital-human model to obtain a digital human with matched expressions;
and a virtual anchor generation unit for matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
The invention provides a live broadcasting method and system based on a virtual anchor, the method comprising: acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model; filtering and fusing the data collected by an accelerometer, a gyroscope and a magnetometer to obtain an attitude quaternion for each joint of the real-person model; generating joint-point coordinate information from the attitude quaternions; performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor; and broadcasting live in real time with the virtual anchor. By acquiring motion-capture data with dedicated motion-capture equipment, accurate and stable motion capture is achieved, and by motion-matching the captured data to the digital-human model, a virtual anchor is obtained that reproduces the anchor's real expressions and movements with low delay.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a live broadcast method based on a virtual anchor in an embodiment of the present invention;
fig. 2 is a flowchart of a live broadcast method based on a virtual anchor in an embodiment of the present invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted", "connected", "secured" and the like are to be construed broadly: for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
Embodiments of the present invention provide a live broadcasting method and system based on a virtual anchor, aimed at the poor harmony of virtual live broadcast pictures.
Referring to fig. 1-2, a live broadcasting method based on a virtual anchor includes:
s1: acquiring dynamic capture data generated by capturing the actions of the real model by dynamic capture equipment; the dynamic capture data comprises data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer and paving data;
in practical application, the kinetic capture device adopts inertia + optical mixing device (6 optical devices, 1 set of inertia kinetic capture device and a pair of gloves). The inertial hybrid sensor collects motion capture data by combining a reverse dynamics algorithm, action mode recognition and behavior prediction, is more accurate compared with the traditional pure inertial or pure optical equipment, and can realize low-delay and anti-interference motion capture; when the action of the real-person model is captured, the receiver of the moving capture device is connected with a computer, and then the optical devices are placed at the positions, 1 in the left and right directions, of the front and the back of the real-person model respectively, and 1.5 meters away from the real-person model, and are used for collecting paving data; the inertial moving and capturing device consists of a plurality of accelerometers, gyroscopes and magnetometers and is divided into body wearing equipment and gloves, wherein the gloves are worn firstly, and then the body wearing equipment is worn; when the real person model is worn, the action can be captured.
S2: filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
Further, S2 specifically includes:
S2.1: deriving the transfer formula of the rotation matrix from the data collected by the accelerometer, the gyroscope and the magnetometer;
Specifically, the formula

$$\dot{R} = R \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

gives the transfer formula of the rotation matrix, where $R$ denotes the rotation matrix and $\omega_x$, $\omega_y$ and $\omega_z$ denote the angular velocities about the x, y and z axes.
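As a minimal, hedged illustration of how such a transfer formula is applied in practice, the sketch below propagates a rotation matrix from raw gyroscope rates by one Euler step (the function names, the Euler scheme and the SVD re-orthonormalisation are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x, chosen so that skew(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def propagate_rotation(R, w, dt):
    """One Euler step of the transfer formula dR/dt = R [w]x,
    followed by an SVD projection so R stays a valid rotation matrix."""
    R = R + R @ skew(w) * dt
    u, _, vt = np.linalg.svd(R)
    return u @ vt

# Usage: start from the identity attitude and feed one gyro sample (rad/s).
R = np.eye(3)
R = propagate_rotation(R, np.array([0.01, -0.02, 0.005]), dt=0.01)
```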
S2.2: constructing a fused attitude matrix from the filter characteristics;
Specifically, S2.2 adopts the complementary-filter formula:

$$\hat{R}(s) = \frac{C(s)}{s + C(s)}\, R_{am}(s) + \frac{s}{s + C(s)}\, R_{\omega}(s), \qquad R_{am}(s) = R(s) + \mu_H(s), \quad R_{\omega}(s) = R(s) + \mu_L(s)$$

to construct the fused attitude matrix, where $\hat{R}(s)$ denotes the fused attitude rotation matrix, $R_{am}(s)$ the attitude rotation matrix observed by the accelerometer and magnetometer, $R_{\omega}(s)$ the attitude rotation matrix obtained from the gyroscope, $R(s)$ the actual attitude rotation matrix, $\mu_H$ the high-frequency noise, $\mu_L$ the low-frequency accumulated error, and $C(s)$ the ideal PID controller transfer function; $s = \sigma + j\omega$ denotes a point in the complex frequency domain.
It should be noted that the accelerometer, gyroscope and magnetometer differ in their resistance to interference. A gyroscope drifts with prolonged use and therefore accumulates error. The accelerometer and magnetometer have comparatively poor dynamic performance; they accumulate no error over time but are easily disturbed by high-frequency noise. Because the data of the three sensors compensate one another in the frequency domain, a filter can fuse them effectively, suppressing the high-frequency noise that disturbs the accelerometer and magnetometer while compensating the low-frequency accumulated error produced by gyro drift, thereby improving the accuracy of the attitude calculation.
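In discrete form this frequency-domain compensation is a complementary filter: the integrated gyro path is effectively high-passed (rejecting its low-frequency drift) while the accelerometer/magnetometer path is low-passed (rejecting its high-frequency noise). A minimal per-angle sketch, where the crossover constant tau and the function shape are assumptions for illustration only:

```python
def complementary_update(angle, gyro_rate, am_angle, dt, tau=0.5):
    """One step of a first-order complementary filter.

    angle     : previous fused attitude angle estimate (rad)
    gyro_rate : angular rate from the gyroscope (rad/s); drifts slowly
    am_angle  : angle observed by the accelerometer/magnetometer; noisy
    tau       : crossover time constant defining the blend weight
    """
    alpha = tau / (tau + dt)
    # High-pass the integrated gyro, low-pass the accel/mag observation.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * am_angle
```

The same split is what the fused attitude matrix above expresses in the s-domain: C(s)/(s + C(s)) is the low-pass weight on the accelerometer/magnetometer attitude and s/(s + C(s)) the high-pass weight on the gyro attitude.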
S2.3: substituting the transfer formula of the rotation matrix into the fused attitude matrix to obtain the filter fusion model;
S2.4: performing attitude calculation with the filter fusion model to obtain the calibrated angular velocity vector;
In the present invention, S2.4 specifically includes:
S2.4.1: obtaining the acceleration error from the data collected by the accelerometer and the gyroscope;
Further, S2.4.1 adopts the cross-product formula:

$$e_a = a \times v = \begin{bmatrix} a_y v_z - a_z v_y \\ a_z v_x - a_x v_z \\ a_x v_y - a_y v_x \end{bmatrix}$$

to obtain the acceleration error, where $a_x$, $a_y$ and $a_z$ denote the accelerations collected by the accelerometer along the x, y and z directions, and $v_x$, $v_y$ and $v_z$ denote the components of the gravity direction predicted from the gyroscope-derived attitude.
S2.4.2: obtaining the magnetic-vector error from the magnetic vector of the actual geographic position and the magnetic vector output by the magnetometer; in the invention, the magnetic-vector error is obtained by cross-multiplying the magnetic vector of the actual geographic position with the magnetic vector output by the magnetometer.
S2.4.3: acquiring the acceleration filtering thresholds and the magnetic-vector filtering thresholds;
To eliminate the high-frequency noise and the low-frequency accumulated error produced by the sensors, suitable threshold values must be selected according to the actual conditions.
S2.4.4: denoising the acceleration error and the magnetic-vector error with the acceleration and magnetic-vector filtering thresholds to obtain the calibrated angular velocity vector.
Specifically, S2.4.4 adopts the formula:

$$\omega' = \omega + \left(k_{p1} + \frac{k_{L1}}{s}\right) e_a + \left(k_{p2} + \frac{k_{L2}}{s}\right) e_{\psi}$$

to denoise the acceleration error and the magnetic-vector error and obtain the calibrated angular velocity vector $\omega'$, where $k_{p1}$ denotes the first acceleration filtering threshold, $k_{L1}$ the second acceleration filtering threshold, $k_{p2}$ the first magnetic-vector filtering threshold, $k_{L2}$ the second magnetic-vector filtering threshold, and $e_{\psi}$ the magnetic-vector error. It should be noted that the filtering thresholds must satisfy $C(s) = k_p + k_L/s$, and $k_p$ is generally 10 to 80 times larger than $k_L$.
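Steps 2.4.1 to 2.4.4 together amount to a PI-style correction of the raw gyro rates, in the spirit of a Mahony filter. The sketch below is an assumed reading of those steps: the reference vectors v_hat (predicted gravity direction) and m_hat (reference magnetic direction), the default gains and the function signature are all illustrative, with the kp/kl pairs playing the role of the first/second filtering thresholds and the integral term realising the k_L/s part of C(s):

```python
import numpy as np

def calibrated_rate(gyro, accel, v_hat, mag, m_hat, integ, dt,
                    kp1=1.0, kl1=0.05, kp2=0.5, kl2=0.02):
    """Return the calibrated angular velocity vector and updated integral state."""
    a = accel / np.linalg.norm(accel)      # measured gravity direction
    m = mag / np.linalg.norm(mag)          # measured magnetic direction
    e_a = np.cross(a, v_hat)               # acceleration error (step 2.4.1)
    e_psi = np.cross(m_hat, m)             # magnetic-vector error (step 2.4.2)
    integ = integ + (kl1 * e_a + kl2 * e_psi) * dt   # the k_L/s term
    # Proportional plus integral feedback on both errors (step 2.4.4).
    return gyro + kp1 * e_a + kp2 * e_psi + integ, integ
```

Note how each kp gain is kept well above its kl companion, matching the 10 to 80 ratio stated above.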
S2.5: obtaining the attitude quaternion of each joint of the real-person model from the calibrated angular velocity vector.
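The calibrated rate vector is turned into the attitude quaternion through the standard kinematic relation q' = ½ q ⊗ (0, ω). A sketch of one integration step; the Euler scheme and the (w, x, y, z) component order are assumptions:

```python
import numpy as np

def integrate_quaternion(q, w, dt):
    """One Euler step of dq/dt = 0.5 * q (x) (0, w), then renormalise."""
    qw, qx, qy, qz = q
    wx, wy, wz = w
    dq = 0.5 * np.array([
        -qx * wx - qy * wy - qz * wz,   # scalar part of q (x) (0, w)
         qw * wx + qy * wz - qz * wy,
         qw * wy - qx * wz + qz * wx,
         qw * wz + qx * wy - qy * wx,
    ])
    q = q + dq * dt
    return q / np.linalg.norm(q)        # keep q a unit quaternion
```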
S3: generating joint-point coordinate information from the attitude quaternions;
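Given one fused quaternion per joint, joint-point coordinates follow by walking the skeleton hierarchy and rotating each rest-pose bone offset by its parent's attitude. The data layout below (a parents array and rest-pose offsets) is an assumed convention, not the patent's skeleton format:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def joint_positions(quats, offsets, parents):
    """Accumulate world-space joint positions root-to-leaf.

    quats[i]   : fused attitude quaternion of joint i
    offsets[i] : bone vector from parent(i) to joint i in the rest pose
    parents[i] : index of the parent joint, -1 for the root; parents
                 must be ordered so a parent precedes its children
    """
    pos = [np.zeros(3) for _ in parents]
    for i, p in enumerate(parents):
        if p >= 0:
            pos[i] = pos[p] + quat_rotate(quats[p], offsets[i])
    return pos
```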
s4: performing action matching by using the joint point coordinate information and the paving data and a preset digital human model to obtain a virtual anchor;
s4 specifically includes:
s4.1: performing expression matching by using the paving data and a preset digital human model to obtain a digital human with the expression matching completed;
furthermore, 52 facial expressions with different postures are firstly made (the number of the facial expressions can be adjusted in real time according to actual requirements), the facial expressions are packaged into a dynamic deformation package by using 3D software, an FBX file is exported, the FBX file is imported into a Unity editor, an ARKit Face Actor Face capturing BS control component is added, and meanwhile, a Face capturing data binding template Face Mapper is created by using Live Capture. It should be noted that the template supports rough name retrieval, can perform keyword matching by one key, and also supports manual custom adjustment of the binding relationship when capturing a key point and a model BS control point on the binding surface. After the binding is determined to be correct, a virtual camera is created through Live Capture, an ARKit Face Actor of a model is added into a Capture drive, a mobile phone program Face Capture is opened, a computer and a mobile phone are in the same local area network, the mobile phone program Face Capture is paired with a Unity editor, after the Face Capture is paired with the Unity editor, Face Capture can be opened, real-time data transmission is carried out, and therefore real-time facial expression drive of a digital person in Unity is achieved.
To address the current problems of demanding face-capture technology and equipment, difficult integration of captured data and chaotic market pricing, face capture here uses Live Capture, the plug-in newly released by Unity with Apple ARKit as its technical foundation, together with Unity's iOS phone app Face Capture; 52 Blend Shape (BS) expressions conforming to the ARKit technical standard are produced, so that the face-capture data binds quickly and conveniently to the digital-human model's blend shapes, the captured data is transmitted to the Unity editor in real time, and the digital human's facial expression can be driven without perceptible delay.
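The one-key keyword matching between captured ARKit blend-shape names and the model's BS control points can be pictured with a toy matcher such as the following; this is purely illustrative, not the Live Capture or Face Mapper API, and the case-insensitive substring rule is an assumption:

```python
def match_blendshapes(arkit_names, model_names):
    """Pair ARKit blend-shape names with model BS controls by
    case-insensitive substring match; unmatched names are returned
    for manual binding, mirroring the workflow described above."""
    bindings, unmatched = {}, []
    for name in arkit_names:
        hits = [m for m in model_names if name.lower() in m.lower()]
        if hits:
            bindings[name] = hits[0]   # first hit wins in this toy version
        else:
            unmatched.append(name)
    return bindings, unmatched

# e.g. match_blendshapes(["jawOpen", "eyeBlinkLeft"],
#                        ["Face_jawOpen", "Face_eyeBlinkLeft", "Face_smile"])
```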
S4.2: matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
S5: broadcasting live in real time with the virtual anchor.
Specifically, Unity obtains the joint-point coordinate information returned by the motion-capture device and drives the body and hand joints to perform the real movements, while Live Capture in parallel makes the digital human reproduce the anchor's real expressions.
The final real-time live broadcast effect is achieved through screen capture. Specifically, once the virtual anchor is complete, the Game window of the Unity software is moved to a second monitor (screen B) in a dual-screen computer setup; screen B is connected to a live-broadcast computer through an HDMI capture box, and the live-broadcast computer captures the picture of screen B with live-streaming software for broadcast. This avoids the excessive hardware load that currently arises when a single computer simultaneously captures motion and facial data and renders the picture.
In the invention, the motion-capture equipment transmits its captured data to Unity through third-party motion-capture software, while Face Capture (the Apple mobile app) transmits the face-capture data to Unity, so that Unity captures the body, hands and facial expressions of the real-person model at the same time and the digital human is broadcast live in real time. The digital human can thus be driven for live broadcast using only official Unity plug-ins, without additional techniques; this greatly reduces the technical complexity of development, guarantees the fluency of the broadcast, and allows the camera position, picture filters, character materials, prop animations and the like to be customised in real time during the broadcast.
The invention also provides a live broadcasting system based on a virtual anchor, comprising:
a motion-capture data acquisition module for acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model, the motion-capture data comprising data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer, and face-capture data;
a filtering fusion module for filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
a joint-point coordinate calculation module for generating joint-point coordinate information from the attitude quaternions;
a virtual anchor generation module for performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor;
and a real-time live broadcast module for broadcasting live in real time with the virtual anchor.
Preferably, the virtual anchor generation module comprises:
an expression matching unit for performing expression matching of the face-capture data against the preset digital-human model to obtain a digital human with matched expressions;
and a virtual anchor generation unit for matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
By acquiring motion-capture data with dedicated motion-capture equipment, accurate and stable motion capture is achieved, and by motion-matching the captured data to the digital-human model, a virtual anchor is obtained that reproduces the anchor's real expressions and movements with low delay.
The above description is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the appended claims.

Claims (10)

1. A live broadcasting method based on a virtual anchor, characterised by comprising the following steps:
Step 1: acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model, the motion-capture data comprising data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer, and face-capture data;
Step 2: filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
Step 3: generating joint-point coordinate information from the attitude quaternions;
Step 4: performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor;
Step 5: broadcasting live in real time with the virtual anchor.
2. The virtual anchor-based live broadcasting method according to claim 1, characterised in that step 4, performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor, comprises:
Step 4.1: performing expression matching of the face-capture data against the preset digital-human model to obtain a digital human with matched expressions;
Step 4.2: matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
3. The virtual anchor-based live broadcasting method according to claim 1, characterised in that step 2, filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model, comprises:
Step 2.1: deriving the transfer formula of the rotation matrix from the data collected by the accelerometer, the gyroscope and the magnetometer;
Step 2.2: constructing a fused attitude matrix from the filter characteristics;
Step 2.3: substituting the transfer formula of the rotation matrix into the fused attitude matrix to obtain the filter fusion model;
Step 2.4: performing attitude calculation with the filter fusion model to obtain the calibrated angular velocity vector;
Step 2.5: obtaining the attitude quaternion of each joint of the real-person model from the calibrated angular velocity vector.
4. The virtual anchor-based live broadcasting method according to claim 3, characterised in that step 2.1, deriving the transfer formula of the rotation matrix from the data collected by the accelerometer, the gyroscope and the magnetometer, comprises adopting the formula:

$$\dot{R} = R \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

to obtain the transfer formula of the rotation matrix, where $R$ denotes the rotation matrix and $\omega_x$, $\omega_y$ and $\omega_z$ denote the angular velocities about the x, y and z axes.
5. The virtual anchor-based live broadcasting method according to claim 3, characterised in that step 2.2, constructing a fused attitude matrix from the filter characteristics, comprises adopting the complementary-filter formula:

$$\hat{R}(s) = \frac{C(s)}{s + C(s)}\, R_{am}(s) + \frac{s}{s + C(s)}\, R_{\omega}(s), \qquad R_{am}(s) = R(s) + \mu_H(s), \quad R_{\omega}(s) = R(s) + \mu_L(s)$$

to construct the fused attitude matrix, where $\hat{R}(s)$ denotes the fused attitude rotation matrix, $R_{am}(s)$ the attitude rotation matrix observed by the accelerometer and magnetometer, $R_{\omega}(s)$ the attitude rotation matrix obtained from the gyroscope, $R(s)$ the actual attitude rotation matrix, $\mu_H$ the high-frequency noise, $\mu_L$ the low-frequency accumulated error, and $C(s)$ the ideal PID controller transfer function.
6. The virtual anchor-based live broadcasting method according to claim 5, characterised in that step 2.4, performing attitude calculation with the filter fusion model to obtain the calibrated angular velocity vector, comprises:
Step 2.4.1: obtaining the acceleration error from the data collected by the accelerometer and the gyroscope;
Step 2.4.2: obtaining the magnetic-vector error from the magnetic vector of the actual geographic position and the magnetic vector output by the magnetometer;
Step 2.4.3: acquiring the acceleration filtering thresholds and the magnetic-vector filtering thresholds;
Step 2.4.4: denoising the acceleration error and the magnetic-vector error with the acceleration and magnetic-vector filtering thresholds to obtain the calibrated angular velocity vector.
7. The virtual anchor-based live broadcasting method according to claim 6, characterised in that step 2.4.1, obtaining the acceleration error from the data collected by the accelerometer and the gyroscope, comprises adopting the cross-product formula:

$$e_a = a \times v = \begin{bmatrix} a_y v_z - a_z v_y \\ a_z v_x - a_x v_z \\ a_x v_y - a_y v_x \end{bmatrix}$$

to obtain the acceleration error, where $a_x$, $a_y$ and $a_z$ denote the accelerations collected by the accelerometer along the x, y and z directions, and $v_x$, $v_y$ and $v_z$ denote the components of the gravity direction predicted from the gyroscope-derived attitude.
8. The virtual anchor-based live broadcasting method according to claim 7, characterised in that step 2.4.4, denoising the acceleration error and the magnetic-vector error with the acceleration and magnetic-vector filtering thresholds to obtain the calibrated angular velocity vector, comprises adopting the formula:

$$\omega' = \omega + \left(k_{p1} + \frac{k_{L1}}{s}\right) e_a + \left(k_{p2} + \frac{k_{L2}}{s}\right) e_{\psi}$$

to denoise the acceleration error and the magnetic-vector error and obtain the calibrated angular velocity vector, where $k_{p1}$ denotes the first acceleration filtering threshold, $k_{L1}$ the second acceleration filtering threshold, $k_{p2}$ the first magnetic-vector filtering threshold, $k_{L2}$ the second magnetic-vector filtering threshold, and $e_{\psi}$ the magnetic-vector error.
9. A live broadcasting system based on a virtual anchor, characterised by comprising:
a motion-capture data acquisition module for acquiring motion-capture data generated by a motion-capture device capturing the movements of a real-person model, the motion-capture data comprising data collected by an accelerometer, data collected by a gyroscope, data collected by a magnetometer, and face-capture data;
a filtering fusion module for filtering and fusing the data collected by the accelerometer, the gyroscope and the magnetometer to obtain an attitude quaternion for each joint of the real-person model;
a joint-point coordinate calculation module for generating joint-point coordinate information from the attitude quaternions;
a virtual anchor generation module for performing motion matching of the joint-point coordinate information and the face-capture data against a preset digital-human model to obtain a virtual anchor;
and a real-time live broadcast module for broadcasting live in real time with the virtual anchor.
10. The virtual anchor-based live broadcasting system according to claim 9, characterised in that the virtual anchor generation module comprises:
an expression matching unit for performing expression matching of the face-capture data against the preset digital-human model to obtain a digital human with matched expressions;
and a virtual anchor generation unit for matching the joint-point coordinate information to the body joints of the expression-matched digital human to obtain the virtual anchor.
CN202111346631.5A 2021-11-15 2021-11-15 Live broadcasting method and system based on virtual anchor Pending CN114049418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346631.5A CN114049418A (en) 2021-11-15 2021-11-15 Live broadcasting method and system based on virtual anchor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111346631.5A CN114049418A (en) 2021-11-15 2021-11-15 Live broadcasting method and system based on virtual anchor

Publications (1)

Publication Number Publication Date
CN114049418A 2022-02-15

Family

ID=80208984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111346631.5A Pending CN114049418A (en) 2021-11-15 2021-11-15 Live broadcasting method and system based on virtual anchor

Country Status (1)

Country Link
CN (1) CN114049418A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546366A (en) * 2022-11-23 2022-12-30 北京蔚领时代科技有限公司 Method and system for driving digital person based on different people
CN115546366B (en) * 2022-11-23 2023-02-28 北京蔚领时代科技有限公司 Method and system for driving digital person based on different people


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination