CN111010586B - Artificial-intelligence-based live broadcast method, apparatus, device, and storage medium - Google Patents


Info

Publication number
CN111010586B
Authority
CN
China
Prior art keywords
audio data
image
data packet
text
virtual anchor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911319847.5A
Other languages
Chinese (zh)
Other versions
CN111010586A (en)
Inventor
朱绍明 (Zhu Shaoming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201911319847.5A (granted as CN111010586B)
Priority claimed by related application CN202110184746.2A (granted as CN112995706B)
Publication of CN111010586A (application publication)
Application granted
Publication of CN111010586B (granted publication)
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234336Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an artificial-intelligence-based live broadcast method, apparatus, device, and storage medium. The method includes: receiving a given text for a virtual anchor to perform, and acquiring, in real time according to the given text, audio data and facial features corresponding to the virtual anchor to form at least one media data packet; performing special-effect rendering on the facial features in the media data packet to obtain an image frame set corresponding to the virtual anchor, and combining the audio data to form a push-stream data packet corresponding to the media data packet; and extracting the image frame set and the audio data from the push-stream data packet in real time and pushing them to a client. The method and apparatus can automatically synthesize text data into video in real time and push the video to the client, effectively improving the real-time performance of live broadcasting and reducing its labor cost.

Description

Artificial-intelligence-based live broadcast method, apparatus, device, and storage medium
Technical Field
The present invention relates to artificial intelligence technologies, and in particular, to an artificial-intelligence-based live broadcast method, apparatus, device, and storage medium.
Background
Artificial intelligence (AI) comprises theories, methods, techniques, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results.
With the development of communication technology, network bandwidth has increased greatly, and live video technology has matured and found wide application. With the development of artificial intelligence, text-to-speech and image synthesis have likewise become research hotspots. Combining live video with artificial intelligence is useful in many settings, such as virtual news broadcasting in place of a real person or virtual commentary in place of a game anchor, and has broad application prospects.
Disclosure of Invention
The embodiments of the present invention provide an artificial-intelligence-based live broadcast method, apparatus, device, and storage medium, which can automatically synthesize text data into video in real time and push the video to a client, effectively improving the real-time performance of video playing and reducing the labor cost of live broadcasting.
The technical scheme of the embodiment of the invention is realized as follows:
An embodiment of the present invention provides an artificial-intelligence-based live broadcast method, including:
receiving a given text for a virtual anchor to perform, and acquiring audio data and facial features corresponding to the virtual anchor in real time according to the given text to form at least one media data packet;
performing special-effect rendering on the facial features in the media data packet in real time to obtain an image frame set corresponding to the virtual anchor, and combining the audio data to form a push-stream data packet corresponding to the media data packet;
extracting the image frame set and the audio data from the push-stream data packet in real time and pushing them to a client, so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set.
An embodiment of the present invention provides an artificial-intelligence-based live broadcast apparatus, including:
a text-to-speech request module, configured to receive a given text for a virtual anchor to perform, and to acquire, in real time according to the given text, audio data and facial features corresponding to the virtual anchor to form at least one media data packet;
a rendering module, configured to perform special-effect rendering on the facial features in the media data packet in real time to obtain an image frame set corresponding to the virtual anchor, and to combine the audio data to form a push-stream data packet corresponding to the media data packet; and
a video push-stream module, configured to extract the image frame set and the audio data from the push-stream data packet in real time and push them to a client, so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set.
In the foregoing solution, the text-to-speech request module is further configured to:
when the given text is received, converting the given text into a word vector corresponding to the given text in real time;
carrying out encoding processing and decoding processing on the word vector to obtain audio features corresponding to the word vector;
and synthesizing the audio features to obtain audio data corresponding to the virtual anchor.
In the foregoing solution, the text-to-speech request module is further configured to:
predicting mouth key points of the virtual anchor according to the audio data corresponding to the given text, and normalizing the mouth key points so that they fit a standard face template;
performing dimension reduction processing on the mouth key points subjected to normalization processing to obtain mouth shape features corresponding to the virtual anchor;
performing semantic analysis on the given text to obtain the semantic meaning represented by the given text;
and determining facial expression characteristics matched with the semantics according to the semantics represented by the given text, and combining the mouth shape characteristics and the facial expression characteristics to form facial characteristics corresponding to the virtual anchor.
In the foregoing solution, the text-to-speech request module is further configured to:
sending a query transaction to a blockchain network, the query transaction indicating a smart contract for querying a ledger in the blockchain network, so that a consensus node in the blockchain network queries the ledger by executing the smart contract to obtain the standard face template stored in the ledger; or
According to the identification of the standard face template, inquiring the standard face template corresponding to the identification from a standard face template database, and determining the hash value of the inquired standard face template;
and inquiring a hash value corresponding to the identifier from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired standard face template is not tampered.
In the foregoing solution, the text-to-speech request module is further configured to:
sending the given text to a blockchain network, so that a consensus node in the blockchain network performs a compliance check on the given text by executing a smart contract;
and when compliance confirmation returned by more than a preset number of consensus nodes is received, determining that the given text passes the compliance check.
In the foregoing solution, the text-to-speech request module is further configured to:
dividing the given text into at least two language segments, and dividing the audio data into audio data respectively matched with the at least two language segments on the basis of the at least two language segments;
classifying the facial features based on the at least two language segments to obtain facial features respectively matched with the at least two language segments;
combining the facial features and the audio data corresponding to the same language segment to obtain a media data packet corresponding to the language segment.
In the foregoing solution, the rendering module is further configured to:
extracting audio data in the media data packet, and determining the product of the playing duration of the audio data in the media data packet and a frame rate parameter as the number of image frames corresponding to the audio data;
extracting facial features in the media data packet, and performing special effect rendering processing on the facial features to obtain a facial image comprising the facial features;
synthesizing the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combining a plurality of synthesized image frames into an image frame set corresponding to the audio data;
wherein the number of image frames in the set of image frames is the number of image frames.
In the foregoing solution, the rendering module is further configured to:
converting the facial features into feature vectors corresponding to the facial features;
extracting the feature vectors with the same number as the image frames from the feature vectors corresponding to the facial features;
and carrying out special effect rendering processing based on the extracted feature vectors to obtain the face images with the same number as the image frames.
In the foregoing solution, the rendering module is further configured to:
analyzing the media data packet to obtain audio data contained in the media data packet, and determining a speech segment corresponding to the audio data;
extracting a background image corresponding to the semantics based on the semantics represented by the language segments;
the background image comprises a model, an action and a live scene of the virtual anchor;
and synthesizing the facial image and the background image to obtain an image frame of the virtual anchor, wherein the virtual anchor is in the live scene and has the action and corresponding facial expression and mouth shape in the facial image.
In the foregoing solution, the rendering module is further configured to:
sending a query transaction to a blockchain network, the query transaction indicating a smart contract for querying a ledger in the blockchain network and query parameters corresponding to the semantics, so that a consensus node in the blockchain network queries the ledger by executing the smart contract to obtain a background image in the ledger that matches the query parameters; or
Inquiring a background image corresponding to the semantics from a background image database according to the semantics, and determining a hash value of the inquired background image;
and inquiring the hash value corresponding to the semantics from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired background image is not tampered.
In the foregoing solution, the rendering module is further configured to:
determining a phoneme order characterized by the audio data;
determining the sequence of the image frames containing the mouth shape features based on the phoneme sequence so that the mouth shape change characterized by the sequence of the image frames is matched with the phoneme sequence.
In the above solution, the apparatus further comprises:
a streaming media service module, configured to respond to a live broadcast request sent by a client, allocate a streaming media playing address for the live broadcast request, and return the streaming media playing address to the client;
the video push-stream module is further configured to:
push the image frame set and the audio data to a streaming media interface corresponding to the streaming media playing address, so that the client pulls the image frame set and the audio data based on the streaming media playing address, presents the image frames in the image frame set in real time through a playing interface of the client, and synchronously plays the audio data.
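As a concrete illustration of this push step, the following minimal sketch assumes the image frame set of one push-stream data packet has already been written as numbered PNG files and its audio data as a WAV file, and pushes them to an assumed RTMP-style streaming media playing address with the ffmpeg command-line tool invoked from Python. The file layout and the endpoint are assumptions, not the patented implementation.

```python
import subprocess

def push_segment(frames_pattern: str, wav_path: str, play_url: str, fps: int = 25) -> None:
    """Mux one image frame set with its audio data and push the result to the
    streaming media interface (a hypothetical RTMP playing address)."""
    cmd = [
        "ffmpeg", "-re",                               # pace output at the native rate
        "-framerate", str(fps), "-i", frames_pattern,  # e.g. "frames/f_%05d.png"
        "-i", wav_path,                                # audio data of the same packet
        "-c:v", "libx264", "-pix_fmt", "yuv420p",      # encode video for FLV/RTMP
        "-c:a", "aac",
        "-f", "flv", play_url,                         # e.g. "rtmp://host/live/room42"
    ]
    subprocess.run(cmd, check=True)

# push_segment("frames/f_%05d.png", "segment.wav", "rtmp://example.com/live/room42")
```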
An embodiment of the present invention provides an artificial-intelligence-based live broadcast device, including:
a memory for storing executable instructions;
a processor, configured to implement the artificial-intelligence-based live broadcast method provided by the embodiments of the present invention when executing the executable instructions stored in the memory.
An embodiment of the present invention provides a storage medium storing executable instructions that, when executed by a processor, implement the artificial-intelligence-based live broadcast method provided by the embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
With the artificial-intelligence-based live broadcast method, text data can be automatically synthesized into video in real time and pushed to a client, effectively improving the real-time performance of video playing and reducing the labor cost of live broadcasting.
Drawings
Fig. 1 is a schematic diagram of an optional architecture of an artificial-intelligence-based live broadcast system provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of an optional structure of an artificial-intelligence-based live broadcast device provided by an embodiment of the present invention;
Figs. 3A-3D are optional schematic flowcharts of an artificial-intelligence-based live broadcast method provided by an embodiment of the present invention;
Fig. 4 is a diagram of an implementation architecture of an artificial-intelligence-based live broadcast method provided by an embodiment of the present invention;
Figs. 5A-5B are block diagrams of the overall framework of a virtual live video service provided by an embodiment of the present invention;
Fig. 6 is a diagram of an implementation framework of a virtual video push-stream service provided by an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to these terms and expressions.
1) Text-to-speech (TTS): intelligent conversion of text into a natural speech stream through artificial intelligence (e.g., neural networks).
2) Virtual live video broadcast: a live video broadcast synthesized from virtual characters, in which a virtual anchor performs and broadcasts live in scenarios such as news and games.
With the development of communication technology, network bandwidth has increased greatly, and live video technology has matured and found wide application. Meanwhile, with the development of artificial intelligence, text-to-speech and image synthesis have become research hotspots and are developing rapidly. Combining live video with artificial intelligence is useful in many settings, such as virtual news broadcasting in place of a real person or virtual commentary in place of a game anchor, and has broad application prospects. In virtual live video broadcasting, however, generating audio and images consumes a large amount of computing time, so achieving real-time push of the virtual video stream becomes an important factor affecting the final live video quality. Live broadcast methods in the related art are mostly aimed at application scenarios with an existing, stable input of audio and image data (e.g., pushing a local video stream) or scenarios in which audio and image data can be acquired rapidly (e.g., data captured by a camera), and therefore cannot be applied well to virtual live video broadcasting. The artificial-intelligence-based live broadcast method provided by the embodiments of the present invention solves, through parallel processing, the problem of broadcasting virtual video live in real time even though obtaining the audio and video data requires large computing power, and can effectively enhance the real-time performance of virtual live video broadcasting and improve the fluency of the live video.
The embodiments of the present invention provide an artificial-intelligence-based live broadcast method, apparatus, device, and storage medium, which can solve the problem of broadcasting virtual video live in real time even though obtaining the audio and video data requires large computing power. An exemplary application in which the device is implemented as a server is described below.
Referring to Fig. 1, Fig. 1 is a schematic diagram of an optional architecture of an artificial-intelligence-based live broadcast system 100 provided by an embodiment of the present invention. A terminal 400 (exemplarily, a terminal 400-1 and a terminal 400-2) is connected to a server 200 through a network 300, and the network 300 may be a wide area network, a local area network, or a combination of the two. The terminal 400-1 and the terminal 400-2 run a live client 410-1 and a live client 410-2 respectively, and each live client can both provide live content and present live video. The server 200 includes a text-to-speech request module 2551, a rendering module 2552, and a video push-stream module 2553. The text-to-speech request module 2551 obtains, in a streaming manner, mouth-shape features and audio data corresponding to the given texts returned by the terminals 400-1 and 400-2, forming media data packets. The rendering module 2552 renders the mouth-shape features in each received media data packet to obtain expression images of the virtual character, forming push-stream data packets. The video push-stream module 2553 synthesizes each batch of expression images (an image frame set) and the audio data into virtual video in real time and pushes it to the clients. The three modules are mutually independent and process received data in parallel, so text data can be automatically synthesized into video in real time and pushed to the live clients 410-1 and 410-2. In addition, each time the rendering module 2552 renders the acquired mouth-shape features, it can call image resources in the database 500 of the artificial-intelligence-based live broadcast system 100 to achieve a richer presentation of the live video.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an artificial intelligence-based live broadcast server 200 according to an embodiment of the present invention, where the server 200 shown in fig. 2 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 2.
The processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both. The nonvolatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 250 described in the embodiments of the present invention is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
an operating system 251, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer;
a network communication module 252, configured to reach other computing devices via one or more (wired or wireless) network interfaces 220; exemplary network interfaces 220 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present invention may be implemented in software. Fig. 2 shows an artificial-intelligence-based live broadcast device 255 stored in the memory 250, which may be software in the form of programs, plug-ins, and the like, and includes the following software modules: a text-to-speech request module 2551, a rendering module 2552, a video push-stream module 2553, and a streaming media service module 2554. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented, as described below.
In other embodiments, the artificial-intelligence-based live broadcast device 255 provided by the embodiments of the present invention may be implemented in hardware. For example, it may be a processor in the form of a hardware decoding processor programmed to execute the artificial-intelligence-based live broadcast method provided by the embodiments of the present invention; such a processor may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
The artificial-intelligence-based live broadcast method provided by the embodiments of the present invention is described below in conjunction with the exemplary application and implementation of the server provided by the embodiments of the present invention.
Referring to Fig. 3A, Fig. 3A is an optional flowchart of an artificial-intelligence-based live broadcast method provided by an embodiment of the present invention, described below with reference to steps 101 to 106 shown in Fig. 3A.
In step 101, the server receives given text from the client for the virtual anchor show.
In some embodiments, the client sends the server a given text for the virtual anchor to perform. The given text comes from a live content provider and may be input by a user or crawled from the network; it may also come from a user watching the live broadcast, in which case the virtual anchor performs in real time based on the text the viewer enters through the client. In addition, a viewer may interact with the virtual anchor during its live performance: after the viewer inputs an interactive text, either the client queries a corresponding dialog response according to the interactive text to generate a given text for that response, or the client sends the interactive text to the server and the server queries the corresponding dialog response to generate the given text. The virtual anchor then performs live according to the given text of the response, realizing interaction between the viewer and the virtual anchor.
In step 102, the server acquires, in real time according to the given text, audio data and facial features corresponding to the virtual anchor to form at least one media data packet, and continues to process the next given text in real time.
In some embodiments, the text-to-speech request module may request a text-to-speech service that acquires, in real time according to the given text, the audio data and facial features corresponding to the virtual anchor to form at least one media data packet, or it may perform this acquisition directly. Here, processing the next given text in real time means that the text-to-speech request module does not wait for the entire push-stream process to finish before processing the next given text: as soon as one given text has been turned into media data packets, the module starts on the next one. During processing, the given text may be divided, and media data packets corresponding to the respective language segments are generated after the division. After one language segment is processed to generate a media data packet, the next language segment is processed to generate the next media data packet; alternatively, based on the parallel processing capability configured on the processor, media data packets for multiple language segments are generated simultaneously.
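The pipelining described above can be pictured with a minimal producer/consumer sketch in Python. This is an illustration only, not the patented implementation: the queues, the stub synthesize/render/push_to_client helpers, and the threading layout are all assumptions.

```python
import queue
import threading
import time

# Stub stages standing in for the real services (assumed, not the patented code).
def synthesize(given_text):       # text-to-speech: audio data + facial features
    return {"text": given_text, "audio": b"", "features": []}

def render(media_packet):         # special-effect rendering: image frame set
    return {**media_packet, "frames": []}

def push_to_client(push_packet):  # video push-stream: deliver to the client
    print("pushed:", push_packet["text"])

text_q, media_q, push_q = queue.Queue(), queue.Queue(), queue.Queue()

def tts_request_worker():         # never waits for rendering or pushing to finish
    while True:
        media_q.put(synthesize(text_q.get()))

def render_worker():              # starts as soon as one media data packet arrives
    while True:
        push_q.put(render(media_q.get()))

def push_worker():
    while True:
        push_to_client(push_q.get())

for target in (tts_request_worker, render_worker, push_worker):
    threading.Thread(target=target, daemon=True).start()

text_q.put("Hello everyone")      # each given text flows through the pipeline
text_q.put("I am anchor Little A")
time.sleep(0.5)                   # let the daemon pipeline drain before exiting
```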
Referring to Fig. 3B, Fig. 3B is an optional flowchart of the artificial-intelligence-based live broadcast method provided by an embodiment of the present invention. In step 102, the server acquires the audio data corresponding to the virtual anchor in real time according to the given text; this may be implemented through steps 1021 to 1023.
In step 1021, as the given text is received, the server converts the given text into a word vector corresponding to the given text in real time.
In step 1022, the server performs encoding and decoding on the word vector to obtain audio features corresponding to the word vector.
In step 1023, the server performs synthesis processing on the audio features to obtain audio data corresponding to the virtual anchor.
In some embodiments, an end-to-end deep-learning text-to-speech model may be constructed and trained directly by deep-learning methods; after training, the model can generate the corresponding audio for a given text. The text-to-speech request module in the server converts the received given text into word vectors in real time, because for the text-to-speech model the text must first be converted into corresponding word vectors. The given text may be a sentence; it is segmented into words, and the word vector of each word is determined by querying a dictionary in which the subscript of each entry serves as the identifier of that word, so each text can be converted into its corresponding word vectors by traversing the dictionary. The word vectors are then encoded to obtain an intermediate semantic representation and decoded to obtain the audio features corresponding to the word vectors, and the resulting audio features are synthesized by an audio synthesis algorithm to obtain the audio data corresponding to the virtual anchor.
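The dictionary-lookup step and the encode/decode/synthesize chain can be sketched as follows. This is a toy illustration, not the trained end-to-end model: the dictionary contents and the encode, decode, and vocoder stand-ins are assumptions.

```python
# Toy dictionary: the subscript of each entry serves as the word's identifier.
dictionary = ["<unk>", "hello", "everyone", "i", "am", "anchor", "little", "a"]

def text_to_word_vector(given_text: str) -> list[int]:
    """Segment the given text and map each word to its dictionary subscript."""
    return [dictionary.index(w) if w in dictionary else 0
            for w in given_text.lower().split()]

# Stand-ins for a trained end-to-end model (assumptions, not a real network).
def encode(ids):        # encoder -> intermediate semantic representation
    return ids

def decode(hidden):     # decoder -> per-frame audio features (e.g. a spectrogram)
    return [[float(i)] for i in hidden]

def vocoder(features):  # synthesis algorithm -> waveform bytes
    return bytes(len(features))

def synthesize_audio(given_text: str) -> bytes:
    audio_features = decode(encode(text_to_word_vector(given_text)))
    return vocoder(audio_features)

print(text_to_word_vector("hello everyone"))  # -> [1, 2]
wav = synthesize_audio("hello everyone")
```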
In some embodiments, in step 102 the server acquires the facial features corresponding to the virtual anchor in real time according to the given text. This may be implemented by the following technical solution: predicting mouth key points of the virtual anchor according to the audio data corresponding to the given text, and normalizing the mouth key points so that they fit a standard face template; performing dimension reduction on the normalized mouth key points to obtain mouth-shape features corresponding to the virtual anchor; performing semantic analysis on the given text to obtain the semantics it represents; and determining, according to those semantics, matching facial expression features, then combining the mouth-shape features and the facial expression features into the facial features corresponding to the virtual anchor.
In some embodiments, the mouth-shape representation is predicted from the audio data corresponding to the given text, with the audio represented by spectral features. To compute the mouth-shape representation, mouth key points are extracted from the face and normalized so that they are unaffected by image size, face position, face rotation, and face size. Normalization is important in this process because it makes the generated mouth key points compatible with any image, so the normalized key points fit a standard face template. The standard face template is that of the virtual anchor, and different virtual anchor characters correspond to different standard face templates: the virtual anchor may be an animal, with an animal avatar as the template, or a cartoon character, with a cartoon avatar as the template. Dimension reduction is then applied to the normalized mouth key points to decorrelate the key-point features, and the most important components are used as the mouth-shape representation. As for the facial features, they comprise facial expression features in addition to the mouth-shape features: the mouth-shape features are related to the audio data, while the facial expression features are related to the given text. Semantic analysis is performed on the given text, the virtual anchor displays facial expression features adapted to the semantics the text represents, and the mouth-shape features and facial expression features are combined into the facial features of the virtual anchor.
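A minimal numeric sketch of the normalization and dimension-reduction steps follows, assuming the mouth key points of each frame arrive as an (N, 2) array and using PCA (via SVD) as the decorrelating reduction; rotation alignment and the feature dimensionality are simplifications, not details fixed by the patent.

```python
import numpy as np

def normalize_mouth_keypoints(pts: np.ndarray) -> np.ndarray:
    """Remove position and scale so the key points fit a standard face template
    (rotation alignment is omitted here for brevity)."""
    centered = pts - pts.mean(axis=0)       # independent of face position
    scale = np.linalg.norm(centered) or 1.0
    return centered / scale                 # independent of image/face size

def mouth_shape_features(frames: np.ndarray, dims: int = 6) -> np.ndarray:
    """frames: (T, N, 2) normalized mouth key points per frame.
    Returns (T, dims) decorrelated features via PCA, keeping the most
    important components as the mouth-shape representation."""
    flat = frames.reshape(len(frames), -1)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)  # principal axes
    return flat @ vt[:dims].T

raw = np.random.rand(25, 20, 2)             # 25 frames, 20 key points (illustrative)
norm = np.stack([normalize_mouth_keypoints(f) for f in raw])
feats = mouth_shape_features(norm)          # shape (25, 6)
```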
In some embodiments, before the mouth key points are normalized, the standard face template may be obtained by the following technical solution: sending a query transaction to the blockchain network, the query transaction indicating a smart contract for querying a ledger in the blockchain network, so that a consensus node in the blockchain network queries the ledger by executing the smart contract and obtains the standard face template stored in the ledger; or querying, according to the identifier of the standard face template, the standard face template corresponding to the identifier from a standard face template database, determining the hash value of the queried template, querying the hash value corresponding to the identifier from the blockchain network, and, when the queried hash value is consistent with the determined hash value, determining that the queried standard face template has not been tampered with.
In some embodiments, any user may submit a standard face template to the blockchain network or to the standard face template database through the live client. Retrieval from the blockchain network is implemented by a query transaction: by executing the corresponding smart contract, the query transaction queries the ledger on the consensus node and obtains the standard face template stored there. Alternatively, the standard face template may be queried from the standard face template database according to its identifier; identifiers correspond one-to-one to standard face templates, so the template is retrieved by its unique identifier. The hash value of the queried template is then determined for hash verification: when it is consistent with the hash value queried from the blockchain network, the queried standard face template has not been tampered with, and the mouth key points are normalized based on this standard face template.
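The off-chain lookup plus on-chain hash comparison can be sketched as follows; the two stores are stand-in dictionaries and SHA-256 is an assumed choice of hash function.

```python
import hashlib

def fetch_verified_template(template_id: str, db: dict, chain_hashes: dict) -> bytes:
    """Query the standard face template off-chain and verify it against the
    hash recorded on the blockchain network (both stores are stand-ins)."""
    template = db[template_id]                         # standard face template database
    local_hash = hashlib.sha256(template).hexdigest()  # hash of the queried template
    if chain_hashes.get(template_id) != local_hash:    # hash queried from the chain
        raise ValueError("standard face template has been tampered with")
    return template

db = {"anchor_A": b"...template bytes..."}
chain = {"anchor_A": hashlib.sha256(b"...template bytes...").hexdigest()}
template = fetch_verified_template("anchor_A", db, chain)
```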
In some embodiments, some standard face templates uploaded by users may contain elements, such as violence, that violate public order and good morals. The consensus node therefore invokes a smart contract corresponding to a compliance check on the standard face templates. The smart contract may perform the check with an artificial intelligence model trained on the compliance rules: by executing the smart contract, the model examines the submitted standard face templates and identifies non-compliant ones, and non-compliant templates are not stored on the blockchain network.
In some embodiments, before the at least one media data packet is formed, the following technical solution may further be executed: sending the given text to the blockchain network, so that a consensus node in the blockchain network performs a compliance check on the given text by executing a smart contract; and when compliance confirmations returned by more than a preset number of consensus nodes are received, determining that the given text passes the compliance check.
In some embodiments, a compliance check may be performed on the given text as well as on the standard face template. Before the media data packet is formed, the given text may be sent to the blockchain network for a compliance check, where consensus nodes perform consensus verification on it based on the compliance rules. When consensus-verification confirmations returned by more than a threshold number of consensus nodes are received, the given text is determined to have passed the compliance check, and subsequent processing is performed. To avoid the delay of the compliance check, subsequent processing may also begin as soon as the confirmation of one consensus node is received; if a sufficient number of confirmations is not received within the subsequent waiting time, the processing is terminated and the reason is returned. Termination may occur while forming the media data packet, while performing special-effect rendering, or while pushing the video stream, and the reason may be that the compliance check on the given text failed, in which case the client is required to resubmit the given text.
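The threshold logic over consensus-node confirmations might look like the following sketch; the ConsensusNode interface and the verdict values are assumptions standing in for real smart-contract calls.

```python
class ConsensusNode:
    """Stand-in for a consensus node; check() represents executing the
    compliance smart contract on the given text."""
    def __init__(self, verdict: bool):
        self.verdict = verdict

    def check(self, given_text: str) -> bool:
        return self.verdict

def compliance_check(given_text: str, nodes, threshold: int) -> bool:
    """Pass once more than `threshold` consensus nodes confirm compliance."""
    confirmations = 0
    for node in nodes:
        if node.check(given_text):
            confirmations += 1
        if confirmations > threshold:   # more than a preset number confirmed
            return True
    return False

nodes = [ConsensusNode(True), ConsensusNode(True), ConsensusNode(True), ConsensusNode(False)]
print(compliance_check("Hello everyone, I am anchor Little A", nodes, threshold=2))  # True
```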
Referring to fig. 3C, based on fig. 3A, fig. 3C is an optional flowchart of the artificial intelligence based live broadcasting method according to the embodiment of the present invention, and the step 102 of forming at least one media data packet and continuously processing the next given text in real time may be specifically implemented by steps 1024 and 1026.
In step 1024, the server divides the given text into at least two language segments, and divides the audio data, based on those segments, into audio data respectively matched with the at least two language segments.
In step 1025, based on the at least two language segments, the server classifies the facial features to obtain facial features respectively matched with the at least two language segments.
In step 1026, the server combines the facial features and audio data corresponding to the same language segment to obtain a media data packet corresponding to that language segment.
In some embodiments, the given text is divided into at least two language segments. For example, the given text "Hello everyone, I am anchor Little A" may be divided into the two language segments "Hello everyone" and "I am anchor Little A", or into the three language segments "Hello everyone", "I am", and "anchor Little A". Corresponding to the division, the audio data of "Hello everyone, I am anchor Little A" is divided into the audio data of "Hello everyone" and the audio data of "I am anchor Little A". The facial features are likewise classified by the divided language segments, yielding the facial features corresponding to "Hello everyone" and those corresponding to "I am anchor Little A", where the facial features of each segment comprise the mouth-shape features and expression features of that segment. The facial features and audio data corresponding to "Hello everyone" are then combined into the media data packet corresponding to "Hello everyone", and likewise for each other segment.
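Per-segment packet assembly can be sketched as below, assuming the segmentation boundaries are already known and the divided audio and classified facial features are keyed by segment; the MediaPacket structure is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MediaPacket:                  # hypothetical structure, for illustration only
    segment: str                    # language segment, e.g. "Hello everyone"
    audio: bytes                    # audio data matched to this segment
    features: list                  # facial features (mouth shape + expression)

def build_media_packets(segments, audio_by_seg, features_by_seg):
    """Combine the facial features and audio data of the same language segment
    into one media data packet per segment."""
    return [MediaPacket(s, audio_by_seg[s], features_by_seg[s]) for s in segments]

segments = ["Hello everyone", "I am anchor Little A"]
packets = build_media_packets(
    segments,
    {s: b"..." for s in segments},                    # audio divided per segment
    {s: ["mouth", "expression"] for s in segments},   # features classified per segment
)
```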
In step 103, the server performs special-effect rendering on the facial features in the media data packet in real time to obtain an image frame set corresponding to the virtual anchor, combines the audio data to form a push-stream data packet corresponding to the media data packet, and continues to process the next media data packet in real time.
In some embodiments, step 103 is implemented by the rendering module. Processing the next media data packet in real time means that the rendering module does not wait for the entire push-stream process to finish: as soon as one media data packet has been processed, it starts processing the next. The received media data packets may belong to the same live broadcast or to different live broadcasts; the push-stream data packets formed from them may carry live broadcast identifiers, with different identifiers corresponding to different live broadcasts. Furthermore, based on the processing power configured on the processor, the rendering module may itself have a corresponding parallel processing capability, that is, it may process multiple media data packets simultaneously, and these may be media data packets for different live broadcasts.
Referring to Fig. 3D, based on Fig. 3A, Fig. 3D is an optional flowchart of the artificial-intelligence-based live broadcast method provided by an embodiment of the present invention. In step 103, the server performs special-effect rendering on the facial features in the media data packet in real time to obtain the image frame set corresponding to the virtual anchor; this may be implemented through steps 1031 to 1033.
In step 1031, the server extracts the audio data in the media data packet, and determines the product of the playing duration of the audio data in the media data packet and the frame rate parameter as the number of image frames corresponding to the audio data.
In step 1032, the server extracts the facial features in the media data packet, and performs special effect rendering processing on the facial features to obtain a facial image including the facial features.
In step 1033, the server synthesizes the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combines a plurality of synthesized image frames into an image frame set corresponding to the audio data, where the number of image frames in the image frame set is the number of image frames.
In some embodiments, a rendering module in the server extracts audio data in the media data packet, and determines a product of a playing duration of the audio data in the media data packet and a frame rate parameter as a number of image frames of the corresponding audio data; extracting facial features in the media data packet, and performing special effect rendering processing on the facial features to obtain a facial image comprising the facial features; and synthesizing the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combining a plurality of synthesized image frames into an image frame set corresponding to the audio data, wherein the number of the image frames in the image frame set is the number of the image frames.
In some embodiments, the audio data and facial features in a media data packet match each other, and the rendering module generates the image frame set from the facial features. The number of image frames in the set is related to the duration of the corresponding audio data: for example, if the audio data lasts 1 second and the frame rate is 25 frames per second, the image frame set of that media data packet contains 25 frames. Generating an image frame mainly consists of performing special-effect rendering on the facial features so that the resulting facial image represents them; the facial features include mouth-shape features and facial expression features. For example, for the media data packet corresponding to the language segment "Hello everyone", the mouth shapes in the resulting facial images are those of "Hello everyone", and the expressions are likewise those of "Hello everyone". The mouth-shape features can fit various standard face templates, including real-person face templates, cartoon-character face templates, animal face templates, or graphic templates; in a virtual live broadcast, even an ordinary graphic figure can be used to start broadcasting. The facial image and a background image are synthesized to obtain an image frame of the virtual anchor. Since multiple facial images are generated during special-effect rendering, they are combined with multiple background images to form multiple image frames, which make up the image frame set; the numbers of facial images, background images, and image frames are consistent.
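The frame-count arithmetic is a single product; here is a sketch using the 1-second, 25-frames-per-second example from the text.

```python
def num_image_frames(audio_duration_s: float, frame_rate: float) -> int:
    """Number of image frames = playing duration of the audio data x frame rate."""
    return round(audio_duration_s * frame_rate)

assert num_image_frames(1.0, 25) == 25   # 1 s of audio at 25 fps -> 25 image frames
```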
In some embodiments, the process of extracting the facial features in the media data packet and performing special-effect rendering on them to obtain the corresponding facial images may be implemented by the following technical solution: converting the facial features into corresponding feature vectors; extracting, from those feature vectors, the same number of feature vectors as image frames; and performing special-effect rendering based on the extracted feature vectors to obtain the same number of facial images as image frames.
In some embodiments, the number of image frames in the image frame set is determined from the audio data: with 1 second of audio at 25 frames per second, the set contains 25 image frames, which means 25 facial images must be obtained. The facial features are expressed mathematically as feature vectors, and all facial features in the media data packet of that 1 second of audio are converted into feature vectors; different feature vectors can represent different mouth shapes. Because a mouth-shape change is a dynamic process, even 1 second of audio requires 25 image frames to present it. Therefore, the same number of feature vectors as image frames is extracted from the feature vectors, and special-effect rendering based on the extracted vectors yields the same number of facial images as image frames.
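Extracting exactly as many feature vectors as image frames can be done, for example, by sampling uniformly across the feature sequence; the patent does not fix the sampling rule, so the following sketch is one assumed realization.

```python
def sample_feature_vectors(features: list, n_frames: int) -> list:
    """Pick n_frames feature vectors spread evenly across the sequence, so the
    rendered facial images trace the whole mouth-shape movement."""
    if n_frames <= 0 or not features:
        return []
    step = (len(features) - 1) / max(n_frames - 1, 1)
    return [features[round(i * step)] for i in range(n_frames)]

picked = sample_feature_vectors(list(range(100)), 25)  # 100 raw vectors -> 25 frames
assert len(picked) == 25
```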
In some embodiments, the synthesis of the facial image and the background image in the above step may be implemented by the following technical solution: parsing the media data packet to obtain the audio data it contains, and determining the language segment corresponding to the audio data; extracting, based on the semantics represented by the language segment, a background image corresponding to those semantics, where the background image includes the model, the action, and the live scene of the virtual anchor; and synthesizing the facial image and the background image to obtain an image frame of the virtual anchor, in which the virtual anchor is in the live scene and has the action as well as the corresponding facial expression and mouth shape of the facial image.
In some embodiments, the background image to which a facial image is adapted must be determined during synthesis. The background image includes the virtual anchor's model, an action, and a live scene. The virtual anchor model may be a three-dimensional model, and different models correspond to different body postures and overall appearances. The action in the background image is the motion the virtual anchor will present, such as waving or dancing. The live scene is the environment the virtual anchor is in, such as a room, the outdoors, a forest, or another specific environment. The choice of background image is also related to semantics: the audio data or facial features in a media data packet correspond one-to-one to language segments, each language segment has the semantics it represents, and the background image adapted to those semantics is selected accordingly. After the facial image and background image are synthesized, the resulting image frame of the virtual anchor presents the following information: the virtual anchor is in the live scene, performing the action, with the facial expression and mouth shape of the facial image.
In some embodiments, the process of extracting the background image corresponding to the semantics may be implemented by the following technical solution: sending a query transaction to the blockchain network, the query transaction indicating a smart contract for querying a ledger in the blockchain network and query parameters corresponding to the semantics, so that a consensus node in the blockchain network queries the ledger by executing the smart contract and obtains the background image in the ledger that matches the query parameters; or querying a background image corresponding to the semantics from a background image database, determining the hash value of the queried background image, querying the hash value corresponding to the semantics from the blockchain network, and, when the queried hash value is consistent with the determined hash value, determining that the queried background image has not been tampered with.
In some embodiments, the background image may be provided by a dedicated art content provider or by the user initiating the live broadcast, and may be uploaded to a blockchain network so that it cannot easily be tampered with. This prevents content that offends public order and good morals from appearing in a synthesized image frame, and prevents the content of the live broadcast initiator from being deliberately damaged while the live broadcast is generated. The background image can therefore be obtained by initiating a query transaction to the blockchain network, where the query transaction indicates an intelligent contract for querying the ledger and a query parameter corresponding to the semantics; the query parameter may be an identifier of the background image or its upload information, and executing the intelligent contract corresponding to the query transaction returns the background image in the ledger that meets the query parameter. In another embodiment, the background image and its hash value are queried from the background image database according to the semantics, and hash verification is performed: the queried hash value is compared with the hash value stored in the blockchain network for that semantics, and if the two are consistent, the queried background image has not been tampered with.
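The hash verification described above can be sketched as follows; SHA-256 and the helper name verify_background_image are assumptions, since the patent only speaks of a hash value generically:

```python
import hashlib

def verify_background_image(image_bytes: bytes, onchain_hash: str) -> bool:
    """Hash-verify a background image fetched from the database.

    onchain_hash is the hash value stored in the blockchain network for
    the semantics in question; a match means the queried background
    image has not been tampered with.
    """
    local_hash = hashlib.sha256(image_bytes).hexdigest()
    return local_hash == onchain_hash
```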
In some embodiments, after the image frame set is obtained, the order of the image frames in the set may be determined as follows: determine the phoneme order represented by the audio data, then determine the order of the image frames containing the mouth shape features based on the phoneme order, so that the mouth shape change characterized by the ordered image frames matches the phoneme order.
In some embodiments, the image frames in the image frame set have no inherent order attribute, but they are presented sequentially. From the viewpoint of the mouth shape, the mouth shape change should be consistent with the audio output; otherwise the mouth shape and the audio do not match. Therefore, the phoneme order in the audio data is determined, and the order of the mouth shape features is determined from it. Since the mouth shape features correspond to image frames, the order of the image frames containing those features can be determined, and each image frame is numbered sequentially so that the mouth shape change represented by the frame order matches the phoneme order. When the audio is output, the image frames are then presented in numbered order.
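A simple sketch of the numbering step, assuming a hypothetical mapping from phonemes to their rendered mouth-shape frames:

```python
def order_frames_by_phonemes(phoneme_order, frames_by_phoneme):
    """Number the image frames so that the mouth-shape change they
    present matches the phoneme order of the audio data.

    phoneme_order: phonemes as they occur in the audio, e.g. ['n', 'i'].
    frames_by_phoneme: the image frame rendered for each phoneme's mouth shape.
    """
    ordered = [frames_by_phoneme[p] for p in phoneme_order]
    # Attach sequence numbers; the frames are later presented in this
    # order while the audio plays.
    return list(enumerate(ordered))
```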
In some embodiments, in addition to the mouth shape change, the actions in the background images may also be considered when ordering the image frames. The background images of the same media data packet may represent one group of actions, and presenting that group of actions is likewise a dynamic process. The background images of the action group and the facial images containing the mouth shape features are both ordered according to the phoneme order, so that the presentation of the mouth shape change and of the actions is consistent with the output of the audio data, keeping the image frames and the audio data matched and synchronized.
In some embodiments, the sorting process may also occur at other stages. Here the sorting is performed on the image frames; in practice it may instead be implemented by the text-to-speech request module, which, when the media data packet is obtained, sorts the mouth shape features among the facial features so that they correspond one-to-one with the phonemes, and so that the mouth shape change represented by the mouth shape features matches the phoneme output order.
In step 104, the server extracts the set of image frames and the audio data in the push stream data packet in real time.
In step 105, the server pushes the set of image frames and the audio data to the client.
In step 106, the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received set of image frames.
In some embodiments, the video streaming module in the server may directly push the extracted image frame set and audio data to the client for live broadcasting; it may also, in response to a live broadcast request sent by the client, allocate a streaming media playing address to the request and return that address to the client.
In some embodiments, when the client sends a live request including a given text to the server, the server may allocate a streaming media server address to the client so that the client continuously pulls the live video from that address. The server selects an appropriate streaming media server address to send to the live client; the selection may be fixed, or the address may be drawn from a pre-allocated selection range, among other options.
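One possible realization of the address selection, with a hypothetical configured server pool and example URLs; the patent leaves the selection manner open (fixed, pre-allocated range, and so on):

```python
import random

# A configured pool of candidate streaming media server addresses
# (hypothetical example URLs).
STREAM_SERVERS = [
    "rtmp://push1.example.com/live",
    "rtmp://push2.example.com/live",
]

def allocate_play_address(live_request_id: str) -> str:
    """Allocate a streaming media playing address for one live request.

    A random pick from the configured pool is one simple realization of
    the selection described above.
    """
    return f"{random.choice(STREAM_SERVERS)}/{live_request_id}"
```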
In some embodiments, in steps 105 and 106, the server pushes the image frame set and the audio data to the client, and the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set. This may be implemented by pushing the image frame set and the audio data to a streaming media interface corresponding to a streaming media playing address, so that the client pulls the image frame set and the audio data based on that address, presents the image frames in real time through its playing interface, and plays the audio data synchronously.
In some embodiments, during the live broadcast, the client may receive an interactive text from a live viewing user. The interactive text is an interaction with the virtual anchor during the live broadcast; for example, in a food live broadcast, it may ask for a restaurant's address or about a dish. The client or the server then queries for image resources and dialogue resources matching the interactive text. The dialogue resources may be pre-configured for answering viewers' questions so that the live viewing user receives a timely response, and may include quick-reply phrases that respond to the interactive text in real time. While responding through a quick reply, the client may obtain a given text with substantive content, uploaded by a content provider specifically to answer the interactive text, or the server may directly crawl the internet for such a given text according to the interactive text. Speech synthesis is then performed on this text, and the audio data and facial features for responding to the interactive text are determined in combination with the image resources; the subsequent processing is the same as in the artificial-intelligence-based live broadcast method provided by the embodiment of the present invention.
In some embodiments, the dialogue resources and the image resources of the virtual anchor can be configured: the image resources of the virtual anchor uploaded by an art resource provider are received, and the dialogue resources of the virtual anchor are created, where the image resources include at least one of scene resources, model resources, and action resources. When image resources for a new virtual anchor are received, a new version identifier is assigned to them and configuration information corresponding to the new version is generated, including at least one of a scene resource configuration item, a model resource configuration item, an action resource configuration item, and a dialogue resource configuration item. When the received image resources update an existing virtual anchor, the image resources and configuration information of the corresponding version of that anchor are updated.
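A rough sketch of the version bookkeeping, with hypothetical dictionary-based storage and field names; a real deployment would likely persist this configuration in a database:

```python
import uuid

def register_image_resource(registry: dict, anchor_id: str, resource: dict) -> str:
    """Assign a new version identifier to uploaded image resources and
    generate the configuration information for that version."""
    version_id = uuid.uuid4().hex
    registry.setdefault(anchor_id, {})[version_id] = {
        "scene": resource.get("scene"),        # scene resource configuration item
        "model": resource.get("model"),        # model resource configuration item
        "actions": resource.get("actions"),    # action resource configuration item
        "dialogue": resource.get("dialogue"),  # dialogue resource configuration item
    }
    return version_id

def update_image_resource(registry: dict, anchor_id: str,
                          version_id: str, resource: dict) -> None:
    """Update the image resources and configuration of an existing version."""
    registry[anchor_id][version_id].update(resource)
```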
Referring to fig. 4, fig. 4 is an implementation architecture diagram of the artificial-intelligence-based live broadcast method according to an embodiment of the present invention. The text-to-speech request module obtains facial features and audio data based on a given text (corresponding to steps 101 and 102 in fig. 4: in step 101, the server receives from the client a given text for the virtual anchor performance; in step 102, the server obtains in real time the audio data and facial features corresponding to the virtual anchor according to the given text, forms at least one media data packet, and continues to process the next given text in real time). The rendering module performs three-dimensional rendering on the facial features obtained each time (corresponding to step 103 in fig. 4: the server renders the facial features in the media data packet in real time to obtain the image frame set corresponding to the virtual anchor, combines it with the audio data to form the push stream data packet corresponding to the media data packet, and continues to process the next media data packet in real time), thereby acquiring the face images of the virtual character. The video stream pushing module synthesizes the face images and the audio data acquired each time into a virtual video and pushes it to the client (corresponding to steps 104 and 105 in fig. 4: in step 104, the server extracts the image frame set and audio data from the push stream data packet in real time; in step 105, the server pushes them to the client), so that in step 106 the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set. The text-to-speech request module, the rendering module, and the video stream pushing module are mutually independent and cooperate in parallel, so text data can be automatically synthesized into video in real time and pushed to the client. "Mutually independent and parallel" means that after the text-to-speech request module finishes processing the first given text, it sends the resulting first media data packet to the rendering module and continues processing the second given text to obtain the second media data packet; the rendering module processes the received first media data packet to generate the first push stream data packet and then continues with the received second media data packet to generate the second push stream data packet. In other words, the text-to-speech request module does not wait for the first given text to complete the whole push streaming process before proceeding, but keeps processing incoming given texts, and the rendering module, unaffected by the processing progress of the other modules, keeps processing the media data packets it receives.
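The parallel, mutually independent cooperation of the three modules can be sketched with queues and threads; synthesize, render, push_stream and the variable texts are hypothetical stand-ins for the modules' actual processing, not the patented code:

```python
import queue
import threading

END = object()                        # sentinel playing the role of the end packet
media_q: queue.Queue = queue.Queue()  # text-to-speech -> rendering
push_q: queue.Queue = queue.Queue()   # rendering -> video stream pushing

def tts_worker(given_texts):
    for text in given_texts:
        media_q.put(synthesize(text))   # media data packet (audio + facial features)
    media_q.put(END)

def render_worker():
    while (packet := media_q.get()) is not END:
        push_q.put(render(packet))      # push stream data packet (frames + audio)
    push_q.put(END)

def push_worker():
    while (packet := push_q.get()) is not END:
        push_stream(packet)             # push image frames and audio to the client

# Each module runs in its own thread: none waits for a text to finish
# the whole push streaming process before taking the next packet.
for worker, args in ((tts_worker, (texts,)), (render_worker, ()), (push_worker, ())):
    threading.Thread(target=worker, args=args, daemon=True).start()
```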
An exemplary application of the artificial intelligence based live broadcasting method provided by the embodiment of the present invention in an actual application scenario will be described below.
The live broadcasting method based on artificial intelligence provided by the embodiment of the invention can be applied to a plurality of items and product applications including virtual news live broadcasting, virtual game commentary and the like, so that the real-time performance of live broadcasting is effectively improved.
In a virtual live broadcast scene, the artificial intelligence-based live broadcast method provided by the embodiment of the invention can be used for acquiring audio data and image data in parallel according to an input text (a given text) and pushing the audio data and the image data to a client in real time so as to realize real-time live broadcast.
Fig. 5A-5B are schematic diagrams of the overall framework of the virtual video live broadcast service provided in an embodiment of the present invention. Referring to figs. 5A-5B, the overall framework includes a live broadcast client, a text-to-speech service, and a virtual video push streaming service, the implementation of which is a main innovation of the present invention. In fig. 5A, the live broadcast client sends a live broadcast request carrying a given text to the virtual video push streaming service; the virtual video push streaming service requests audio data and facial features from the text-to-speech service, which returns them as a response; the virtual video push streaming service then executes the live broadcast method provided by the embodiment of the present invention according to the obtained response and pushes the image frame set and audio data to the live broadcast client. In fig. 5B, the live broadcast client sends a live broadcast request carrying a given text to the virtual video push streaming service, which returns a streaming media service address to the client as a response and requests audio data and facial features from the text-to-speech service; the text-to-speech service returns them as a response, the virtual video push streaming service executes the live broadcast method provided in the embodiment of the present invention according to the obtained response and pushes the image frame set and audio data to the streaming media service, and the live broadcast client pulls the video from the corresponding streaming media service address.
Fig. 6 is a framework for implementing the virtual video push streaming service according to an embodiment of the present invention. Referring to fig. 6, the virtual video push streaming service includes a text-to-speech request module, a three-dimensional rendering module, and a video push streaming module. The live broadcast client sends a text request to the virtual video live broadcast server, where the given text in the request is the words that the virtual character of the live video will speak. The text-to-speech request module initiates a request to the text-to-speech service and acquires, in a streaming manner, the audio data and mouth shape features corresponding to the given text; each time it obtains a data packet containing audio and mouth shape features from the text-to-speech service, it pushes the packet to the three-dimensional rendering module, and when it receives the end packet from the text-to-speech service, it sends an end packet to the three-dimensional rendering module. Each time the three-dimensional rendering module obtains a text-to-speech data packet, it extracts the mouth shape features and performs spatial three-dimensional rendering to obtain a corresponding group of face images, combines each face image with a background image into a complete image to obtain a group of complete live video image frames, packages the image frames together with the audio data, and pushes them to the video push streaming module; when it receives the end packet, it forwards the end packet to the video push streaming module. Each time the video push streaming module obtains a data packet pushed by the three-dimensional rendering module, it extracts the audio data and image frame data and synchronously pushes them to the client through the FFmpeg (Fast Forward Moving Picture Experts Group) tool, and it finishes pushing the video when the end packet is received.
The text-to-speech request module, the three-dimensional rendering module, and the video push streaming module run independently, and this mutually independent, parallel processing of data packets is the key to real-time push streaming of virtual video. Before acquiring the streaming data, the text-to-speech service is first asked for the duration of the audio that the given text will finally generate, so that the duration of the final live video can be estimated; the audio and mouth shape features are then acquired from the text-to-speech service in a streaming manner, allowing the audio and video data to be obtained quickly and real-time live broadcast to be realized.
The artificial-intelligence-based live broadcast method determines the video length from the given text. During rendering, n groups of suitable background images are selected from the pre-stored general background images to be matched and synthesized with the facial expressions; each group of background images is one complete action, so the n groups complete exactly n actions when the video ends. The video length is also related to the frame rate: at 25 frames per second, a 10-second video needs 250 frames.
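The frame arithmetic and the choice of n action groups might look like this; frames_per_action is a hypothetical parameter, since the patent does not fix the length of an action group:

```python
def plan_video(audio_duration_s: float, frame_rate: int = 25,
               frames_per_action: int = 50) -> tuple:
    """Estimate the total frame count and how many complete action
    groups fit into the video.

    A 10-second video at 25 fps needs 10 * 25 = 250 frames; with a
    hypothetical 50 frames per action group, n = 5 actions finish
    exactly when the video ends.
    """
    total_frames = int(audio_duration_s * frame_rate)
    n_action_groups = total_frames // frames_per_action
    return total_frames, n_action_groups
```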
The video push streaming module mainly uses the FFmpeg tool to push the video stream. When the first data packet is received, the module initializes the stream pushing and pushes the audio and video; when the end packet is received, the stream pushing process ends, completing one full video push streaming cycle.
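A minimal sketch of driving FFmpeg from the push streaming module; for brevity it pushes raw video frames only and omits audio muxing, and the exact invocation is an assumption rather than the patented command line:

```python
import subprocess

def start_ffmpeg_push(rtmp_url: str, width: int, height: int, fps: int = 25):
    """Launch FFmpeg on the first data packet.

    Raw RGB frames are then written to proc.stdin packet by packet;
    when the end packet arrives, closing stdin finishes the push.
    """
    return subprocess.Popen(
        ["ffmpeg",
         "-f", "rawvideo", "-pix_fmt", "rgb24",   # raw image frames
         "-s", f"{width}x{height}", "-r", str(fps),
         "-i", "-",                               # read frames from stdin
         "-f", "flv", rtmp_url],                  # push to the play address
        stdin=subprocess.PIPE)
```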
The processing performed by the text-to-speech request module, the three-dimensional rendering module, and the video push streaming module is very time-consuming. If it were implemented serially, the data processing duration would exceed the video generation duration and live virtual video would be impossible. The three modules therefore run independently and cooperate in parallel: as long as the processing time of each module is less than the video duration, the client only needs to wait for a fixed delay after sending its request for real-time live virtual video to be realized. The fixed delay equals the time taken for the first data acquired by the text-to-speech request module to reach the video push streaming module and be pushed successfully.
Continuing with the exemplary structure of the artificial-intelligence-based live broadcast device 255 implemented as software modules provided by embodiments of the present invention, in some embodiments, as shown in fig. 2, the software modules stored in the artificial-intelligence-based live broadcast device 255 of the memory 250 may include: a text-to-speech request module 2551, configured to receive a given text for a virtual anchor performance and obtain in real time, according to the given text, the audio data and facial features corresponding to the virtual anchor to form at least one media data packet; a rendering module 2552, configured to perform special effect rendering on the facial features in the media data packet in real time to obtain the image frame set corresponding to the virtual anchor, and to combine it with the audio data to form the push stream data packet corresponding to the media data packet; and a video stream pushing module 2553, configured to extract the image frame set and the audio data from the push stream data packet in real time and push them to the client, so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set.
In some embodiments, the text-to-speech request module 2551 is further configured to: when the given text is received, converting the given text into a word vector corresponding to the given text in real time; carrying out encoding processing and decoding processing on the word vector to obtain audio features corresponding to the word vector; and synthesizing the audio features to obtain audio data corresponding to the virtual anchor.
In some embodiments, the text-to-speech request module 2551 is further configured to: predicting a mouth key point of the virtual anchor according to the audio data corresponding to the given text, and carrying out normalization processing on the mouth key point so as to enable the mouth key point to be adaptive to a standard face template; performing dimension reduction processing on the mouth key points subjected to normalization processing to obtain mouth shape features corresponding to the virtual anchor; performing semantic analysis on the given text to obtain the semantic meaning represented by the given text; and determining facial expression characteristics matched with the semantics according to the semantics represented by the given text, and combining the mouth shape characteristics and the facial expression characteristics to form facial characteristics corresponding to the virtual anchor.
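Under the assumption that normalization means removing translation and scale relative to the standard face template, and that the dimension reduction is a pre-fitted linear projection such as PCA (neither is specified by the patent), the mouth-shape feature extraction might be sketched as:

```python
import numpy as np

def mouth_shape_features(mouth_keypoints: np.ndarray,
                         template_keypoints: np.ndarray,
                         projection: np.ndarray) -> np.ndarray:
    """Normalize predicted mouth keypoints to a standard face template,
    then reduce dimensionality to obtain mouth-shape features.

    mouth_keypoints / template_keypoints: shape (n_points, 2).
    projection: hypothetical pre-fitted linear projection (e.g. PCA
    components) of shape (feature_dim, n_points * 2).
    """
    # Normalize: remove translation and scale so the keypoints fit the
    # standard face template's coordinate frame.
    centered = mouth_keypoints - mouth_keypoints.mean(axis=0)
    scaled = centered / (np.linalg.norm(centered) + 1e-8)
    template_centered = template_keypoints - template_keypoints.mean(axis=0)
    aligned = scaled * np.linalg.norm(template_centered) + template_keypoints.mean(axis=0)
    # Dimension reduction to the final mouth-shape feature vector.
    return projection @ aligned.flatten()
```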
In some embodiments, the text-to-speech request module 2551 is further configured to: sending a query transaction to a blockchain network, wherein the query transaction indicates an intelligent contract for querying a ledger in the blockchain network, so that a consensus node in the blockchain network queries the ledger by executing the intelligent contract, and the standard face template stored in the ledger is obtained; or according to the identification of the standard face template, inquiring the standard face template corresponding to the identification from a standard face template database, and determining the hash value of the inquired standard face template; and inquiring a hash value corresponding to the identifier from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired standard face template is not tampered.
In some embodiments, the text-to-speech request module 2551 is further configured to: sending the given text to a blockchain network so that a consensus node in the blockchain network performs compliance check on the given text by executing an intelligent contract; and when compliance confirmation returned by more than a preset number of consensus nodes is received, determining that the given text passes the compliance check.
In some embodiments, the text-to-speech request module 2551 is further configured to: dividing the given text into at least two language segments, and dividing the audio data into audio data respectively matched with the at least two language segments on the basis of the at least two language segments; classifying the facial features based on the at least two language segments to obtain facial features respectively matched with the at least two language segments; combining the facial features and the audio data corresponding to the same language segment to obtain a media data packet corresponding to the language segment.
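A trivial sketch of the packet assembly step, pairing each language segment with its matched audio and facial features; the dictionary layout is illustrative only:

```python
def build_media_packets(segments, audio_per_segment, features_per_segment):
    """Combine the facial features and audio data that belong to the same
    language segment into one media data packet per segment."""
    return [
        {"segment": seg, "audio": audio, "facial_features": feats}
        for seg, audio, feats in zip(segments, audio_per_segment,
                                     features_per_segment)
    ]
```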
In some embodiments, the rendering module 2552 is further configured to: extracting audio data in the media data packet, and determining the product of the playing duration of the audio data in the media data packet and a frame rate parameter as the number of image frames corresponding to the audio data; extracting facial features in the media data packet, and performing special effect rendering processing on the facial features to obtain a facial image comprising the facial features; synthesizing the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combining a plurality of synthesized image frames into an image frame set corresponding to the audio data; wherein the number of image frames in the set of image frames is the number of image frames.
In some embodiments, the rendering module 2552 is further configured to: converting the facial features into feature vectors corresponding to the facial features; extracting the feature vectors with the same number as the image frames from the feature vectors corresponding to the facial features; and carrying out special effect rendering processing based on the extracted feature vectors to obtain the face images with the same number as the image frames.
In some embodiments, the rendering module 2552 is further configured to: analyzing the media data packet to obtain audio data contained in the media data packet, and determining a speech segment corresponding to the audio data; extracting a background image corresponding to the semantics based on the semantics represented by the language segments; the background image comprises a model, an action and a live scene of the virtual anchor; and synthesizing the facial image and the background image to obtain an image frame of the virtual anchor, wherein the virtual anchor is in the live scene and has the action and corresponding facial expression and mouth shape in the facial image.
In some embodiments, the rendering module 2552 is further configured to: sending a query transaction to a blockchain network, wherein the query transaction indicates an intelligent contract for querying an account book in the blockchain network and a query parameter corresponding to the semantics, so that a consensus node in the blockchain network queries the account book by executing the intelligent contract to obtain a background image in the account book, wherein the background image meets the query parameter; or inquiring a background image corresponding to the semantics from a background image database according to the semantics, and determining a hash value of the inquired background image; and inquiring the hash value corresponding to the semantics from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired background image is not tampered.
In some embodiments, the rendering module 2552 is further configured to: determining a phoneme order characterized by the audio data; determining the sequence of the image frames containing the mouth shape features based on the phoneme sequence so that the mouth shape change characterized by the sequence of the image frames is matched with the phoneme sequence.
In some embodiments, the artificial intelligence based live device 255 further comprises: the streaming media service module 2554 is configured to respond to a live broadcast request sent by a client, allocate a streaming media play address to the live broadcast request, and return the streaming media play address to the client; the video plug-streaming module 2553 is further configured to: and pushing the image frame set and the audio data to a streaming media interface corresponding to the streaming media playing address, so that the client pulls the image frame set and the audio data based on the streaming media playing address, the image frames in the image frame set are presented in real time through the playing interface of the client, and the audio data is synchronously played.
Embodiments of the present invention provide a storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform methods provided by embodiments of the present invention, for example, artificial intelligence based live broadcast methods as shown in fig. 3A-3D.
In some embodiments, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the artificial-intelligence-based live broadcast method provided by the embodiment of the present invention uses parallel processing to solve the problem of broadcasting virtual video live in real time despite the great computing power needed to acquire the audio and video data. It effectively enhances the real-time performance of virtual video live broadcast, improves the fluency of the live video, promotes the development of artificial intelligence in live broadcast services, frees human anchors from live broadcasting, and reduces labor costs.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (14)

1. An artificial intelligence based live broadcasting method is characterized by comprising the following steps:
receiving a given text for a virtual anchor to perform, and acquiring audio data and facial features corresponding to the virtual anchor in real time according to the given text to form at least one media data packet;
performing special effect rendering processing on the facial features in the media data packet to obtain a facial image comprising the facial features;
synthesizing the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combining a plurality of synthesized image frames into an image frame set corresponding to the audio data to form a push stream data packet corresponding to the media data packet in combination with the audio data;
wherein, the number of the image frames in the image frame set is the product of the playing duration of the audio data in the media data packet and the frame rate parameter;
extracting the image frame set and the audio data in the push stream data packet in real time and pushing them to a client, so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set.
2. The method of claim 1, wherein the obtaining audio data corresponding to the virtual anchor in real-time according to the given text comprises:
when the given text is received, converting the given text into a word vector corresponding to the given text in real time;
carrying out encoding processing and decoding processing on the word vector to obtain audio features corresponding to the word vector;
and synthesizing the audio features to obtain audio data corresponding to the virtual anchor.
3. The method of claim 1, wherein the obtaining facial features corresponding to the virtual anchor in real-time from the given text comprises:
predicting a mouth key point of the virtual anchor according to the audio data corresponding to the given text, and carrying out normalization processing on the mouth key point so as to enable the mouth key point to be adaptive to a standard face template;
performing dimension reduction processing on the mouth key points subjected to normalization processing to obtain mouth shape features corresponding to the virtual anchor;
performing semantic analysis on the given text to obtain the semantic meaning represented by the given text;
and determining facial expression characteristics matched with the semantics according to the semantics represented by the given text, and combining the mouth shape characteristics and the facial expression characteristics to form facial characteristics corresponding to the virtual anchor.
4. The method of claim 3, wherein prior to normalizing the mouth keypoints, the method further comprises:
sending a query transaction to a blockchain network, wherein the query transaction indicates a smart contract for querying a ledger in the blockchain network, such that a consensus node in the blockchain network queries the ledger by executing the smart contract to obtain the standard face template stored in the ledger; or
According to the identification of the standard face template, inquiring the standard face template corresponding to the identification from a standard face template database, and determining the hash value of the inquired standard face template;
and inquiring a hash value corresponding to the identifier from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired standard face template is not tampered.
5. The method of claim 1, wherein prior to forming at least one media data packet, the method further comprises:
sending the given text to a blockchain network, such that a consensus node in the blockchain network performs a compliance check on the given text by executing an intelligent contract;
and when compliance confirmation returned by more than a preset number of consensus nodes is received, determining that the given text passes the compliance check.
6. The method of claim 1, wherein forming at least one media data packet comprises:
dividing the given text into at least two language segments, and dividing the audio data into audio data respectively matched with the at least two language segments on the basis of the at least two language segments;
classifying the facial features based on the at least two language segments to obtain facial features respectively matched with the at least two language segments;
combining the facial features and the audio data corresponding to the same language segment to obtain a media data packet corresponding to the language segment.
7. The method of claim 1, wherein performing special effects rendering on the facial features in the media data packet to obtain a facial image including the facial features comprises:
extracting facial features in the media data packet in real time, and converting the facial features into feature vectors corresponding to the facial features;
extracting the feature vectors with the same number as the image frames from the feature vectors corresponding to the facial features;
and carrying out special effect rendering processing based on the extracted feature vectors to obtain the face images with the same number as the image frames.
8. The method according to claim 1, wherein the synthesizing the face image and the background image comprises:
analyzing the media data packet to obtain audio data contained in the media data packet, and determining a speech segment corresponding to the audio data;
extracting a background image corresponding to the semantics based on the semantics represented by the language segments;
the background image comprises a model, an action and a live scene of the virtual anchor;
and synthesizing the facial image and the background image to obtain an image frame of the virtual anchor, wherein the virtual anchor is in the live scene and has the action and corresponding facial expression and mouth shape in the facial image.
9. The method of claim 8, wherein the extracting the background image corresponding to the semantic meaning comprises:
sending a query transaction to a blockchain network, wherein the query transaction indicates an intelligent contract for querying a ledger in the blockchain network and query parameters corresponding to the semantics, such that a consensus node in the blockchain network queries the ledger by executing the intelligent contract to obtain a background image in the ledger that meets the query parameters; or
Inquiring a background image corresponding to the semantics from a background image database according to the semantics, and determining a hash value of the inquired background image;
and inquiring the hash value corresponding to the semantics from the blockchain network, and when the inquired hash value is consistent with the determined hash value, determining that the inquired background image is not tampered.
10. The method of claim 8, further comprising:
determining a phoneme order characterized by the audio data;
determining the sequence of the image frames containing the mouth shape features based on the phoneme sequence so that the mouth shape change characterized by the sequence of the image frames is matched with the phoneme sequence.
11. The method according to any one of claims 1-10, further comprising:
responding to a live broadcast request sent by a client, distributing a streaming media playing address for the live broadcast request, and returning the streaming media playing address to the client;
the real-time pushing to a client so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set comprises the following steps:
and pushing the image frame set and the audio data to a streaming media interface corresponding to the streaming media playing address, so that the client pulls the image frame set and the audio data based on the streaming media playing address, the image frames in the image frame set are presented in real time through the playing interface of the client, and the audio data is synchronously played.
12. A live device based on artificial intelligence, the device comprising:
the system comprises a text-to-speech request module, a speech recognition module and a speech recognition module, wherein the text-to-speech request module is used for receiving a given text for a virtual anchor to perform, and acquiring audio data and facial features corresponding to the virtual anchor in real time according to the given text to form at least one media data packet;
the rendering module is used for carrying out special effect rendering processing on the facial features in the media data packet to obtain a facial image comprising the facial features; synthesizing the face image and the background image to obtain an image frame corresponding to the virtual anchor, and combining a plurality of synthesized image frames into an image frame set corresponding to the audio data to form a push stream data packet corresponding to the media data packet in combination with the audio data; wherein, the number of the image frames in the image frame set is the product of the playing duration of the audio data in the media data packet and the frame rate parameter;
a video stream pushing module, configured to extract the image frame set and the audio data in the push stream data packet in real time and push them to the client, so that the client presents the image frames of the virtual anchor and the corresponding audio data in real time according to the received image frame set.
13. A live device based on artificial intelligence, the device comprising:
a memory for storing executable instructions;
a processor configured to implement the artificial intelligence based live method of any one of claims 1 to 11 when executing executable instructions stored in the memory.
14. A storage medium having stored thereon executable instructions for causing a processor to, when executed, implement the artificial intelligence based live method of any one of claims 1 to 11.
CN201911319847.5A 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence Active CN111010586B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911319847.5A CN111010586B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence
CN202110184746.2A CN112995706B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911319847.5A CN111010586B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110184746.2A Division CN112995706B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN111010586A CN111010586A (en) 2020-04-14
CN111010586B true CN111010586B (en) 2021-03-19

Family

ID=70116920

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911319847.5A Active CN111010586B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence
CN202110184746.2A Active CN112995706B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110184746.2A Active CN112995706B (en) 2019-12-19 2019-12-19 Live broadcast method, device, equipment and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (2) CN111010586B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540055B (en) * 2020-04-16 2024-03-08 广州虎牙科技有限公司 Three-dimensional model driving method, three-dimensional model driving device, electronic equipment and storage medium
CN113691833B (en) * 2020-05-18 2023-02-03 北京搜狗科技发展有限公司 Virtual anchor face changing method and device, electronic equipment and storage medium
CN111787417B (en) * 2020-06-23 2024-05-17 刘叶 Audio and video transmission control method based on artificial intelligence AI and related equipment
CN111970521B (en) * 2020-07-16 2022-03-11 深圳追一科技有限公司 Live broadcast method and device of virtual anchor, computer equipment and storage medium
CN114513678A (en) * 2020-11-16 2022-05-17 阿里巴巴集团控股有限公司 Face information generation method and device
CN112543342B (en) * 2020-11-26 2023-03-14 腾讯科技(深圳)有限公司 Virtual video live broadcast processing method and device, storage medium and electronic equipment
CN112601100A (en) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium
CN112616063B (en) * 2020-12-11 2022-10-28 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium
CN114630135A (en) * 2020-12-11 2022-06-14 北京字跳网络技术有限公司 Live broadcast interaction method and device
CN113095206A (en) * 2021-04-07 2021-07-09 广州华多网络科技有限公司 Virtual anchor generation method and device and terminal equipment
CN113094698B (en) * 2021-04-21 2022-05-24 杭州天宽科技有限公司 Authority management method in android application virtualization environment
CN113194350B (en) * 2021-04-30 2022-08-19 百度在线网络技术(北京)有限公司 Method and device for pushing data to be broadcasted and method and device for broadcasting data
CN113923462A (en) * 2021-09-10 2022-01-11 阿里巴巴达摩院(杭州)科技有限公司 Video generation method, live broadcast processing method, video generation device, live broadcast processing device and readable medium
CN113891150B (en) * 2021-09-24 2024-10-11 北京搜狗科技发展有限公司 Video processing method, device and medium
CN113825009B (en) * 2021-10-29 2024-06-04 平安国际智慧城市科技股份有限公司 Audio and video playing method and device, electronic equipment and storage medium
CN114363652A (en) * 2022-01-04 2022-04-15 阿里巴巴(中国)有限公司 Video live broadcast method, system and computer storage medium
CN114610158B (en) * 2022-03-25 2024-09-27 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and storage medium
CN115002491A (en) * 2022-04-26 2022-09-02 未鲲(上海)科技服务有限公司 Network live broadcast method, device, equipment and storage medium based on intelligent machine
CN115426536B (en) * 2022-11-02 2023-01-20 北京优幕科技有限责任公司 Audio and video generation method and device
CN116389777A (en) * 2023-03-10 2023-07-04 启朔(深圳)科技有限公司 Cloud digital person live broadcasting method, cloud device, anchor terminal device and system
CN116095357B (en) * 2023-04-07 2023-07-04 世优(北京)科技有限公司 Live broadcasting method, device and system of virtual anchor
CN116668796B (en) * 2023-07-03 2024-01-23 佛山市炫新智能科技有限公司 Interactive artificial live broadcast information management system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106653052A (en) * 2016-12-29 2017-05-10 Tcl集团股份有限公司 Virtual human face animation generation method and device
CN109118562A (en) * 2018-08-31 2019-01-01 百度在线网络技术(北京)有限公司 Explanation video creating method, device and the terminal of virtual image
CN109637518A (en) * 2018-11-07 2019-04-16 北京搜狗科技发展有限公司 Virtual newscaster's implementation method and device
CN110365996A (en) * 2019-07-25 2019-10-22 深圳市元征科技股份有限公司 Management method, live streaming management platform, electronic equipment and storage medium is broadcast live
CN110505223A (en) * 2019-08-15 2019-11-26 腾讯科技(深圳)有限公司 Block chain multichain management method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8467443B2 (en) * 2005-04-01 2013-06-18 Korea Electronics Technology Institute Object priority order compositor for MPEG-4 player
US11249714B2 (en) * 2017-09-13 2022-02-15 Magical Technologies, Llc Systems and methods of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment
CN107682328A (en) * 2017-09-26 2018-02-09 武汉斗鱼网络科技有限公司 A kind of data verification method and client
CN107920256B (en) * 2017-11-30 2020-01-10 广州酷狗计算机科技有限公司 Live broadcast data playing method and device and storage medium
CN108833810A (en) * 2018-06-21 2018-11-16 珠海金山网络游戏科技有限公司 The method and device of subtitle is generated in a kind of live streaming of three-dimensional idol in real time

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106653052A (en) * 2016-12-29 2017-05-10 Tcl集团股份有限公司 Virtual human face animation generation method and device
CN109118562A (en) * 2018-08-31 2019-01-01 百度在线网络技术(北京)有限公司 Explanation video creating method, device and the terminal of virtual image
CN109637518A (en) * 2018-11-07 2019-04-16 北京搜狗科技发展有限公司 Virtual newscaster's implementation method and device
CN110365996A (en) * 2019-07-25 2019-10-22 深圳市元征科技股份有限公司 Management method, live streaming management platform, electronic equipment and storage medium is broadcast live
CN110505223A (en) * 2019-08-15 2019-11-26 腾讯科技(深圳)有限公司 Block chain multichain management method and device

Also Published As

Publication number Publication date
CN111010586A (en) 2020-04-14
CN112995706A (en) 2021-06-18
CN112995706B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN111010586B (en) Live broadcast method, device, equipment and storage medium based on artificial intelligence
CN111010589B (en) Live broadcast method, device, equipment and storage medium based on artificial intelligence
WO2022166709A1 (en) Virtual video live broadcast processing method and apparatus, and storage medium and electronic device
JP7479750B2 (en) Virtual video live broadcast processing method and device, electronic device
JP2020034895A (en) Responding method and device
CN107979763B (en) Virtual reality equipment video generation and playing method, device and system
CN113392201A (en) Information interaction method, information interaction device, electronic equipment, medium and program product
CN112333179A (en) Live broadcast method, device and equipment of virtual video and readable storage medium
US20230252785A1 (en) Video processing
CN111626049B (en) Title correction method and device for multimedia information, electronic equipment and storage medium
WO2021227707A1 (en) Audio synthesis method and apparatus, computer readable medium, and electronic device
CN113923462A (en) Video generation method, live broadcast processing method, video generation device, live broadcast processing device and readable medium
CN116756285A (en) Virtual robot interaction method, device and storage medium
CN114286154A (en) Subtitle processing method and device for multimedia file, electronic equipment and storage medium
CN116737883A (en) Man-machine interaction method, device, equipment and storage medium
CN113282791B (en) Video generation method and device
CN117292022A (en) Video generation method and device based on virtual object and electronic equipment
CN109241331B (en) Intelligent robot-oriented story data processing method
CN115690277A (en) Video generation method, system, device, electronic equipment and computer storage medium
CN116561294A (en) Sign language video generation method and device, computer equipment and storage medium
CN111966803A (en) Dialogue simulation method, dialogue simulation device, storage medium and electronic equipment
CN112380871A (en) Semantic recognition method, apparatus, and medium
CN118377882B (en) Accompanying intelligent dialogue method and electronic equipment
CN116843805B (en) Method, device, equipment and medium for generating virtual image containing behaviors
CN114827635B (en) Live cover processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40022538; Country of ref document: HK)
GR01 Patent grant
GR01 Patent grant