CN115514989B - Data transmission method, system and storage medium - Google Patents

Data transmission method, system and storage medium

Info

Publication number
CN115514989B
Authority
CN
China
Prior art keywords
video
terminal
processed
server
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210981927.2A
Other languages
Chinese (zh)
Other versions
CN115514989A (en)
Inventor
关巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Can See Beijing Technology Co ltd AS
Original Assignee
You Can See Beijing Technology Co ltd AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by You Can See Beijing Technology Co ltd AS filed Critical You Can See Beijing Technology Co ltd AS
Priority to CN202210981927.2A priority Critical patent/CN115514989B/en
Publication of CN115514989A publication Critical patent/CN115514989A/en
Application granted granted Critical
Publication of CN115514989B publication Critical patent/CN115514989B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides a data transmission method, system and storage medium, applied to a data transmission system comprising a terminal and a server. The terminal acquires an original video and separates the character data in the original video from the background data; the terminal extracts the character data to obtain a video to be synthesized and sends the video to be synthesized to the server; the server synthesizes the video to be synthesized with a preset background to obtain a video to be processed and sends it to the terminal; the terminal then synthesizes the video to be processed with pre-acquired audio to be processed to obtain the live audio and video. In this way the anchor does not need to process the live video locally: the video to be synthesized is sent to the server for synthesis, so the cost of live broadcasting can be reduced while the viewing experience of the users watching the broadcast is preserved.

Description

Data transmission method, system and storage medium
Technical Field
The present invention relates to the field of data transmission technologies, and in particular, to a data transmission method, system, and storage medium.
Background
With the development of network technology, live broadcasting has become one of the common scenes of leisure and interaction. During a live broadcast, the viewers watching the broadcast can see the anchor and the actual background behind the anchor in the live broadcast interface.
In order to enhance the viewing experience, the actual background can be replaced with a virtual background. At present, an anchor can broadcast in a green-screen mode, that is, the actual background during the broadcast is a green screen; after the live video is processed by an electronic device equipped with a GPU (Graphics Processing Unit), a video containing the anchor and the virtual background can be presented to the viewers. Green-screen broadcasting saves the time and cost of arranging a real scene.
However, realizing green-screen broadcasting currently requires the anchor to deploy a plurality of electronic devices to process the live video, and these devices must meet high performance requirements; in other words, the cost of realizing green-screen broadcasting is high.
Disclosure of Invention
The invention provides a data transmission method, system and storage medium to overcome the defect in the prior art that realizing green-screen broadcasting is costly, and to reduce the cost of live broadcasting while preserving the viewing experience of users watching the broadcast.
The invention provides a data transmission method, applied to a data transmission system, wherein the data transmission system comprises a terminal and a server;
the method comprises the following steps:
the terminal acquires an original video and separates character data from background data in the original video;
the terminal extracts the character data to obtain a video to be synthesized;
the terminal sends the video to be synthesized to the server;
the server synthesizes the video to be synthesized with a preset background to obtain a video to be processed and sends the video to the terminal;
and the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain live broadcast audio and video.
Optionally, the terminal includes a first terminal and a second terminal;
before the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain the live audio and video, the method further comprises the following steps:
the first terminal sends audio to be processed to the server;
and under the condition that the server sends the video to be processed to the second terminal, the server sends the audio to be processed to the second terminal.
Optionally, the terminal includes a first terminal and a second terminal;
before the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain the live audio and video, the method further comprises the following steps:
the first terminal sends the audio to be processed to the second terminal.
Optionally, the terminal includes a first terminal and a second terminal;
the step in which the terminal sends the video to be synthesized to the server comprises:
and the first terminal sends the video to be synthesized to the server based on a preset first push address.
Optionally, the step in which the server synthesizes the video to be synthesized with a preset background to obtain a video to be processed and sends it to the terminal comprises:
the server acquires the video to be synthesized based on a first pull address corresponding to the first push address;
and, based on a preset second push address, the server sends to the second terminal the video to be processed obtained by synthesizing the video to be synthesized with the preset background, wherein the server stores the correspondence between the first push address and the first pull address.
Optionally, after the video to be processed obtained by synthesizing the video to be synthesized with the preset background has been sent based on the preset second push address, the method further comprises:
the second terminal acquires the video to be processed based on a second pull address corresponding to the second push address, wherein the second terminal stores the correspondence between the second push address and the second pull address.
Optionally, after obtaining the live audio and video, the method further includes:
and the terminal receives the information to be processed and adds the information to be processed to the live audio and video.
The invention also provides a data transmission system, which comprises: a terminal and a server;
the terminal is used for acquiring an original video and separating character data from background data in the original video; extracting the character data to obtain a video to be synthesized; sending the video to be synthesized to the server;
the server is used for synthesizing the video to be synthesized with a preset background to obtain a video to be processed and sending the video to the terminal;
the terminal is also used for synthesizing the video to be processed and the audio to be processed which is acquired in advance to obtain live broadcast audio and video.
Optionally, the terminal includes a first terminal and a second terminal;
the first terminal is used for sending the audio to be processed to the server before the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain the live audio and video;
the server is used for sending the audio to be processed to the second terminal under the condition that the video to be processed is sent to the second terminal.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the data transmission methods described above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a data transmission method as described in any of the above.
The invention provides a data transmission method, system and storage medium, applied to a data transmission system comprising a terminal and a server. The terminal acquires an original video and separates the character data in the original video from the background data; the terminal extracts the character data to obtain a video to be synthesized and sends the video to be synthesized to the server; the server synthesizes the video to be synthesized with a preset background to obtain a video to be processed and sends it to the terminal; the terminal then synthesizes the video to be processed with pre-acquired audio to be processed to obtain the live audio and video. In this way a plurality of electronic devices are no longer required to process the live video: the video to be synthesized is sent to the server for synthesis, so the cost of live broadcasting can be reduced while the viewing experience of the users watching the broadcast is preserved.
Drawings
In order to illustrate the technical solutions of the invention or of the prior art more clearly, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a first schematic flow chart of a data transmission method according to the present invention;
Fig. 2 is a second schematic flow chart of a data transmission method according to the present invention;
Fig. 3 is a third schematic flow chart of a data transmission method according to the present invention;
Fig. 4 is a fourth schematic flow chart of a data transmission method according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to reduce the cost of green-screen live broadcasting while preserving the viewing experience of users watching the broadcast, the invention provides a data transmission method, a data transmission system and a non-transitory computer-readable storage medium. A data transmission method of the present invention is described below with reference to fig. 1.
As shown in fig. 1, the present invention discloses a first data transmission method, which is applied to a data transmission system including a terminal and a server. The method comprises the following steps.
s101, the terminal acquires an original video and separates character data in the original video from background data.
When a live user needs to broadcast, the video may be captured by an image acquisition device of the terminal itself, so that the terminal obtains the original video directly; alternatively, a separate image acquisition device may capture the video and transmit it to the terminal, so that the terminal obtains the original video, which is equally reasonable. The terminal may be a computer, a tablet computer, a mobile phone, etc., which is not particularly limited herein.
After the original video is obtained, the terminal can separate the character data in the original video from the background data, wherein the character data is data corresponding to the live user in the original video, and the background data is data corresponding to the actual background (namely, a green screen) of the live user in the original video.
In one embodiment, the character data in the original video may be separated from the background data using image processing software installed on the terminal. For example, vmix (a video mixer) installed on the terminal can separate the character data from the green-screen background to obtain the character data, which carries alpha information; the alpha information is an opacity parameter used later when the character data is synthesized with the preset background.
S102, the terminal extracts the character data to obtain a video to be synthesized.
After separating the character data from the background data in the original video, the terminal can extract the character data and take the video containing only the character data as the video to be synthesized. For example, after vmix separates the character data from the background data, the video containing only the character data may be saved, thereby obtaining the video to be synthesized.
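For illustration only (this is not part of the patent and not the vmix implementation), the following Python/NumPy sketch shows one way character pixels can be keyed out of a green-screen frame while an alpha matte is retained; the green-dominance threshold and the BGR input layout are assumptions made for the example.

```python
import numpy as np

def extract_character(frame_bgr: np.ndarray, green_margin: int = 40) -> np.ndarray:
    """Return an RGBA frame whose alpha channel is 0 over the green backdrop."""
    b = frame_bgr[..., 0].astype(np.int32)
    g = frame_bgr[..., 1].astype(np.int32)
    r = frame_bgr[..., 2].astype(np.int32)
    # Treat a pixel as background when green clearly dominates both red and blue.
    background = (g - np.maximum(r, b)) > green_margin
    alpha = np.where(background, 0, 255).astype(np.uint8)
    # Pack R, G, B and the alpha matte into a single RGBA frame (one frame of the video to be synthesized).
    return np.dstack([frame_bgr[..., 2], frame_bgr[..., 1], frame_bgr[..., 0], alpha])
```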
S103, the terminal sends the video to be synthesized to the server.
S104, the server synthesizes the video to be synthesized with a preset background, obtains the video to be processed and sends the video to the terminal.
After the video to be synthesized is obtained, in order to obtain the video synthesized by the video to be synthesized and the preset background, the terminal can send the video to be synthesized to the server, so that the server can synthesize the video to be synthesized and the preset background to obtain the video to be processed. Wherein the server is a device comprising a GPU.
In one embodiment, the video to be synthesized may be synthesized with the preset background using software installed on the server; for example, the synthesis may be performed by ue4 (Unreal Engine 4) installed on the server.
The preset background can be a background preset by the live user according to the requirements of the live content. For example, in a live sales scene, promotional material corresponding to a commodity can be used as the preset background, so that users watching the broadcast can learn about the commodity more intuitively; this improves their viewing experience and can help increase sales of the commodity.
After the server obtains the video to be processed, it can send the video to be processed to the terminal. In one embodiment, the server may return the video to be processed to the terminal that sent the video to be synthesized; that is, the terminal sends the video to be synthesized to the server, and the server returns the video to be processed to the same terminal. In another embodiment, the server may send the video to be processed to another preset terminal. Both are reasonable.
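As a minimal sketch of the compositing step, assuming the character layer is the RGBA output of the keying example above and the preset background is an RGB image of the same size (in the patent this composition is performed inside ue4 on the server, not by a function like this):

```python
import numpy as np

def composite(character_rgba: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Blend the extracted character layer over the preset background using its alpha channel."""
    alpha = character_rgba[..., 3:4].astype(np.float32) / 255.0
    character = character_rgba[..., :3].astype(np.float32)
    blended = alpha * character + (1.0 - alpha) * background_rgb.astype(np.float32)
    return blended.astype(np.uint8)   # one frame of the video to be processed
```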
S105, the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance, and a live broadcast audio and video is obtained.
After receiving the video to be processed sent by the server, the terminal can synthesize the video to be processed with the pre-acquired audio to be processed, thereby obtaining the live audio and video. In one embodiment, the terminal may align the time axes of the video to be processed and the audio to be processed so that the mouth movements of the anchor match the content of the audio, allowing users watching the broadcast to see live audio and video in which sound and picture are synchronized.
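A minimal sketch of such time-axis alignment, assuming every video frame and audio chunk carries a presentation timestamp in milliseconds; the 40 ms tolerance is an assumption, not a value taken from the patent:

```python
def align_av(video_frames, audio_chunks, tolerance_ms=40):
    """video_frames and audio_chunks are lists of (pts_ms, payload) sorted by timestamp;
    returns (video_payload, audio_payload) pairs whose timestamps lie within the tolerance."""
    if not audio_chunks:
        return []
    pairs, ai = [], 0
    for vpts, vframe in video_frames:
        # Advance to the audio chunk whose timestamp is closest to this video frame.
        while ai + 1 < len(audio_chunks) and \
                abs(audio_chunks[ai + 1][0] - vpts) <= abs(audio_chunks[ai][0] - vpts):
            ai += 1
        apts, achunk = audio_chunks[ai]
        if abs(apts - vpts) <= tolerance_ms:
            pairs.append((vframe, achunk))
    return pairs
```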
In an embodiment, in order to further improve the experience of users watching the broadcast, the terminal may receive information to be processed and add it to the live audio and video, where the information to be processed may be input to the terminal by the user according to actual requirements. The information to be processed may include titles, advertisements and the like; this is reasonable and is not particularly limited herein.
In this way, the terminal can add information such as titles and advertisements to the live audio and video and then push the live audio and video with the added information to each live platform, so that users watching the broadcast can see the anchor, the preset background and the added information.
According to the above scheme, a plurality of electronic devices are no longer required to process the live video; the video to be synthesized is sent to the server for synthesis, so the cost of green-screen broadcasting is reduced while the viewing experience of users watching the broadcast is preserved. The live user does not need to set up equipment containing a GPU, which lowers the purchase cost, reduces on-site debugging time and shortens the live-access cycle. Because the synthesis can be performed by ue4 installed on the server, engineers responsible for exception handling, for updating the ue4 application and the model data, and for operation, maintenance and management are needed only on the server side; a dedicated engineer no longer has to be assigned to each anchor, which reduces manpower. By deploying a server containing a GPU, the strong dependence of green-screen recording and live access on multi-machine deployment within the same network is decoupled, so a live user can conveniently access the broadcast from any place through a terminal. The server containing the GPU can also support broadcasting needs in different time periods, which further reduces cost and improves competitiveness.
As an embodiment of the present invention, the above-mentioned terminals may include a first terminal and a second terminal, where the first terminal and the second terminal may be devices that do not include a GPU, for example, the first terminal and the second terminal may be a computer, a tablet computer, a mobile phone, etc., which are not limited herein specifically.
Before the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain the live audio and video, the method can further comprise the following steps:
the first terminal sends the audio to be processed to the second terminal, so that the second terminal can acquire the audio to be processed. And the second terminal can execute the steps of synthesizing the video to be processed and the pre-acquired audio to be processed after receiving the video to be processed sent by the server, so as to obtain the live broadcast audio and video.
Or the first terminal can send the audio to be processed to the server, and the server can send the audio to be processed to the second terminal under the condition that the server sends the video to be processed to the second terminal, so that the second terminal can execute the steps of synthesizing the video to be processed and the audio to be processed acquired in advance to obtain the live broadcast audio and video.
In this way, when the server fails to synthesize the video to be synthesized with the preset background, the audio to be processed is not sent to the second terminal, which avoids the situation in which viewers hear only the sound without seeing the picture; this further improves the viewing experience of users watching the broadcast and also facilitates management of the video to be processed and the audio to be processed.
As an embodiment of the present invention, before the step of sending the audio to be processed to the server by the first terminal, the method may further include:
the first terminal acquires the original audio, wherein the first terminal can acquire the original audio through the voice acquisition equipment of the first terminal, and can also acquire the audio through the voice acquisition equipment, and then the voice acquisition equipment sends the acquired audio to the first terminal, so that the first terminal can acquire the original audio. This is reasonable.
In order to obtain audio that meets a preset frequency condition, the first terminal resamples the original audio to obtain the audio to be processed; that is, the audio to be processed is audio that meets the preset frequency condition. The preset frequency condition may be set by the anchor according to actual requirements, and the resampling may be an up-sampling or a down-sampling. In this way, audio meeting the preset frequency condition is obtained, so that the first terminal can subsequently transmit the audio to be processed and users watching the broadcast can follow the live stream.
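A minimal sketch of the resampling step, assuming a mono signal, linear interpolation and a 48 kHz preset frequency condition; a production system would normally use a dedicated resampler:

```python
import numpy as np

def resample(samples: np.ndarray, src_rate: int, dst_rate: int) -> np.ndarray:
    """Resample a mono signal from src_rate to dst_rate by linear interpolation."""
    dst_len = int(round(len(samples) * dst_rate / src_rate))
    src_t = np.arange(len(samples)) / src_rate
    dst_t = np.arange(dst_len) / dst_rate
    return np.interp(dst_t, src_t, samples)

# e.g. bring 44.1 kHz microphone audio up to an assumed 48 kHz preset frequency condition
audio_to_be_processed = resample(np.zeros(44100, dtype=np.float32), 44100, 48000)
```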
As an embodiment of the present invention, the terminal may include a first terminal and a second terminal.
The sending, by the terminal, the video to be synthesized to the server may include:
and the first terminal sends the video to be synthesized to a server based on a preset first push address so that a subsequent server can acquire the video to be synthesized. In one embodiment, a live user may input a first push address at the first terminal, so that the first terminal may obtain the first push address, and further, the first terminal may send the video to be synthesized based on the preset first push address.
As an embodiment of the present invention, the synthesizing, by the server, the video to be synthesized with a preset background, to obtain a video to be processed, and sending the video to the terminal may include:
the server acquires the video to be synthesized based on a first streaming address corresponding to the first streaming address, and sends the video to be processed obtained by synthesizing the video to be synthesized and a preset background to the second terminal based on a preset second streaming address. The server stores the corresponding relation between the first push address and the first pull address.
As an embodiment of the present invention, after the video to be processed obtained by synthesizing the video to be synthesized with the preset background has been sent based on the preset second push address, the method may further include:
and the second terminal acquires the video to be processed based on a second streaming address corresponding to the second streaming address, wherein the second terminal stores the corresponding relation between the second streaming address and the second streaming address.
The live broadcast user can input the first push address into the first terminal, so that the first terminal can acquire the first push address, and further, the first terminal can send the video to be synthesized based on the preset first push address.
For example, the live user may initialize an obs (Open Broadcaster Software) output plug-in and enter the first related parameter of trtc (Tencent Real-Time Communication), namely the first push address, in obs; obs then starts pushing the stream.
Further, as shown in fig. 2, the first terminal may perform the steps of:
s201, the first terminal acquires a first related parameter of trtc, wherein the first terminal can enter a corresponding first room based on the first related parameter.
S202, the first terminal judges whether it has entered the first room. If not, meaning the first terminal has not entered the first room, return to step S201; if so, meaning the first terminal has entered the first room, perform step S203.
If the first terminal fails to enter the first room, the first related parameter is incorrect, so the first terminal can display a message indicating the failure to enter the first room as an error report, allowing the user to subsequently input the correct first related parameter.
S203, the first terminal judges whether the acquired data is original audio; if it is the original audio, step S204 is performed. If not the original audio, step S205 is performed.
After entering the first room, the first terminal can collect the corresponding data through the image acquisition device and the audio acquisition device, or the first terminal can receive data sent by the image acquisition device and the audio acquisition device; both are reasonable.
S204, the first terminal carries out resampling processing on the original audio, and sends the audio to be processed to the server through the first room.
When the first terminal acquires audio, that is, when the first terminal acquires the original audio, it can resample the audio and then send the resulting audio to be processed to the server through the first room, i.e. based on the preset first push address.
S205, the first terminal separates character data from background data in the original video, extracts the character data to obtain a video to be synthesized, and sends the video to be synthesized to the server through the first room.
When video is acquired, that is, when the first terminal acquires the original video, the character data in the original video can be separated from the background data and extracted to obtain the video to be synthesized, which is then sent to the server through the first room, i.e. based on the preset first push address.
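A minimal sketch of the fig. 2 flow; the room joining, capture and push operations are injected as hypothetical callables that stand in for the trtc and obs plug-in behaviour and are not real APIs of those tools:

```python
def first_terminal_loop(enter_room, capture, push, resample_audio, extract_character,
                        trtc_params, target_rate=48000):
    """Run the first terminal's capture-and-push loop (fig. 2, steps S201-S205)."""
    room = None
    while room is None:                       # S201/S202: retry until the first room is joined
        room = enter_room(trtc_params)        # assumed to return None and report an error on failure
    while True:
        item = capture()                      # S203: audio or video from the capture devices
        if item.kind == "audio":              # S204: resample, then push through the first room
            push(room, resample_audio(item.data, item.rate, target_rate))
        else:                                 # S205: key out the character data, then push
            push(room, extract_character(item.data))
```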
The server can acquire the video to be synthesized sent by the terminal based on a first pull address corresponding to the first push address, where the server stores the correspondence between the first push address and the first pull address. Then, based on a preset second push address, the server sends the video to be processed obtained by synthesizing the video to be synthesized with the preset background.
For example, the user corresponding to the server may initialize the ue4 plug-in and enter the second related parameter of trtc, namely the first pull address corresponding to the first push address, into the ue4 installed on the server, after which the ue4 installed on the server starts receiving the stream. The user corresponding to the server may also enter the third related parameter of trtc, namely the second push address, into the ue4 installed on the server, after which the ue4 installed on the server starts pushing the stream.
Further, as shown in fig. 3, the server may perform the steps of:
s301, the server acquires relevant parameters of the trtc, wherein the relevant parameters of the trtc acquired by the server comprise second relevant parameters and third relevant parameters, the server can enter a corresponding first room based on the second relevant parameters, and the server can enter a corresponding second room based on the third relevant parameters.
S302, the server judges whether it has entered the first room and the second room. If not, meaning the server has not entered the first room and/or the second room, return to step S301; if so, meaning the server has entered both the first room and the second room, perform step S303.
When the server has not entered the first room and/or the second room, the second related parameter or the third related parameter is incorrect, so the server can display a message indicating the failure to enter as an error report, allowing the user to subsequently input the correct related parameter of trtc corresponding to that failure message.
S303, the server judges whether the data sent by the first terminal is audio to be processed. If it is audio to be processed, step S304 is performed. If it is not audio to be processed, step S305 is performed.
After entering the first room and the second room, the server may register an audio and video receive callback for the data received through the first room, so that the received data is handled by the receive callback.
S304, the server sends the audio to be processed to the second room.
In the case where the server receives the audio to be processed through the first room, and in the case where the video to be processed is transmitted to the second room, the server may transmit the audio to be processed to the second room.
S305, the server synthesizes the video to be synthesized with the preset background to obtain the video to be processed, and sends the video to be processed to the second room.
When the server receives the video to be synthesized through the first room, it can copy the video frame of the video to be synthesized and send the copied frame to the ue4 video processing thread; after receiving the copied frame, the ue4 video processing thread uses a callback function to synthesize it with the preset background and stores the result, thereby obtaining the video to be processed, which the server then sends to the second room.
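A minimal sketch of the fig. 3 flow under the same assumptions; the receive, send and composite callables are hypothetical stand-ins, and in the patent the actual composition runs inside ue4 on the server:

```python
def server_loop(receive_from_room1, send_to_room2, composite_with_background):
    """Run the server's relay-and-composite loop (fig. 3, steps S303-S305)."""
    while True:
        item = receive_from_room1()            # delivered by the registered audio/video receive callback
        if item.kind == "audio":               # S304: relay the audio to the second room unchanged
            send_to_room2(item)
        else:                                  # S305: copy the frame, composite it, push the result
            frame_copy = item.data.copy()      # the copy is what gets handed to the video processing thread
            send_to_room2(composite_with_background(frame_copy))
```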
The second terminal may acquire the video to be processed based on a second pull address corresponding to the second push address, where the second terminal stores the correspondence between the second push address and the second pull address.
For example, the live user may initialize the obs source plug-in and enter the fourth related parameter of trtc, namely the second pull address corresponding to the second push address, in obs, so as to start receiving the stream.
Further, as shown in fig. 4, the second terminal may perform the steps of:
s401, the second terminal acquires a fourth related parameter of trtc, wherein the second terminal can enter a corresponding second room based on the fourth related parameter.
S402, the second terminal judges whether it has entered the second room. If not, meaning the second terminal has not entered the second room, return to step S401; if so, meaning the second terminal has entered the second room, perform step S403.
If the second terminal fails to enter the second room, the fourth related parameter is incorrect, so the second terminal can display a message indicating the failure to enter the second room as an error report, allowing the user to subsequently input the correct fourth related parameter.
S403, the second terminal judges whether the data sent by the server is audio to be processed; if it is audio to be processed, step S404 is performed. If it is not audio to be processed, step S405 is performed.
And S404, the second terminal sends the audio to be processed to the obs core.
And S405, the second terminal sends the video to be processed to the obs core.
After the second terminal enters the second room, it can send the received video to be processed and audio to be processed to the obs core, and the obs core can synthesize the video to be processed with the pre-acquired audio to be processed to obtain the live audio and video. obs can then deliver the live audio and video to a live broadcast tool in the form of a virtual camera, so that users watching the broadcast can follow the live stream.
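A minimal sketch of the fig. 4 flow, with the obs core modelled as a pair of queues; the callables and the queue-based core are hypothetical stand-ins rather than the real obs interfaces:

```python
import queue

def second_terminal_loop(receive_from_room2, core_audio: queue.Queue, core_video: queue.Queue):
    """Hand everything pulled from the second room to the local mixing core (fig. 4, S403-S405)."""
    while True:
        item = receive_from_room2()   # S403: decide whether the payload is audio or video
        if item.kind == "audio":
            core_audio.put(item)      # S404: forward the audio to be processed to the mixing core
        else:
            core_video.put(item)      # S405: forward the video to be processed to the mixing core
```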
Currently, obs only supports sending video over rtmp (Real-Time Messaging Protocol), an audio and video transport protocol built on tcp (Transmission Control Protocol). The delay of transmitting video in this way is too high, and the delay for 4k video transmission in particular cannot satisfy practical use. In addition, ue4 has no built-in ability to interface with a cloud audio and video service. To address these problems, the invention sends the audio and video from obs using a private cloud audio and video protocol carried over udp (User Datagram Protocol), which reduces the delay, lowers the transmission delay for 4k video and meets the usage requirements. The invention also adds cloud audio and video sending and receiving capability inside ue4, so that ue4 can process the video stream data in the cloud.
Specifically, an sdk (Software Development Kit) is integrated into ue4, so that ue4 can quickly and efficiently acquire the audio to be processed and the video to be processed, which facilitates the subsequent live broadcast.
As an implementation, the server may be multiplexed among the terminals according to a preset rule, for example by time-sharing multiplexing or lease-based multiplexing, so that a live user can quickly access the rendering and composition capability provided by an expensive graphics card without deploying equipment containing a GPU; this reduces the cost of live broadcasting, and because the server serves the terminals according to the preset rule, the operating cost of the server is also reduced.
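A minimal sketch of one possible time-sharing rule; the round-robin policy and the half-hour slot length are assumptions, since the patent only requires that some preset multiplexing rule be applied:

```python
from itertools import cycle, islice

def time_share(terminals, slot_seconds=1800):
    """Yield (terminal, start_offset_seconds) pairs for a round-robin rota of GPU rendering slots."""
    for i, terminal in enumerate(cycle(terminals)):
        yield terminal, i * slot_seconds

# e.g. the first six half-hour slots shared among three anchors
rota = list(islice(time_share(["anchor-a", "anchor-b", "anchor-c"]), 6))
```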
The data transmission system provided by the invention is described below, and the data transmission system described below and a data transmission method described above can be referred to correspondingly.
The invention discloses a data transmission system, which can comprise a terminal and a server, wherein the terminal is used for acquiring an original video and separating character data from background data in the original video; extracting the character data to obtain a video to be synthesized; sending the video to be synthesized to the server; and after receiving the video to be processed sent by the server, synthesizing the video to be processed and the audio to be processed which is acquired in advance to obtain live broadcast audio and video.
The server is used for receiving the video to be synthesized sent by the terminal; synthesizing the video to be synthesized with a preset background to obtain the video to be processed; and sending the video to be processed to the terminal.
As an embodiment of the present invention, the terminal may include a first terminal and a second terminal.
The first terminal is configured to send the audio to be processed to the server before the terminal synthesizes the video to be processed and the audio to be processed acquired in advance to obtain the live audio and video.
And the server is used for sending the audio to be processed to the second terminal under the condition that the video to be processed is sent to the second terminal.
As an embodiment of the present invention, the terminal may include a first terminal and a second terminal;
and the first terminal is used for sending the audio to be processed to the second terminal before the terminal synthesizes the video to be processed and the audio to be processed which is acquired in advance to obtain the live broadcast audio and video.
As an embodiment of the present invention, the terminal may include a first terminal and a second terminal;
the first terminal is specifically configured to send the video to be synthesized to the server based on a preset first push address.
As an embodiment of the present invention, the server is specifically configured to acquire the video to be synthesized based on a first pull address corresponding to the first push address, and to send, based on a preset second push address, the video to be processed obtained by synthesizing the video to be synthesized with the preset background to the second terminal.
The server stores the correspondence between the first push address and the first pull address.
As an embodiment of the present invention, the second terminal is specifically configured to acquire the video to be processed based on a second pull address corresponding to the second push address, after the video to be processed obtained by synthesizing the video to be synthesized with the preset background has been sent based on the preset second push address.
The second terminal stores the correspondence between the second push address and the second pull address.
The terminal is further used for receiving information to be processed after the live audio and video is obtained, and adding the information to be processed to the live audio and video.
In another aspect, the present invention also provides a computer program product comprising a computer program storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing a data transmission method as provided by the methods described above.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform a data transmission method provided by the above methods.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A data transmission method, characterized by being applied to a data transmission system, the data transmission system comprising: a terminal and a server; the terminal comprises a first terminal and a second terminal;
the method comprises the following steps:
the first terminal acquires a first related parameter of trtc and enters a first room based on the first related parameter;
the first terminal acquires an original video and separates character data from background data in the original video;
the first terminal extracts the character data to obtain a video to be synthesized;
the first terminal sends the video to be synthesized to the server through the first room;
the server acquires a second related parameter and a third related parameter of trtc, the server enters the first room based on the second related parameter, the server also enters a corresponding second room based on the third related parameter, the server receives the video to be synthesized through the first room, synthesizes the video to be synthesized with a preset background, obtains a video to be processed, and sends the video to be processed to the second room;
the second terminal obtains a fourth related parameter of trtc, enters the second room based on the fourth related parameter, receives the video to be processed through the second room, synthesizes the video to be processed and pre-obtained audio to be processed, and obtains live broadcast audio and video;
the first related parameter is used for representing a first push address of the trtc, the second related parameter is used for representing a first pull address of the trtc, the first push address corresponds to the first pull address, and the correspondence between the first push address and the first pull address is stored in the server;
the third related parameter is used for representing a second push address of the trtc, the fourth related parameter is used for representing a second pull address of the trtc, the second push address corresponds to the second pull address, and a corresponding relation between the second push address and the second pull address is stored in the second terminal.
2. The method of claim 1, wherein prior to said synthesizing the video to be processed and the pre-acquired audio to be processed to obtain a live audio video, the method further comprises:
the first terminal sends audio to be processed to the server;
and under the condition that the server sends the video to be processed to the second terminal, the server sends the audio to be processed to the second terminal.
3. The method of claim 1, wherein prior to said synthesizing the video to be processed and the pre-acquired audio to be processed to obtain a live audio video, the method further comprises:
the first terminal sends the audio to be processed to the second terminal.
4. A method according to any one of claims 1-3, wherein after obtaining the live audio-video, the method further comprises:
and the terminal receives the information to be processed and adds the information to be processed to the live audio and video.
5. A data transmission system, the data transmission system comprising: a terminal and a server; the terminal comprises a first terminal and a second terminal;
the first terminal is used for acquiring a first related parameter of trtc and entering a first room based on the first related parameter; acquiring an original video, and separating character data from background data in the original video; extracting the character data to obtain a video to be synthesized; sending the video to be synthesized to the server through the first room;
the server is used for acquiring a second related parameter and a third related parameter of trtc, entering the first room based on the second related parameter, entering a corresponding second room based on the third related parameter, receiving the video to be synthesized through the first room, synthesizing the video to be synthesized with a preset background, obtaining a video to be processed, and sending the video to be processed to the second room;
the second terminal is used for acquiring a fourth related parameter of trtc, entering the second room based on the fourth related parameter, receiving the video to be processed through the second room, synthesizing the video to be processed and pre-acquired audio to be processed, and acquiring live broadcast audio and video;
the first related parameter is used for representing a first push address of the trtc, the second related parameter is used for representing a first pull address of the trtc, the first push address corresponds to the first pull address, and the correspondence between the first push address and the first pull address is stored in the server;
the third related parameter is used for representing a second push address of the trtc, the fourth related parameter is used for representing a second pull address of the trtc, the second push address corresponds to the second pull address, and a corresponding relation between the second push address and the second pull address is stored in the second terminal.
6. The system of claim 5, wherein the first terminal is further configured to send audio to be processed to the server;
the server is used for sending the audio to be processed to the second terminal under the condition that the video to be processed is sent to the second terminal.
7. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the data transmission method according to any one of claims 1 to 4.
CN202210981927.2A 2022-08-16 2022-08-16 Data transmission method, system and storage medium Active CN115514989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210981927.2A CN115514989B (en) 2022-08-16 2022-08-16 Data transmission method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210981927.2A CN115514989B (en) 2022-08-16 2022-08-16 Data transmission method, system and storage medium

Publications (2)

Publication Number Publication Date
CN115514989A CN115514989A (en) 2022-12-23
CN115514989B true CN115514989B (en) 2024-04-09

Family

ID=84502811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210981927.2A Active CN115514989B (en) 2022-08-16 2022-08-16 Data transmission method, system and storage medium

Country Status (1)

Country Link
CN (1) CN115514989B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116614650A (en) * 2023-06-16 2023-08-18 上海随幻智能科技有限公司 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10260808A (en) * 1997-03-19 1998-09-29 Agency Of Ind Science & Technol Video display system, and presence improving method in the system
CN1411277A (en) * 2001-09-26 2003-04-16 Lg电子株式会社 Video-frequency communication system
CN107197139A (en) * 2017-04-13 2017-09-22 深圳电航空技术有限公司 The data processing method of panorama camera
CN110166794A (en) * 2018-04-26 2019-08-23 腾讯科技(深圳)有限公司 A kind of live audio processing method, apparatus and system
WO2020160563A1 (en) * 2019-01-22 2020-08-06 MGM Resorts International Operations, Inc. Systems and methods for customizing and compositing a video feed at a client device
CN112291238A (en) * 2020-10-29 2021-01-29 腾讯科技(深圳)有限公司 Data communication method, device, equipment and computer readable storage medium
CN112637614A (en) * 2020-11-27 2021-04-09 深圳市创成微电子有限公司 Network live broadcast audio and video processing method, processor, device and readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10260808A (en) * 1997-03-19 1998-09-29 Agency Of Ind Science & Technol Video display system, and presence improving method in the system
CN1411277A (en) * 2001-09-26 2003-04-16 Lg电子株式会社 Video-frequency communication system
CN107197139A (en) * 2017-04-13 2017-09-22 深圳电航空技术有限公司 The data processing method of panorama camera
CN110166794A (en) * 2018-04-26 2019-08-23 腾讯科技(深圳)有限公司 A kind of live audio processing method, apparatus and system
WO2020160563A1 (en) * 2019-01-22 2020-08-06 MGM Resorts International Operations, Inc. Systems and methods for customizing and compositing a video feed at a client device
CN112291238A (en) * 2020-10-29 2021-01-29 腾讯科技(深圳)有限公司 Data communication method, device, equipment and computer readable storage medium
WO2022089183A1 (en) * 2020-10-29 2022-05-05 腾讯科技(深圳)有限公司 Data communication method and apparatus, and device, storage medium and computer program product
CN112637614A (en) * 2020-11-27 2021-04-09 深圳市创成微电子有限公司 Network live broadcast audio and video processing method, processor, device and readable storage medium

Also Published As

Publication number Publication date
CN115514989A (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN105991962B (en) Connection method, information display method, device and system
WO2019205886A1 (en) Method and apparatus for pushing subtitle data, subtitle display method and apparatus, device and medium
CN110784730B (en) Live video data transmission method, device, equipment and storage medium
EP3562163A1 (en) Audio-video synthesis method and system
CN111010614A (en) Method, device, server and medium for displaying live caption
CN112738540B (en) Multi-device live broadcast switching method, device, system, electronic device and readable storage medium
CN112752114B (en) Method and device for generating live broadcast playback interactive message, server and storage medium
CN112203106B (en) Live broadcast teaching method and device, computer equipment and storage medium
CN111163330A (en) Live video rendering method, device, system, equipment and storage medium
CN112135155B (en) Audio and video connecting and converging method and device, electronic equipment and storage medium
CN115514989B (en) Data transmission method, system and storage medium
CN103841466A (en) Screen projection method, computer end and mobile terminal
CN112929681A (en) Video stream image rendering method and device, computer equipment and storage medium
CN113286190A (en) Cross-network and same-screen control method and device and cross-network and same-screen system
CN108174264B (en) Synchronous lyric display method, system, device, medium and equipment
CN111432284A (en) Bullet screen interaction method of multimedia terminal and multimedia terminal
CN113259762B (en) Audio processing method and device, electronic equipment and computer readable storage medium
CN109842524B (en) Automatic upgrading method and device, electronic equipment and computer readable storage medium
CN111629223A (en) Video synchronization method and device, computer readable storage medium and electronic device
CN111107301A (en) Video conference platform and communication method based on video conference platform
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
CN112565799B (en) Video data processing method and device
CN108933769B (en) Streaming media screenshot system, method and device
CN112738446B (en) Simultaneous interpretation method and system based on online conference
CN111918092B (en) Video stream processing method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant