CN106060655B - Video processing method, server and terminal - Google Patents

Video processing method, server and terminal

Info

Publication number
CN106060655B
CN106060655B
Authority
CN
China
Prior art keywords
special effect
subdata
data
terminal
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610639190.0A
Other languages
Chinese (zh)
Other versions
CN106060655A (en)
Inventor
王颖琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610639190.0A priority Critical patent/CN106060655B/en
Publication of CN106060655A publication Critical patent/CN106060655A/en
Priority to PCT/CN2017/095338 priority patent/WO2018024179A1/en
Application granted granted Critical
Publication of CN106060655B publication Critical patent/CN106060655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782Web browsing, e.g. WebTV

Abstract

The invention discloses a video processing method, a server, and a terminal. The method comprises the following steps: receiving first data sent by a terminal; analyzing the first data, and extracting first subdata and second subdata from the first data, wherein the first subdata is used for representing special effect rendering parameters, and the second subdata is used for representing original video data; rendering and generating target video data according to the first subdata and the second subdata; and sending the target video data to the terminal so that the terminal can display the target video data.

Description

Video processing method, server and terminal
Technical Field
The present invention relates to video processing technologies, and in particular, to a video processing method, a server, and a terminal.
Background
When social sharing is performed through the Internet, pictures, text, short videos, and the like can be shared. Among these, short videos are strongly sought after because they are simple to produce and widely entertaining. The most important feature of short videos is the special effects function, which makes an ordinary, plain video more magical and dramatic, for example by adding a magic special effect to a short video.
To satisfy the entertainment needs of diverse users, a wide variety of special effects and template materials must be created. However, because of the limited storage capacity and processing chip of a terminal (such as a mobile phone), accessing the template materials and rendering the special effects locally on the terminal is very slow, so rich and varied special effects cannot be produced quickly enough to meet user demands.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a video processing method, a server, and a terminal.
The video processing method provided by the embodiment of the invention is applied to a server and comprises the following steps:
receiving first data sent by a terminal;
analyzing the first data, and extracting first subdata and second subdata from the first data, wherein the first subdata is used for representing special effect rendering parameters, and the second subdata is used for representing original video data;
rendering and generating target video data according to the first subdata and the second subdata;
and sending the target video data to the terminal so that the terminal can display the target video data.
In this embodiment of the present invention, the rendering and generating target video data according to the first subdata and the second subdata includes:
searching a special effect template file corresponding to the special effect rendering parameter, and dividing the original video data into a plurality of original video subdata;
rendering the special effect template file and the original video subdata in parallel to generate a plurality of target video subdata;
and merging the plurality of target video subdata into target video data.
In this embodiment of the present invention, the analyzing the first data and extracting first sub data and second sub data from the first data includes:
analyzing the first data to obtain a packet head part and a packet body part of the first data;
extracting the first subdata from the packet head part, and extracting the second subdata from the packet body part.
Another embodiment of the present invention provides a video processing method applied to a terminal, including:
acquiring original video data and acquiring input special effect rendering parameters;
packaging first subdata for representing the special effect rendering parameters and second subdata for representing the original video data in first data;
sending the first data to a server;
receiving target video data which are sent by the server and generated according to the first data rendering;
and displaying the target video data.
In an embodiment of the present invention, the acquiring of the original video data includes:
and separating the video data and the audio data in the video file to obtain original video data and original audio data.
In the embodiment of the present invention, the method further includes:
and when the target video data is displayed, playing the original audio data.
The server provided by the embodiment of the invention comprises:
the receiving unit is used for receiving first data sent by a terminal;
the analysis unit is used for analyzing the first data and extracting first subdata and second subdata from the first data, wherein the first subdata is used for representing special effect rendering parameters, and the second subdata is used for representing original video data;
the rendering unit is used for rendering and generating target video data according to the first subdata and the second subdata;
and the sending unit is used for sending the target video data to the terminal so as to display the target video data by the terminal.
In the embodiment of the present invention, the server further includes:
the storage unit is used for storing the special effect template file;
the rendering unit includes:
the searching subunit is used for searching a special effect template file corresponding to the special effect rendering parameter;
a dividing subunit, configured to divide the original video data into a plurality of original video sub-data;
a parallel rendering subunit, configured to perform parallel rendering on the special effect template file and the plurality of original video subdata, and generate a plurality of target video subdata;
and the merging subunit is used for merging the plurality of target video sub-data into target video data.
In an embodiment of the present invention, the parsing unit includes:
the acquisition subunit is used for analyzing the first data to obtain a packet head part and a packet body part of the first data;
and the extraction subunit is used for extracting the first subdata from the packet head part, and extracting the second subdata from the packet body part.
The terminal provided by the embodiment of the invention comprises:
the acquisition unit is used for acquiring original video data;
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring input special effect rendering parameters;
the packaging unit is used for packaging first subdata used for representing the special effect rendering parameters and second subdata used for representing the original video data into first data;
a sending unit, configured to send the first data to a server;
a receiving unit, configured to receive target video data sent by the server and generated according to the first data rendering;
and the display unit is used for displaying the target video data.
In the embodiment of the present invention, the acquisition unit is further configured to separate video data and audio data in the video file to obtain original video data and original audio data.
In the embodiment of the present invention, the terminal further includes:
and the audio playing unit is used for playing the original audio data when the display unit displays the target video data.
In the technical scheme of the embodiment of the invention, a terminal acquires original video data and acquires input special effect rendering parameters; and packaging first subdata used for representing the special effect rendering parameters and second subdata used for representing the original video data in the first data and sending the first subdata and the second subdata to a server. The server receives first data sent by the terminal; analyzing the first data, and extracting first subdata and second subdata from the first data, wherein the first subdata is used for representing special effect rendering parameters, and the second subdata is used for representing original video data; rendering and generating target video data according to the first subdata and the second subdata; and sending the target video data to the terminal so that the terminal can display the target video data. Therefore, the terminal delivers the rendering process with higher processing complexity to the server for processing, the rendering processing speed is greatly improved, and then various special effects can be created to meet the requirements of users.
Drawings
FIG. 1 is a diagram of hardware entities performing information interaction in an embodiment of the present invention;
FIG. 2 is a first flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 3 is a second flowchart illustrating a video processing method according to an embodiment of the invention;
FIG. 4 is a diagram illustrating a combination of special effects structures according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a CPU and GPU combined rendering according to an embodiment of the present invention;
FIG. 6 is a diagram of a terminal server network interaction architecture according to an embodiment of the present invention;
FIG. 7 is a third flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
So that the manner in which the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
Abundant and diverse video resources are presented on Internet video websites and in video applications (APPs), among which short videos receive great attention for their immediacy, shareability, entertainment value, and the like. The biggest characteristic of a short video is that special effects are blended into the actually shot or produced video pictures. For example, the background of a certain movie is merged into a short video, so that the characters in the video become the leading roles of the movie. For another example, a magic effect is blended into a short video, so that a person in the video appears to have superpowers. When a special effect is added to a short video, the video pictures need to be re-rendered. If rendering is performed locally on the terminal, the limited processing capability of the terminal makes rendering slow; to overcome this shortage of terminal hardware resources, the embodiments of the present invention provide a scheme for rendering short videos on a server.
Fig. 1 is a schematic diagram of the hardware entities performing information interaction in an embodiment of the present invention. Fig. 1 includes a terminal 11 and a server 12, where the terminal 11 exchanges information with the server 12 through a wired or wireless network. The terminal 11 may be a mobile phone, a desktop computer, a PC, an all-in-one machine, or the like. In one example, the terminal 11 sends video data to be rendered or to have a special effect added to the server 12; the server 12 renders the video data or adds the special effect, and then sends the processed video data back to the terminal 11 for display.
The above example of fig. 1 is only an example of a system architecture for implementing the embodiment of the present invention, and the embodiment of the present invention is not limited to the system architecture described in the above fig. 1, and various embodiments of the present invention are proposed based on the system architecture.
Fig. 2 is a first flowchart of a video processing method according to an embodiment of the present invention, where the video processing method in this example is applied to a server side, as shown in fig. 2, the video processing method includes the following steps:
step 201: and receiving first data sent by the terminal.
In the embodiment of the invention, the terminal can be a mobile phone, a tablet computer, a notebook computer and other equipment.
In one embodiment, the terminal is equipped with a short video APP with which videos of relatively short duration can be captured; these are referred to herein as short videos, e.g., 10-second or 60-second videos. The user can add a special effect to the short video; to do so, the terminal first needs to send the video data and the special effect rendering parameter for adding the special effect to the server.
In another embodiment, the terminal has a video editing APP with which a short clip can be cut out of a video file; this short clip is referred to as a short video. The user can add a special effect to the short video; to do so, the terminal first needs to send the video data and the special effect rendering parameter for adding the special effect to the server.
In the embodiment of the invention, after receiving a rendering request sent by the terminal, the server determines that special effect rendering needs to be performed on the original video data of the terminal, and then continues to receive the first data sent by the terminal. Here, the first data carries the following information: the special effect rendering parameters and the original video data.
In the embodiment of the invention, the communication between the server and the terminal is based on the Hypertext Transfer Protocol (HTTP). Accordingly, the first data takes the form of an HTTP packet.
Step 202: and analyzing the first data, and extracting first subdata and second subdata from the first data, wherein the first subdata is used for representing special effect rendering parameters, and the second subdata is used for representing original video data.
In the embodiment of the invention, the format of the first data is an HTTP data packet. The first data is analyzed to obtain a packet head part and a packet body part of the first data, where the packet head part is the HTTP Header and the packet body part is the HTTP Body. The first subdata is extracted from the packet head part, and the second subdata is extracted from the packet body part.
In the embodiment of the invention, the first subdata is used for representing the special effect rendering parameter, where the special effect rendering parameter refers to a special effect identifier through which the special effect to be added can be uniquely determined. Each special effect corresponds to a special effect identifier; when the user selects a special effect to add on the terminal, the special effect identifier corresponding to that special effect is packaged into the first data and sent to the server.
In this embodiment of the present invention, the second subdata is used to represent the original video data. Here, the original video data refers to the video data before any special effect is added; this video data is captured by the terminal, packaged into the first data, and sent to the server. In one embodiment, the original video data is in mp4 format.
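For illustration only, the following is a minimal sketch (not part of the patent) of a server-side handler that extracts the two subdata from such an HTTP packet. The header name X-Effect-Params, the port, and the render stub are assumptions introduced for the example.

```python
# Illustrative sketch only: the header name "X-Effect-Params" and the
# render() stub are assumptions, not specified by the patent.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def render(effect_params: dict, original_video: bytes) -> bytes:
    """Placeholder for the template lookup and parallel rendering pipeline."""
    return original_video

class RenderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # First subdata: special effect rendering parameters from the header part.
        effect_params = json.loads(self.headers.get("X-Effect-Params", "{}"))
        # Second subdata: original video data (mp4 bytes) from the body part.
        length = int(self.headers.get("Content-Length", "0"))
        original_video = self.rfile.read(length)
        target_video = render(effect_params, original_video)
        # Return the target video data to the terminal for display.
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Content-Length", str(len(target_video)))
        self.end_headers()
        self.wfile.write(target_video)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RenderHandler).serve_forever()
```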
Step 203: and rendering and generating target video data according to the first subdata and the second subdata.
In the embodiment of the invention, the server analyzes the first subdata to obtain the special effect rendering parameter. Then, the server searches for the special effect template file corresponding to the special effect rendering parameter. The special effect template files are stored on the server side; because the server side has abundant storage resources, a wide variety of special effect templates can be stored there. Each special effect template is implemented by one special effect template file, which comprises a series of special effect commands and special effect parameters required to display the special effect. Referring to fig. 4, the rendering of a special effect falls into two major categories: video rendering and text rendering. Video rendering includes: affine transformation, mirror inversion, alpha gradient, and the like. Text rendering includes: translation transformation, bloom transformation, blur transformation, rotation transformation, and the like. Different text may correspond to different text renderings.
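As an illustration of what such a template file might contain, here is a hypothetical sketch; the structure, field names, and values are assumptions, since the patent does not specify the template format.

```python
# Hypothetical special effect template: a series of effect commands and
# parameters, split into the video rendering and text rendering categories
# named above. All field names are illustrative assumptions.
effect_template = {
    "effect_id": "magic_001",  # matched against the special effect identifier
    "video_rendering": [
        {"command": "affine_transform", "matrix": [[1, 0, 0], [0, 1, 0]]},
        {"command": "mirror_inversion", "axis": "horizontal"},
        {"command": "alpha_gradient", "start": 1.0, "end": 0.3},
    ],
    "text_rendering": [
        {"command": "translation", "dx": 10, "dy": 0},
        {"command": "bloom", "intensity": 0.6},
        {"command": "blur", "radius": 2},
        {"command": "rotation", "degrees": 15},
    ],
}
```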
In the embodiment of the invention, the server analyzes the second subdata to obtain the original video data. Then, the server divides the original video data into a plurality of original video subdata. As shown in fig. 5, assume the original video data includes 12 video frames, numbered 01 through 12 and arranged in time order. The 12 frames are divided into 4 original video subdata of 3 frames each: the first original video subdata comprises frames 01, 02, and 03; the second comprises frames 04, 05, and 06; the third comprises frames 07, 08, and 09; and the fourth comprises frames 10, 11, and 12. Because the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) in the server have multiple cores, the special effect template file and the plurality of original video subdata can be rendered in parallel across the CPU cores and GPU cores to generate a plurality of target video subdata, which are then merged into the target video data. Referring to fig. 5, the four original video subdata are distributed to 4 CPU cores (CPU1, CPU2, CPU3, and CPU4) for parallel processing. Each CPU core may be paired with a group of GPUs; for example, CPU1 corresponds to GPU01 through GPU12, and the video data is rendered in parallel, pixel by pixel, by GPU01 through GPU12. The four groups of GPUs finally render 4 target video subdata, which are then combined to obtain the target video data. In the embodiment of the invention, the parallel computing capability of the processors greatly increases the special effect rendering speed, bringing users an experience of super real-time rendering, previewing, and sharing.
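A rough sketch of this divide/render-in-parallel/merge scheme is shown below; it assumes frames are available as in-memory objects, and render_frame is a stand-in for the real CPU/GPU effect pipeline.

```python
# Minimal sketch of the parallel rendering scheme of Fig. 5; the frame
# representation and render_frame are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame, effect_template):
    # Stand-in: apply the template's effect commands/parameters to one frame.
    return frame

def render_chunk(args):
    chunk, effect_template = args
    return [render_frame(f, effect_template) for f in chunk]

def render_video(frames, effect_template, workers=4):
    # Divide the original video data into several original video subdata.
    size = (len(frames) + workers - 1) // workers
    chunks = [frames[i:i + size] for i in range(0, len(frames), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        rendered = list(pool.map(render_chunk, [(c, effect_template) for c in chunks]))
    # Merge the target video subdata back into target video data, in order.
    return [f for chunk in rendered for f in chunk]

if __name__ == "__main__":
    frames = list(range(1, 13))  # 12 frames, as in the Fig. 5 example
    target = render_video(frames, {"effect_id": "magic_001"}, workers=4)
```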
Step 204: and sending the target video data to the terminal so that the terminal can display the target video data.
In the embodiment of the invention, the server sends the target video data to the terminal over HTTP so that the terminal can display the target video data. In one embodiment, the server encodes the target video data into a file in mp4 format, and the target video data in mp4 format is transmitted to the terminal for display.
The technical solution of the embodiment of the invention solves the problem of slow special effect rendering caused by limited hardware resources and realizes super real-time rendering, previewing, and sharing of short video files. A large number of special effect templates no longer need to be downloaded to local storage, which saves storage resources. In addition, use of the local chip is reduced and the complex rendering is handed to the server for processing, so the user's video editing experience is smoother and more convenient, and sharing is faster.
Fig. 3 is a second flowchart of a video processing method according to an embodiment of the present invention, where the video processing method in this example is applied to a terminal, and as shown in fig. 3, the video processing method includes the following steps:
step 301: acquiring original video data and acquiring input special effect rendering parameters.
In the embodiment of the invention, the terminal can be a mobile phone, a tablet computer, a notebook computer and other equipment.
In one embodiment, the terminal is equipped with a short video APP with which shorter time videos can be captured, which are referred to herein as short videos, e.g., 10 seconds videos, 60 seconds videos, etc. The user can add a special effect in the short video, and for this reason, the terminal needs to send video data and a special effect rendering parameter for adding the special effect to the server at first.
In another embodiment, the terminal has a video editing class APP, and a short section of video can be intercepted from a video file by using the video editing class APP, where the short section of video is referred to as a short video. The user can add a special effect in the short video, and for this reason, the terminal needs to send video data and a special effect rendering parameter for adding the special effect to the server at first.
In the embodiment of the invention, the video file acquired by the terminal generally comprises video data and audio data. Here, the video data and the audio data in the video file are separated to obtain original video data and original audio data.
In the embodiment of the present invention, the original video data obtained after separation needs to be converted into mp4 format, where mp4 is the standard format used for the subsequent rendering processing.
In the embodiment of the invention, the special effect rendering parameter refers to a special effect identifier, and the special effect to be added can be uniquely determined through the special effect identifier. Each special effect corresponds to a special effect identifier, and when a user selects a special effect to be added in the terminal, the special effect identifier corresponding to the special effect is packaged in the first data and is sent to the server.
Step 302: and encapsulating first subdata for representing the special effect rendering parameters and second subdata for representing the original video data in the first data.
In the embodiment of the invention, the communication between the server and the terminal is based on HTTP. Based on this, the format of the first data is an HTTP packet.
In this embodiment of the present invention, encapsulating the first subdata used for characterizing the special effect rendering parameters and the second subdata used for characterizing the original video data in the first data includes: packaging the first subdata used for representing the special effect rendering parameters in the packet head part of the first data, and packaging the second subdata used for representing the original video data in the packet body part of the first data. The packet head part is the HTTP Header, and the packet body part is the HTTP Body.
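On the terminal side, the encapsulation could look like the following sketch; the requests library, the URL argument, and the X-Effect-Params header name are assumptions carried over from the earlier server-side example.

```python
# Terminal-side sketch of packaging the first data; the header name and
# URL are illustrative assumptions.
import json
import requests

def send_first_data(server_url: str, effect_id: str, mp4_bytes: bytes) -> bytes:
    headers = {
        # First subdata (special effect rendering parameters) in the header part.
        "X-Effect-Params": json.dumps({"effect_id": effect_id}),
        "Content-Type": "video/mp4",
    }
    # Second subdata (original video data) in the body part, sent via HTTP POST.
    resp = requests.post(server_url, headers=headers, data=mp4_bytes)
    resp.raise_for_status()
    return resp.content  # target video data (mp4) rendered by the server
```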
Step 303: and sending the first data to a server.
In the embodiment of the invention, the terminal sends the first data to the server through HTTP POST, and the server receives the first data sent by the terminal through HTTP GET.
Then, the server renders and generates the target video data according to the first data. Specifically, the server analyzes the first subdata to obtain the special effect rendering parameter and searches for the corresponding special effect template file. The special effect template files are stored on the server side; since the server side has abundant storage resources, a wide variety of special effect templates can be stored there, each implemented by one special effect template file comprising a series of special effect commands and special effect parameters required to display the special effect. The server analyzes the second subdata to obtain the original video data, divides it into a plurality of original video subdata, renders the special effect template file and the plurality of original video subdata in parallel through the multi-core CPU and the multi-core GPU to generate a plurality of target video subdata, and then merges the plurality of target video subdata into the target video data.
Step 304: receiving target video data which are sent by the server and generated according to the first data rendering; and displaying the target video data.
In the embodiment of the present invention, the terminal receives, over HTTP, the target video data that the server generated by rendering according to the first data. The target video data is in mp4 format, so it can be displayed directly; the displayed video is the video with the special effect added.
In the embodiment of the invention, when the target video data is displayed, the original audio data is played.
The technical solution of the embodiment of the invention can also combine local (that is, terminal-side) processing with the server: the server mode is used when the computational complexity is high, and the local mode when it is low. In this way, more user needs can be met and more novel special effects provided. In addition, a one-tap special effect operation mode can be adopted, so that operation is very convenient for ordinary users while all rendering is performed by the server; all processing details are shielded from the user, so the processing feels the same as if it were done locally. The server offers ultra-fast second-level rendering times and small second-level transmission delays, so the user experience is smooth and fluid. Referring to fig. 6, servers are assigned to terminals according to the principle of near distribution, which makes short video transmission faster, and a long-connection HTTP communication mechanism speeds up switching between previews of different special effect modes.
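The local/server decision itself can be as simple as the following sketch; the complexity score and threshold are assumptions, since the patent does not define how complexity is measured.

```python
# Illustrative routing sketch for the hybrid mode; the complexity score
# and threshold are assumptions.
def choose_render_path(effect_complexity: float, threshold: float = 0.5) -> str:
    """Send computationally heavy effects to the server; render light ones locally."""
    return "server" if effect_complexity > threshold else "local"
```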
Fig. 7 is a schematic flowchart of a third exemplary flow of a video processing method according to an embodiment of the present invention, as shown in fig. 7, the video processing method includes the following steps:
step 701: acquiring original video data by a terminal; the original video data is converted into an mp4 file.
Step 702: the terminal obtains the special effect rendering parameters input by the user and converts the special effect rendering parameters into a json file.
Step 703: the terminal encapsulates the mp4 file and json file into an HTTP packet.
Step 704: and the terminal sends a rendering request to the server and sends the HTTP data packet to the server.
Step 705: and the server receives the rendering request and the HTTP data packet sent by the terminal.
Step 706: and the server reads the HTTP data packet to the memory.
Step 707: the server parses the HTTP data packet in the memory and separates the mp4 file and the json file, then slices the mp4 file to obtain a plurality of sliced files.
Step 708: the server analyzes the special effect rendering parameters in the json file; and reading the corresponding special effect template from the special effect template library according to the special effect rendering parameters.
Step 709: the server analyzes the special effect commands and parameters in the special effect template.
Step 710: and the server renders the special effects in the json file in parallel and in turn through the CPU and the GPU.
Here, there are many kinds of special effects. Taking special effects comprising a caption special effect and a video special effect as an example: for the caption special effect, the caption special effect number in the json file is analyzed, and the json file related to the caption is read through that number; the caption special effect is rendered in parallel by the GPU and the CPU, and the rendered caption file is stored into a buffer. For the video special effect, the video special effect number in the json file is analyzed, and the json file related to the video is read through that number; the video special effect is rendered in parallel by the GPU and the CPU, and the rendered video file is stored into a buffer. Finally, the captions and video in the two buffers are cached together into another buffer to obtain the final special effect file.
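A sketch of this caption-plus-video flow is given below; the json field names and the helper functions are illustrative assumptions standing in for the template library and the GPU/CPU renderers.

```python
# Sketch of step 710; field names and helpers are assumptions.
import json

def load_template(effect_number):
    return {"id": effect_number}        # stand-in for the template library

def apply_effect(frame, template):
    return frame                        # stand-in for GPU/CPU rendering

def composite(video_frame, caption_frame):
    return video_frame                  # stand-in for merging the two buffers

def render_step_710(json_text: str, frames):
    params = json.loads(json_text)
    caption_tpl = load_template(params["caption_effect_number"])
    video_tpl = load_template(params["video_effect_number"])
    # Render caption and video special effects into separate buffers.
    caption_buffer = [apply_effect(f, caption_tpl) for f in frames]
    video_buffer = [apply_effect(f, video_tpl) for f in frames]
    # Cache both buffers together into another buffer: the final effect file.
    return [composite(v, c) for v, c in zip(video_buffer, caption_buffer)]
```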
Step 711: the server encodes the special effect data in the buffer into an H.264 stream.
Step 712: the server stores the H.264 stream into each sliced file.
Step 713: the server merges all the sliced files into an mp4 file.
Step 714: the terminal receives the mp4 file with special effect sent by the server.
Step 715: the terminal reads the video data from the mp4 file.
Here, the terminal may also read audio data from an mp4 file.
Step 716: the terminal parses the video data into a buffer.
Step 717: the terminal displays the video data to the user through the display interface.
According to the technical solution of the embodiment of the present invention, special effect templates do not need to be downloaded locally, which saves storage resources, and the server can obtain the special effect templates directly, which accelerates rendering. Because the rendering process is executed on the server side, the scheme works across all kinds of terminal devices, giving it stronger universality and easier maintenance. The user's data files can be backed up on the server side, which is convenient for the user and allows lost files to be retrieved. Finally, the hardware requirements on the terminal are low: an ordinary smartphone can run the scheme.
Fig. 8 is a schematic structural component diagram of a server according to an embodiment of the present invention, and as shown in fig. 8, the server includes:
a receiving unit 81, configured to receive first data sent by a terminal;
an analyzing unit 82, configured to analyze the first data and extract first subdata and second subdata from the first data, where the first subdata is used to represent special effect rendering parameters, and the second subdata is used to represent original video data;
a rendering unit 83, configured to render and generate target video data according to the first subdata and the second subdata;
a sending unit 84, configured to send the target video data to the terminal, so that the terminal displays the target video data.
The server further comprises:
a storage unit 85 for storing a special effect template file;
the rendering unit 83 includes:
a searching subunit 831, configured to search for a special effect template file corresponding to the special effect rendering parameter;
a dividing subunit 832, configured to divide the original video data into a plurality of original video sub-data;
a parallel rendering subunit 833, configured to perform parallel rendering on the special effect template file and the plurality of original video sub-data, and generate a plurality of target video sub-data;
a merging subunit 834, configured to merge the plurality of target video sub-data into target video data.
The parsing unit 82 includes:
an obtaining subunit 821, configured to analyze the first data to obtain a packet head part and a packet body part of the first data;
an extracting subunit 822, configured to extract the first subdata from the packet head part, and extract the second subdata from the packet body part.
Those skilled in the art will appreciate that the functions implemented by the units in the server shown in fig. 8 can be understood by referring to the related description of the video processing method.
Fig. 9 is a schematic structural composition diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 9, the terminal includes:
a collecting unit 91 for collecting original video data;
an obtaining unit 92, configured to obtain an input special effect rendering parameter;
an encapsulating unit 93, configured to encapsulate first subdata used for representing the special effect rendering parameter and second subdata used for representing the original video data in first data;
a sending unit 94, configured to send the first data to a server;
a receiving unit 95, configured to receive target video data sent by the server and generated according to the rendering of the first data;
and a display unit 96 for displaying the target video data.
The acquisition unit 91 is further configured to separate video data and audio data in the video file to obtain original video data and original audio data.
The terminal further comprises:
an audio playing unit 97, configured to play the original audio data when the display unit displays the target video data.
Those skilled in the art will appreciate that the functions implemented by the units in the terminal shown in fig. 9 can be understood by referring to the related description of the video processing method described above.
The technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
In the embodiments provided in the present invention, it should be understood that the disclosed method and intelligent device may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (10)

1. A method of video processing, the method comprising:
when the special effect cannot be rendered in real time based on the processing capacity of the terminal, receiving first data sent by the terminal;
analyzing the first data to obtain a packet head part and a packet body part of the first data;
extracting first subdata from the packet head part, and extracting second subdata from the packet body part;
the first subdata is used for representing special effect rendering parameters corresponding to the special effects, and the second subdata is used for representing original video data;
analyzing the first subdata to obtain the special effect rendering parameters, and searching corresponding special effect template files according to the special effect rendering parameters;
analyzing the special effect template file to obtain a special effect command and a special effect parameter;
analyzing the second subdata to obtain the original video data, and dividing the original video data into a plurality of original video subdata;
distributing the plurality of original video subdata to a plurality of cores of a central processor and a graphic processor;
performing parallel rendering on the distributed original video subdata according to the special effect command and the special effect parameter through the central processing unit and the cores of the graphics processor to generate a plurality of target video subdata;
merging the plurality of target video subdata into target video data;
establishing a long connection based on a hypertext transfer protocol mechanism with the terminal through a server distributed for the terminal by a near distribution principle;
and sending the target video data to the terminal through the long connection so that the terminal can display the video picture of the target video data and the special effect blended in the video picture in real time.
2. The video processing method according to claim 1, wherein the rendering manner comprises: video rendering and text rendering.
3. A method of video processing, the method comprising:
acquiring original video data and acquiring input special effect rendering parameters corresponding to a special effect;
when the special effect cannot be rendered in real time based on the processing capability of the terminal, packaging first subdata for representing the special effect rendering parameters in a packet head part of first data, and packaging second subdata for representing the original video data in a packet body part of the first data;
sending the first data to a server;
establishing a long connection based on a hypertext transfer protocol mechanism with a server distributed for the terminal by a near distribution principle;
receiving target video data which is sent by the server and generated according to the first data rendering through the long connection;
displaying a video picture of the target video data and the special effect blended in the video picture in real time;
the first subdata is used for the server to analyze to obtain the special effect rendering parameters, and a corresponding special effect template file is searched according to the special effect rendering parameters;
the second subdata is used for the server to analyze to obtain the original video data, and the original video data is divided into a plurality of original video subdata;
the special effect template file is used for the server to analyze to obtain a special effect command and a special effect parameter;
the original video subdata is used for the server to distribute the original video subdata to a central processor and a plurality of cores of a graphic processor; performing parallel rendering on the distributed original video subdata according to the special effect command and the special effect parameter through the central processing unit and the cores of the graphics processor to generate a plurality of target video subdata; and merging the plurality of target video subdata into target video data.
4. The video processing method of claim 3, wherein said capturing raw video data comprises:
and separating the video data and the audio data in the video file to obtain original video data and original audio data.
5. The video processing method of claim 4, wherein the method further comprises:
and when the target video data is displayed, playing the original audio data.
6. A server, characterized in that,
the server is a server distributed for the terminal by a near distribution principle;
the server includes:
the terminal comprises a receiving unit, a processing unit and a display unit, wherein the receiving unit is used for receiving first data sent by the terminal when a special effect cannot be rendered in real time based on the processing capacity of the terminal;
the analysis unit is used for analyzing the first data to obtain a packet head part and a packet body part of the first data; extracting first subdata from the packet head part, and extracting second subdata from the packet body part; the first subdata is used for representing special effect rendering parameters corresponding to the special effects, and the second subdata is used for representing original video data;
the rendering unit is used for rendering and generating target video data according to the first subdata and the second subdata;
a sending unit, configured to establish a long connection based on a hypertext transfer protocol mechanism with the terminal; sending the target video data to the terminal through the long connection so that the terminal can display the video picture of the target video data and the special effect blended in the video picture in real time;
the storage unit is used for storing the special effect template file;
wherein the rendering unit includes:
the searching subunit is used for analyzing the first subdata to obtain the special effect rendering parameters, and searching corresponding special effect template files according to the special effect rendering parameters; analyzing the special effect template file to obtain a special effect command and a special effect parameter;
a dividing subunit, configured to analyze the second sub-data to obtain the original video data, and divide the original video data into a plurality of original video sub-data;
a parallel rendering subunit, configured to distribute the plurality of original video sub-data to a plurality of cores of a central processor and a graphics processor; performing parallel rendering on the distributed original video subdata according to the special effect command and the special effect parameter through the central processing unit and the cores of the graphics processor to generate a plurality of target video subdata;
and the merging subunit is used for merging the plurality of target video sub-data into target video data.
7. The server according to claim 6, wherein the rendering manner comprises: video rendering and text rendering.
8. A terminal, characterized in that the terminal comprises:
the acquisition unit is used for acquiring original video data;
an acquisition unit configured to acquire an input special effect rendering parameter corresponding to a special effect;
the packaging unit is used for packaging first subdata for representing the special effect rendering parameters into a packet head part of first data and packaging second subdata for representing the original video data into a packet body part of the first data when the special effect cannot be rendered in real time based on the processing capacity of the terminal;
a sending unit, configured to send the first data to a server;
a receiving unit, configured to establish a long connection based on a hypertext transfer protocol mechanism with a server that is allocated to the terminal according to a near allocation principle; receiving target video data which is sent by the server and generated according to the first data rendering through the long connection;
the display unit is used for displaying a video picture of the target video data and the special effect blended in the video picture in real time;
the first subdata is used for the server to analyze to obtain the special effect rendering parameters, and a corresponding special effect template file is searched according to the special effect rendering parameters;
the second subdata is used for the server to analyze to obtain the original video data, and the original video data is divided into a plurality of original video subdata;
the special effect template file is used for the server to analyze to obtain a special effect command and a special effect parameter;
the original video subdata is used for the server to distribute the original video subdata to a central processor and a plurality of cores of a graphic processor; and performing parallel rendering on the distributed original video subdata according to the special effect command and the special effect parameter through the central processing unit and the cores of the graphics processor to generate a plurality of target video subdata, and combining the plurality of target video subdata into target video data.
9. The terminal according to claim 8, wherein the capture unit is further configured to separate video data and audio data in a video file to obtain original video data and original audio data.
10. The terminal of claim 9, wherein the terminal further comprises:
and the audio playing unit is used for playing the original audio data when the display unit displays the target video data.
CN201610639190.0A 2016-08-04 2016-08-04 Video processing method, server and terminal Active CN106060655B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610639190.0A CN106060655B (en) 2016-08-04 2016-08-04 Video processing method, server and terminal
PCT/CN2017/095338 WO2018024179A1 (en) 2016-08-04 2017-07-31 Video processing method, server, terminal, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610639190.0A CN106060655B (en) 2016-08-04 2016-08-04 Video processing method, server and terminal

Publications (2)

Publication Number Publication Date
CN106060655A CN106060655A (en) 2016-10-26
CN106060655B true CN106060655B (en) 2021-04-06

Family

ID=57480392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610639190.0A Active CN106060655B (en) 2016-08-04 2016-08-04 Video processing method, server and terminal

Country Status (2)

Country Link
CN (1) CN106060655B (en)
WO (1) WO2018024179A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106060655B (en) * 2016-08-04 2021-04-06 腾讯科技(深圳)有限公司 Video processing method, server and terminal
WO2018119602A1 (en) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Rendering method and device
CN107369165A (en) * 2017-07-10 2017-11-21 Tcl移动通信科技(宁波)有限公司 A kind of video selection picture optimization method and storage medium, intelligent terminal
CN109474844B (en) * 2017-09-08 2020-08-18 腾讯科技(深圳)有限公司 Video information processing method and device and computer equipment
CN108536790A (en) * 2018-03-30 2018-09-14 北京市商汤科技开发有限公司 The generation of sound special efficacy program file packet and sound special efficacy generation method and device
CN108986227B (en) * 2018-06-28 2022-11-29 北京市商汤科技开发有限公司 Particle special effect program file package generation method and device and particle special effect generation method and device
CN109275007B (en) * 2018-09-30 2020-11-20 联想(北京)有限公司 Processing method and electronic equipment
CN111199519B (en) * 2018-11-16 2023-08-22 北京微播视界科技有限公司 Method and device for generating special effect package
CN110245258B (en) * 2018-12-10 2023-03-17 浙江大华技术股份有限公司 Method for establishing index of video file, video file analysis method and related system
CN111355978B (en) * 2018-12-21 2022-09-06 北京字节跳动网络技术有限公司 Video file processing method and device, mobile terminal and storage medium
CN109731337B (en) * 2018-12-28 2023-02-21 超级魔方(北京)科技有限公司 Method and device for creating special effect of particles in Unity, electronic equipment and storage medium
CN109600629A (en) * 2018-12-28 2019-04-09 北京区块云科技有限公司 A kind of Video Rendering method, system and relevant apparatus
CN110012352B (en) * 2019-04-17 2020-07-24 广州华多网络科技有限公司 Image special effect processing method and device and video live broadcast terminal
CN110062163B (en) 2019-04-22 2020-10-20 珠海格力电器股份有限公司 Multimedia data processing method and device
CN112116690B (en) * 2019-06-19 2023-07-07 腾讯科技(深圳)有限公司 Video special effect generation method, device and terminal
CN113572948B (en) * 2020-04-29 2022-11-11 华为技术有限公司 Video processing method and video processing device
CN111957039A (en) * 2020-09-04 2020-11-20 Oppo(重庆)智能科技有限公司 Game special effect realization method and device and computer readable storage medium
CN112190933A (en) * 2020-09-30 2021-01-08 珠海天燕科技有限公司 Special effect processing method and device in game scene
CN112532896A (en) * 2020-10-28 2021-03-19 北京达佳互联信息技术有限公司 Video production method, video production device, electronic device and storage medium
CN112291590A (en) * 2020-10-30 2021-01-29 北京字节跳动网络技术有限公司 Video processing method and device
CN113192152A (en) * 2021-05-24 2021-07-30 腾讯音乐娱乐科技(深圳)有限公司 Audio-based image generation method, electronic device and storage medium
CN115243108B (en) * 2022-07-25 2023-04-11 深圳市腾客科技有限公司 Decoding playing method
CN115049775B (en) * 2022-08-15 2023-01-31 广州中平智能科技有限公司 Dynamic allocation method and system for meta-universe rendering in dual-carbon energy industry

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1415280A1 (en) * 2001-05-14 2004-05-06 Thomson Licensing S.A. Device, server, system and method to generate mutual photometric effects
CN102868923A (en) * 2012-09-13 2013-01-09 北京富年科技有限公司 Method, equipment and system applied to special-effect cloud treatment of videos of mobile terminal
CN104461788A (en) * 2014-12-30 2015-03-25 成都品果科技有限公司 Mobile terminal photo backup method and system based on remote special effect rendering
CN104732568A (en) * 2015-04-16 2015-06-24 成都品果科技有限公司 Method and device for online addition of lyric subtitles to pictures
CN104796767A (en) * 2015-03-31 2015-07-22 北京奇艺世纪科技有限公司 Method and system for editing cloud video

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040226047A1 (en) * 2003-05-05 2004-11-11 Jyh-Bor Lin Live broadcasting method and its system for SNG webcasting studio
CN103297660A (en) * 2012-02-23 2013-09-11 上海魔睿信息科技有限公司 Real-time interaction special effect camera shooting and photographing method
CN103391414B (en) * 2013-07-24 2017-06-06 杭州趣维科技有限公司 A kind of video process apparatus and processing method for being applied to cell phone platform
CN104811829A (en) * 2014-01-23 2015-07-29 苏州乐聚一堂电子科技有限公司 Karaoke interactive multifunctional special effect system
CN105338370A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN106060655B (en) * 2016-08-04 2021-04-06 腾讯科技(深圳)有限公司 Video processing method, server and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1415280A1 (en) * 2001-05-14 2004-05-06 Thomson Licensing S.A. Device, server, system and method to generate mutual photometric effects
CN102868923A (en) * 2012-09-13 2013-01-09 北京富年科技有限公司 Method, equipment and system applied to special-effect cloud treatment of videos of mobile terminal
CN104461788A (en) * 2014-12-30 2015-03-25 成都品果科技有限公司 Mobile terminal photo backup method and system based on remote special effect rendering
CN104796767A (en) * 2015-03-31 2015-07-22 北京奇艺世纪科技有限公司 Method and system for editing cloud video
CN104732568A (en) * 2015-04-16 2015-06-24 成都品果科技有限公司 Method and device for online addition of lyric subtitles to pictures

Also Published As

Publication number Publication date
WO2018024179A1 (en) 2018-02-08
CN106060655A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
CN106060655B (en) Video processing method, server and terminal
US10560755B2 (en) Methods and systems for concurrently transmitting object data by way of parallel network interfaces
CN111901674B (en) Video playing control method and device
US20220007083A1 (en) Method and stream-pushing client for processing live stream in webrtc
KR100889367B1 (en) System and Method for Realizing Vertual Studio via Network
US8876601B2 (en) Method and apparatus for providing a multi-screen based multi-dimension game service
US9860285B2 (en) System, apparatus, and method for sharing a screen having multiple visual components
US20160029079A1 (en) Method and Device for Playing and Processing a Video Based on a Virtual Desktop
WO2022257699A1 (en) Image picture display method and apparatus, device, storage medium and program product
CN109309842B (en) Live broadcast data processing method and device, computer equipment and storage medium
US20210392386A1 (en) Data model for representation and streaming of heterogeneous immersive media
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
Han Mobile immersive computing: Research challenges and the road ahead
CN110392063A (en) Electronic whiteboard method of data synchronization, device, equipment and medium
CN111464828A (en) Virtual special effect display method, device, terminal and storage medium
CN114139491A (en) Data processing method, device and storage medium
JP2023538825A (en) Methods, devices, equipment and storage media for picture to video conversion
CN114374853A (en) Content display method and device, computer equipment and storage medium
KR102516831B1 (en) Method, computer device, and computer program for providing high-definition image of region of interest using single stream
EP3229478B1 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
KR102271721B1 (en) System for cloud streaming service, method of image cloud streaming service considering terminal performance and apparatus for the same
CN107707930B (en) Video processing method, device and system
KR20130109904A (en) Method and apparatus for servicing multi-dimension game based on multi-screen service
KR102247887B1 (en) System for cloud streaming service, method of cloud streaming service using source information and apparatus for the same
US20240013461A1 (en) Interactive Animation Generation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant