WO2018024179A1 - Video processing method, server, terminal, and computer storage medium - Google Patents


Info

Publication number
WO2018024179A1
WO2018024179A1 (PCT/CN2017/095338)
Authority
WO
WIPO (PCT)
Prior art keywords: data, sub, video, terminal, server
Application number
PCT/CN2017/095338
Other languages: French (fr), Chinese (zh)
Inventor
王颖琦 (Wang Yingqi)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2018024179A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782: Web browsing, e.g. WebTV
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Definitions

  • the present invention relates to video processing technologies, and in particular, to a video processing method, a server and a terminal, and a computer storage medium.
  • an embodiment of the present invention provides a video processing method, a server, a terminal, and a computer storage medium.
  • the video processing method provided by the embodiment of the present invention is applied to a server, and includes:
  • a video processing method is applied to a terminal, including:
  • a receiving part configured to receive first data sent by the terminal
  • the parsing part is configured to parse the first data and extract the first sub-data and the second sub-data from the first data, where the first sub-data is used to represent the special-effect rendering parameter and the second sub-data is used to represent the original video data;
  • a rendering part configured to generate target video data according to the first sub data and the second sub data
  • a sending part configured to send the target video data to the terminal.
  • the acquisition part is configured to collect original video data
  • an encapsulation part configured to encapsulate, in the first data, the first sub-data used to represent the special-effect rendering parameter and the second sub-data used to represent the original video data;
  • a sending part configured to send the first data to a server
  • a receiving part configured to receive target video data generated by the server according to the first data rendering
  • a display portion configured to display the target video data.
  • the computer storage medium provided by the embodiment of the present invention, wherein computer executable instructions are stored, and the computer executable instructions are used to execute any of the video processing methods described above.
  • The terminal collects the original video data and obtains the input special-effect rendering parameter; the first sub-data used to represent the special-effect rendering parameter and the second sub-data used to represent the original video data are encapsulated in the first data and sent to the server.
  • The server receives the first data sent by the terminal, parses the first data, and extracts the first sub-data and the second sub-data from the first data, where the first sub-data is used to represent the special-effect rendering parameter and the second sub-data is used to represent the original video data. The target video data is generated by rendering according to the first sub-data and the second sub-data, and the target video data is sent to the terminal so that the terminal can display it.
  • The terminal hands the more complex processing off to the server, which greatly improves the rendering speed; this makes a rich variety of special effects feasible to meet users' needs, and, because the rendering speed is improved, video sharing becomes possible in real time.
  • FIG. 1 is a schematic diagram of hardware entities of the parties performing information interaction in an embodiment of the present invention;
  • FIG. 2 is a first schematic flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 3 is a second schematic flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a combination of special effects structures according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a combined rendering of a CPU and a GPU according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a network interaction architecture of a terminal server according to an embodiment of the present invention.
  • FIG. 7 is a third schematic flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of generating a target video data according to an embodiment of the present invention.
  • FIG. 9 is a first schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 10 is a first schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 11 is a second schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 12 is a second schematic structural diagram of a terminal according to an embodiment of the present invention.
  • short video refers to a video with a short video length, such as a video with a length of a few seconds to a few minutes.
  • the biggest feature of short video is the integration of special effects in the actual shooting or production of the video. For example, incorporating a movie's background into a short video makes the characters in the video the protagonists in the movie. For another example, the effect of magic is incorporated into the short video, so that the characters in the video have super powers.
  • When a special effect is added, the video picture needs to be re-rendered. If the video picture is rendered locally on the terminal, the rendering speed is slow because of the terminal's limited processing capability.
  • Embodiments of the present invention provide a scheme for rendering short video based on a server.
  • FIG. 1 is a schematic diagram of hardware entities of each party performing information interaction according to an embodiment of the present invention.
  • FIG. 1 includes: a terminal 11 and a server 12.
  • the terminal 11 performs information interaction with the server 12 through a wired network or a wireless network.
  • The devices referred to as the terminal 11 include a mobile phone, a desktop computer, an all-in-one PC, and the like.
  • The terminal 11 sends the video data to be rendered (that is, to have special effects added) to the server 12; the server 12 renders the video data and then sends the rendered video data to the terminal 11 for display.
  • FIG. 1 is only a system architecture example for implementing the embodiment of the present invention.
  • the embodiment of the present invention is not limited to the system structure described in FIG. 1 above, and various embodiments of the present invention are proposed based on the system architecture.
  • FIG. 2 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • the video processing method in this example is applied to a server side. As shown in FIG. 2, the video processing method includes the following steps:
  • Step 201 Receive first data sent by the terminal.
  • the terminal may be a mobile phone, a tablet computer, a notebook computer, or the like.
  • The terminal has a short-video APP installed, and the short-video APP can invoke the terminal's camera to capture a short video. In another embodiment, the terminal has a web browser installed; the user logs in to a short-video website through the browser, and the short-video website can invoke the terminal's camera to capture a short video.
  • a short-time video is referred to as a short video, such as a 10-second video, a 60-second video, and the like. Users can add special effects to this short video. To do this, the terminal needs to first send the video data and the effect rendering parameters for adding effects to the server.
  • In another embodiment, the terminal has a video-editing APP installed, and the video-editing APP can clip a short segment from a video file.
  • This clipped segment is also called a short video. Users can add special effects to it; to do so, the terminal first sends the video data and the special-effect rendering parameters for adding the effects to the server.
  • After receiving a rendering request, the server determines that the original video data on the terminal needs special-effect rendering, and then continues by receiving the first data sent by the terminal.
  • the first data carries the following information: special effect rendering parameters, original video data.
  • the communication between the server and the terminal is based on Hypertext Transfer Protocol (HTTP).
  • the format of the first data is an HTTP packet.
  • the communication protocol between the server and the terminal can be adaptively adjusted as the network develops, for example, the server and the terminal communicate via HTTP 2.0.
  • Step 202 Parsing the first data, extracting first sub data and second sub data from the first data, where the first sub data is used to represent a special effect rendering parameter, and the second sub data Used to characterize raw video data.
  • The format of the first data is an HTTP data packet. The first data is parsed to obtain the header portion and the body portion of the first data; the header portion is the HTTP header (Header), and the body portion is the HTTP body. The first sub-data is extracted from the header portion, and the second sub-data is extracted from the body portion.
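A minimal sketch of this parsing step, assuming a hypothetical `X-Effect-Params` header field (the patent only specifies that the first sub-data travels in the HTTP header and the second in the HTTP body):

```python
import json

def parse_first_data(raw: bytes):
    """Split an HTTP-style request into effect parameters (header) and video bytes (body).

    The X-Effect-Params field name is an illustrative assumption, not from the patent.
    """
    head, _, body = raw.partition(b"\r\n\r\n")
    headers = {}
    for line in head.decode("iso-8859-1").split("\r\n")[1:]:  # skip the request line
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    first_sub_data = json.loads(headers["x-effect-params"])  # special-effect rendering parameters
    second_sub_data = body                                   # raw original video data
    return first_sub_data, second_sub_data
```

The body bytes are left untouched, mirroring the patent's separation of the mp4 payload from the rendering parameters.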
  • the first sub-data is used to represent the special effect rendering parameter.
  • The special-effect rendering parameter refers to a special-effect identifier; the special effect to be added can be uniquely determined from the special-effect identifier.
  • Each special effect corresponds to a special effect identifier.
  • the special effect rendering parameter refers to a specific special effect content
  • If the user wants to design the special-effect content themselves, they can create personalized content from special-effect design materials (such as text, pictures, and lines), then send that content to the server; the server directly adds the personalized effect based on the submitted content.
  • the second sub data is used to represent the original video data.
  • the original video data refers to the video data before the special effect is added.
  • the video data is encapsulated into the first data and sent to the server.
  • The format of the original video data is the mp4 format, but it may also be another video format, for example, the Moving Picture Experts Group (MPEG) format, the Windows Media Video (WMV) format, the RealMedia Variable Bit Rate (RMVB) format, and so on.
  • Step 203 Render and generate target video data according to the first sub data and the second sub data.
  • the rendering generation process of the target video data includes the following steps:
  • Step 2031 Find a special effect template file corresponding to the special effect rendering parameter, and divide the original video data into a plurality of original video sub data.
  • the server parses the first sub-data to obtain a special effect rendering parameter.
  • the server looks up the effect template file corresponding to the effect rendering parameters.
  • The special-effect template files are stored on the server side. Because the server side has abundant storage resources, a wide variety of special-effect templates can be stored there. Each special-effect template is implemented by a special-effect template file, which includes the series of special-effect commands and parameters needed to display the effect.
  • Effect rendering falls into two categories: the first is video rendering and the second is text rendering. Video rendering includes affine transformation, mirror flip, alpha gradient, and the like. Text rendering includes translation transformation, bloom transformation, blur transformation, rotation transformation, and so on. Different transformations can be applied to different pieces of text.
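Two of the video-rendering operations named above can be illustrated on toy grayscale frames (nested lists of pixel values); this is illustrative only, not the patented implementation:

```python
def mirror_flip(frame):
    """Horizontally mirror one frame, given as rows of pixel values."""
    return [list(reversed(row)) for row in frame]

def alpha_gradient(frame, start=1.0, end=0.0):
    """Scale each row's pixels by an alpha that fades linearly from start to end."""
    n = max(len(frame) - 1, 1)
    return [[p * (start + (end - start) * i / n) for p in row]
            for i, row in enumerate(frame)]
```

A real renderer would apply these per pixel on the GPU; the structure of the transforms is the same.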
  • The server parses the second sub-data to obtain the original video data, and then divides the original video data into a plurality of original video sub-data. As shown in FIG. 5, assume the original video data includes 12 frames, numbered 01 through 12 and arranged in chronological order; the 12 frames of video data are divided into 4 pieces of original video sub-data, each containing 3 frames.
  • The first original video sub-data includes frames 01, 02, and 03; the second includes frames 04, 05, and 06; the third includes frames 07, 08, and 09; and the fourth includes frames 10, 11, and 12.
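The frame-splitting step can be sketched as follows (frames modeled as list items; a real server would split decoded video frames):

```python
def split_video(frames, num_parts):
    """Divide a chronologically ordered frame list into num_parts contiguous chunks."""
    size = -(-len(frames) // num_parts)  # ceiling division keeps chunks contiguous
    return [frames[i:i + size] for i in range(0, len(frames), size)]
```

With 12 frames and 4 parts this reproduces the FIG. 5 example of four 3-frame pieces of sub-data.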
  • Step 2032 Perform parallel rendering on the special effect template file and the plurality of original video sub-data to generate a plurality of target video sub-data; and combine the plurality of target video sub-data into target video data.
  • the effect template file and the plurality of original video sub-data can be rendered in parallel by the multi-core CPU and the multi-core GPU.
  • the four original video sub-data are distributed to four cores (CPU1, CPU2, CPU3, and CPU4, respectively) for parallel processing.
  • Each CPU core may correspond to a set of GPUs; for example, CPU1 corresponds to GPU01 through GPU12, and the video data is rendered in parallel, pixel by pixel, across GPU01 through GPU12.
  • the four sets of GPUs finally render 4 target video sub-data, and then combine the 4 target video sub-data to obtain the target video data.
  • the parallel computing capability of the processor can greatly improve the speed of the effect rendering, and bring the user a super real-time rendering, previewing and sharing experience.
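A minimal model of the parallel render-and-merge flow above, using a thread pool in place of the multi-core CPU/GPU pipeline and a toy brightness offset in place of real effect commands (both are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def render_chunk(chunk, effect):
    # Placeholder per-frame effect: a brightness offset stands in for the
    # template's effect commands, which a real server would run on the GPU.
    return [frame + effect["brightness"] for frame in chunk]

def parallel_render(chunks, effect, workers=4):
    """Render each original-video sub-data chunk in parallel, then merge in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rendered = list(pool.map(lambda c: render_chunk(c, effect), chunks))
    # Merge the target video sub-data back into one chronologically ordered stream.
    return [frame for chunk in rendered for frame in chunk]
```

Because `pool.map` preserves input order, the merged result keeps the frames in chronological order even though the chunks render concurrently.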
  • Step 204 Send the target video data to the terminal, so that the terminal displays the target video data.
  • the server sends the target video data to the terminal by means of HTTP, so that the terminal displays the target video data.
  • the target video data is encoded by the server into a file in the mp4 format, and the target video data in the mp4 format is sent to the terminal for display.
  • the format of the target video data may also be other video formats, for example: MPEG format, WMV format, RMVB format, and the like.
  • The terminal can play a variety of video formats, including the mp4, MPEG, WMV, and RMVB formats. The format of the target video data may be the same as or different from the format of the original video data; the user may preset the format of the target video data so that the server encodes the target video data according to the preset format.
  • The technical solution of the embodiment of the invention solves the problem of slow special-effect rendering caused by limited hardware resources, and realizes ultra-real-time rendering, previewing, and sharing of short video files. There is no need to download large numbers of effect templates to local storage, which saves storage resources. In addition, the load on local chips is reduced: complex rendering is submitted to the server for processing, making the user's video-editing experience smoother and more convenient, and sharing faster.
  • FIG. 3 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • the video processing method in this example is applied to a terminal. As shown in FIG. 3, the video processing method includes the following steps:
  • Step 301 Acquire original video data, and obtain input effect rendering parameters.
  • the terminal may be a mobile phone, a tablet computer, a notebook computer, or the like.
  • The short-video APP can invoke the terminal's camera to capture a short video. In another embodiment, the terminal has a web browser installed; the user logs in to a short-video website through the browser, and the short-video website can invoke the terminal's camera to capture a short video.
  • a short-time video is referred to as a short video, such as a 10-second video, a 60-second video, and the like. Users can add special effects to this short video. To do this, the terminal needs to first send the video data and the effect rendering parameters for adding effects to the server.
  • In another embodiment, the terminal has a video-editing APP installed, and the video-editing APP can clip a short segment from a video file.
  • This clipped segment is also called a short video. Users can add special effects to it; to do so, the terminal first sends the video data and the special-effect rendering parameters for adding the effects to the server.
  • the video file acquired by the terminal generally includes video data and audio data.
  • the video data and the audio data in the video file are separated to obtain original video data as well as original audio data.
  • the original video data obtained after the separation needs to be converted into a video format that can be rendered by the server.
  • The mp4 format is the standard unit for rendering processing; therefore, the original video data obtained after separation needs to be converted into the mp4 format. The renderable video format may also be another video format, such as the MPEG, WMV, or RMVB format.
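One common way to perform the separation and conversion described above is an external tool such as ffmpeg; the patent does not name a tool, so this sketch only builds the command lines (file names are illustrative, and ffmpeg availability is an assumption):

```python
def demux_commands(src, video_out="video.mp4", audio_out="audio.aac"):
    """Build ffmpeg invocations that separate a source file into an mp4 video
    track and an audio track (illustrative; paths and codec choices are assumptions)."""
    # -an drops audio, producing video-only mp4; -vn drops video, keeping audio.
    extract_video = ["ffmpeg", "-i", src, "-an", "-c:v", "libx264", video_out]
    extract_audio = ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", audio_out]
    return extract_video, extract_audio
```

The terminal would run both commands (e.g. via `subprocess.run`), keep the audio track for local playback, and send the mp4 video track to the server.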
  • the special effect rendering parameter refers to the special effect identifier, and the special effect identifier can uniquely determine the special effect that needs to be added.
  • Each special effect corresponds to a special effect identifier.
  • the terminal may also have a rich personalized rendering function. Specifically, the user may design a personalized special effect content, and then send the special effect content to the server, and the server directly adds a personalized special effect according to the special effect content.
  • Step 302 Encapsulate the first sub data for characterizing the special effect rendering parameter and the second sub data for characterizing the original video data in the first data.
  • the communication between the server and the terminal is based on HTTP.
  • the format of the first data is an HTTP data packet.
  • the first sub-data for characterizing the special effect rendering parameter and the second sub-data for characterizing the original video data are encapsulated in the first data, including:
  • the first sub-data of the effect rendering parameter is encapsulated in a header portion of the first data
  • the second sub-data for characterizing the original video data is encapsulated in a body portion of the first data.
  • the header portion is the HTTP header, and the body portion is the HTTP body.
  • Step 303 Send the first data to a server.
  • The terminal sends the first data to the server through an HTTP POST request, and the server receives the first data from that request.
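A sketch of how the terminal might assemble such a POST request; the `X-Effect-Params` field name is illustrative, since the patent only states that the first sub-data goes in the header and the second in the body:

```python
import json

def build_first_data(effect_params: dict, video_bytes: bytes) -> bytes:
    """Assemble the first data as an HTTP POST: effect parameters in the header,
    original video data in the body (field names are illustrative assumptions)."""
    header = (
        "POST /render HTTP/1.1\r\n"
        "Host: render.example.com\r\n"
        f"X-Effect-Params: {json.dumps(effect_params)}\r\n"
        "Content-Type: video/mp4\r\n"
        f"Content-Length: {len(video_bytes)}\r\n"
        "\r\n"
    )
    return header.encode("iso-8859-1") + video_bytes
```

This is the terminal-side counterpart of the server's header/body parsing step.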
  • the server generates target video data according to the first data rendering. Specifically, the server parses the first sub data to obtain special effect rendering parameters. The server then looks up the effect template file corresponding to the effect rendering parameters.
  • The special-effect template files are stored on the server side; because the server side has abundant storage resources, a wide variety of special-effect templates can be stored there. Each special-effect template is implemented by a special-effect template file, which includes the commands and parameters needed to display the effect.
  • the server parses the second sub-data to obtain the original video data.
  • The server divides the original video data into a plurality of original video sub-data; the multi-core CPU and multi-core GPU then render the special-effect template file and the multiple original video sub-data in parallel to generate multiple pieces of target video sub-data, which are merged into the target video data.
  • Step 304 Receive target video data generated by the server according to the first data rendering, and display the target video data.
  • the terminal receives the target video data generated by the server according to the first data rendering based on HTTP.
  • The target video data is in the mp4 format, so it can be decoded and displayed directly; the displayed video is the video with the special effect added.
  • the original audio data is played.
  • The technical solution of the embodiment of the present invention may also adopt a combined local (that is, terminal) and server mode: if the computational complexity is high, the server mode is used, and if the computational complexity is low, the local mode is used. This meets users' needs while providing more novel effects.
  • Locally, a one-tap add-effect mode of operation can be adopted, which makes operation very convenient for ordinary users, while all rendering processing is performed by the server. All processing details are hidden from the user, so the user feels the processing is handled locally.
  • The server's ultra-fast second-level rendering time and small second-level transmission delay keep the user experience smooth. Referring to FIG. 6, the server is assigned to each terminal on a nearest-first basis, so short videos are transmitted faster, and the persistent HTTP connection mechanism speeds up switching between previews of different special-effect modes.
  • FIG. 7 is a third schematic flowchart of a video processing method according to an embodiment of the present invention. As shown in FIG. 7, the video processing method includes the following steps:
  • Step 701 The terminal collects original video data; converts the original video data into an mp4 file.
  • Step 702 The terminal acquires a special effect rendering parameter input by the user, and converts the special effect rendering parameter into a json file.
  • The json file refers to a file in JavaScript Object Notation (JSON) format, a lightweight data-interchange format that stores and represents data as text, completely independent of any programming language.
  • the JSON format is easy to read and write, and it is also easy to parse and generate by machine, so it can effectively improve network transmission efficiency.
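An illustrative json file of special-effect rendering parameters might look like the following; all field names are assumptions, since the patent does not specify a schema:

```python
import json

# Illustrative parameter structure only; the patent does not define field names.
effect_params = {
    "effect_id": 12,             # identifies a template in the server's library
    "subtitle_effect": "bloom",  # text rendering: translation, bloom, blur, rotation
    "video_effect": "mirror",    # video rendering: affine, mirror flip, alpha gradient
    "output_format": "mp4",
}

json_text = json.dumps(effect_params, indent=2)  # the terminal writes this json file
restored = json.loads(json_text)                 # the server parses it back
```

Round-tripping through text is what makes the format language-independent, as the passage above notes.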
  • Step 703 The terminal encapsulates the mp4 file and the json file into an HTTP data packet.
  • Step 704 The terminal sends a rendering request to the server, and sends the HTTP data packet to the server.
  • Step 705 The server receives the rendering request sent by the terminal and the HTTP data packet.
  • Step 706 The server reads the HTTP packet into the memory.
  • Step 707 The server parses the HTTP data packet in the memory, and separates the mp4 file and the json file.
  • the mp4 file is fragmented to obtain a plurality of fragment files.
  • Step 708 The server parses the special effect rendering parameter in the json file; and reads the corresponding special effect template from the special effect template library according to the special effect rendering parameter.
  • Step 709 The server parses the special effect commands and parameters in the special effect template.
  • Step 710 The server sequentially renders the effects in the json file in parallel through the CPU and the GPU.
  • The special effects include subtitle effects and video effects.
  • For subtitle effects, the server parses the subtitle effect number in the json file, reads the subtitle-related json file by that number, renders the subtitle effect in parallel on the GPU and CPU, and stores the rendered subtitle file in a buffer. For video effects, the server parses the video effect number in the json file, reads the video-related json file by that number, renders the video effect in parallel on the GPU and CPU, and stores the rendered video file in another buffer.
  • The subtitles and video in the two buffers are then composited into a third buffer to obtain the final special-effect file.
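The buffer-compositing step can be modeled as a per-pixel overlay; frames are toy nested lists here, with `None` marking a transparent subtitle pixel (a real implementation would blend GPU buffers):

```python
def composite(video_frames, subtitle_frames):
    """Overlay rendered subtitle frames onto rendered video frames, frame by frame.

    A subtitle pixel of None is transparent, so the video pixel shows through.
    """
    out = []
    for vf, sf in zip(video_frames, subtitle_frames):
        out.append([[sp if sp is not None else vp for vp, sp in zip(vr, sr)]
                    for vr, sr in zip(vf, sf)])
    return out
```

The composited frames form the "final special-effect file" that the server then encodes.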
  • Step 711 The server encodes the special-effect data in the buffer into an H.264 stream.
  • Step 712 The server writes the H.264 stream into each fragment file.
  • Step 713 The server merges all the fragment files into an mp4 file.
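Steps 712 and 713 amount to fragmenting and reassembling byte streams; a minimal sketch follows (real fragments would carry encoded H.264 data inside mp4 containers, which this model omits):

```python
def fragment(data: bytes, size: int):
    """Split encoded data into fixed-size fragments; the last fragment may be shorter."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def merge_fragments(fragments):
    """Reassemble the fragments, in order, into the final file body."""
    return b"".join(fragments)
```

Keeping the fragments ordered is what lets the server merge them back into a single mp4 file for the terminal.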
  • Step 714 The terminal receives the mp4 file sent by the server.
  • Step 715 The terminal reads the video data from the mp4 file.
  • the terminal can also read audio data from the mp4 file.
  • Step 716 The terminal decodes the video data into the buffer.
  • Step 717 The terminal displays the video data to the user through the display interface.
  • The special-effect templates are not downloaded to the local device, which saves storage resources, and the server fetching templates directly speeds up rendering. Because the rendering process is performed on the server side, the solution works across many terminal devices, is more versatile, and is easier to maintain. The user's data files can also be backed up on the server side, which is convenient for later operations and for recovery after loss. The hardware requirements on the terminal are low, so an ordinary smartphone suffices.
  • FIG. 9 is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in FIG. 9, the server includes:
  • the receiving part 901 is configured to receive the first data sent by the terminal;
  • the parsing part 902 is configured to parse the first data and extract the first sub-data and the second sub-data from the first data, where the first sub-data is used to represent the special-effect rendering parameter and the second sub-data is used to represent the original video data;
  • the rendering part 903 is configured to generate target video data according to the first sub data and the second sub data.
  • the transmitting portion 904 is configured to send the target video data to the terminal.
  • the server further includes:
  • the storage part 905 is configured to store the special effect template file
  • the rendering portion 903 includes:
  • the finding subsection 9031 is configured to find a special effect template file corresponding to the special effect rendering parameter
  • Segmenting subsection 9032 configured to split the original video data into a plurality of original video subdata
  • the parallel rendering sub-portion 9033 is configured to perform parallel rendering on the special effect template file and the plurality of original video sub-data to generate a plurality of target video sub-data;
  • the merging sub-portion 9034 is configured to merge the plurality of target video sub-data into target video data.
  • the parsing part 902 includes:
  • the obtaining sub-portion 9021 is configured to parse the first data to obtain a header portion and a packet portion of the first data;
  • the extracting subsection 9022 is configured to extract the first subdata in the header portion and extract the second subdata in the encapsulation portion.
  • the parsing portion 902 and the rendering portion 903 can be implemented by the processor 1101 located on the server.
  • the receiving portion 901 and the transmitting portion 904 described above may be implemented by an external communication interface 1102 located on a server.
  • the above storage portion 905 can be implemented by the memory 1103 located on the server.
  • the processor 1101, the external communication interface 1102, and the memory 1103 on the server can interact through the system bus 1104.
  • the processor 1101 may be a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor 1101 can implement or perform the methods, steps, and flowcharts disclosed in the embodiments of the present invention.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiment of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the memory 1103 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof.
  • the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory can be a disk storage or a tape storage.
  • the volatile memory can be a random access memory (RAM) that acts as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM).
  • the external communication interface 1102 can enable the server to access a wireless network based on a communication protocol, such as WiFi, 2G, 3G, 4G, the Internet, or a combination thereof.
  • FIG. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in FIG. 10, the terminal includes:
  • the collecting part 1001 is configured to collect original video data
  • the obtaining part 1002 is configured to obtain a special effect rendering parameter
  • the encapsulating portion 1003 is configured to encapsulate the first sub data for characterizing the special effect rendering parameter and the second sub data for characterizing the original video data in the first data;
  • the sending part 1004 is configured to send the first data to the server
  • the receiving part 1005 is configured to receive target video data generated by the server according to the first data rendering
  • the display portion 1006 is configured to display the target video data.
  • the collecting portion 1001 is further configured to separate video data and audio data in the video file to obtain original video data and original audio data.
  • the terminal further includes:
  • the audio playing portion 1007 is configured to play the original audio data when the display portion displays the target video data.
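The encapsulation performed by part 1003 can be pictured as packing the effect rendering parameter into a header field of the first data and the raw video bytes into its body. The following sketch is an assumption for illustration; the header names are invented, not taken from the patent.

```python
def encapsulate_first_data(effect_id, video_bytes):
    """Pack the effect rendering parameter into HTTP-style headers
    (the first sub-data) and the raw video bytes into the body
    (the second sub-data). Header names are illustrative only."""
    headers = {
        "X-Effect-Render-Param": str(effect_id),
        "Content-Type": "video/mp4",
        "Content-Length": str(len(video_bytes)),
    }
    return headers, video_bytes

headers, body = encapsulate_first_data(42, b"\x00\x00\x00\x18ftypmp42")
print(headers["X-Effect-Render-Param"])  # 42
```

Keeping the small rendering parameter in the header and the bulky video in the body mirrors the header/body split the server later parses.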
  • the collecting part 1001 described above can be implemented by the camera 1201 located on the terminal.
  • the obtaining part 1002 described above can be implemented by the input device 1202 located on the terminal.
  • the encapsulating portion 1003 and the audio playing portion 1007 may be implemented by the processor 1203 located on the terminal.
  • the sending part 1004 and the receiving part 1005 described above may be implemented by the external communication interface 1204 located on the terminal.
  • the display portion 1006 described above can be implemented by the display 1205 located on the terminal.
  • the camera 1201, the input device 1202, the processor 1203, the external communication interface 1204, and the display 1205 on the terminal can interact through the system bus 1206.
  • the processor 1203 may be a general purpose processor, a DSP, or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the external communication interface 1204 enables the terminal to access a wireless network based on a communication protocol, such as WiFi, 2G, 3G, 4G, the Internet, or a combination thereof.
  • Input device 1202 can be a keyboard, a mouse, a trackball, a click wheel, a key, a button, or the like.
  • Display 1205 can be a cathode ray tube (CRT) display, a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a thin film transistor (TFT) display, a plasma display, or the like.
  • the integrated modules described in the embodiments of the present invention may also be stored in a computer-readable storage medium if they are implemented in the form of software function modules and sold or used as standalone products. Based on such understanding, those skilled in the art will appreciate that the embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media containing computer-usable program code, including but not limited to a USB flash drive, a removable hard drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a disk storage, a CD-ROM, an optical storage, and the like.
  • the present application is described with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • an embodiment of the present invention further provides a computer storage medium, wherein a computer program is stored, and the computer program is used to execute the above video processing method in the embodiment of the present invention.
  • the terminal encapsulates the first sub-data for characterizing the special effect rendering parameter and the second sub-data for characterizing the original video data in the first data and sends the first data to the server.
  • the server parses the first data, renders and generates target video data according to the first sub-data and the second sub-data, and sends the target video data to the terminal, where the target video data is displayed by the terminal.
  • the terminal hands the computationally complex rendering process off to the server, so the rendering speed is greatly improved, making it possible to create a rich variety of special effects that meet users' needs.

Abstract

Disclosed in the present invention are a video processing method, a server, a terminal, and a computer storage medium. The method comprises: receiving first data sent by a terminal; parsing the first data, and extracting first sub-data and second sub-data from the first data, the first sub-data being used for representing a special effect rendering parameter and the second sub-data being used for representing original video data; rendering and generating target video data according to the first sub-data and the second sub-data; and sending the target video data to the terminal, so that the terminal displays the target video data.

Description

Video processing method, server and terminal, and computer storage medium
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese Patent Application No. 201610639190.0, filed on August 4, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to video processing technologies, and in particular, to a video processing method, a server, a terminal, and a computer storage medium.
Background
When sharing socially over the Internet, users can share pictures, text, short videos, and the like. Among these, short video is enthusiastically sought after because of its simple operation and broad entertainment value. The most important application of short video is its special effects function, which makes ordinary, boring videos more magical and dramatic. For example: adding magic effects to a short video.
In order to satisfy the entertainment needs and diversity of different groups of people, a rich variety of special effects and template materials need to be created. However, due to the limitations of a terminal such as a mobile phone, namely its storage capacity and processing chip, accessing template materials through the terminal and rendering effects locally makes video processing very slow, which affects the real-time performance of video sharing and cannot produce a rich variety of special effects.
Summary
To solve the above technical problem, embodiments of the present invention provide a video processing method, a server, a terminal, and a computer storage medium.
The video processing method provided by an embodiment of the present invention is applied to a server and includes:
receiving first data sent by a terminal;
parsing the first data, and extracting first sub-data and second sub-data from the first data, the first sub-data being used to characterize a special effect rendering parameter and the second sub-data being used to characterize original video data;
rendering and generating target video data according to the first sub-data and the second sub-data; and
sending the target video data to the terminal.
A video processing method provided by another embodiment of the present invention is applied to a terminal and includes:
collecting original video data, and obtaining a special effect rendering parameter;
encapsulating first sub-data for characterizing the special effect rendering parameter and second sub-data for characterizing the original video data in first data;
sending the first data to a server;
receiving target video data that is sent by the server and generated by rendering according to the first data; and
displaying the target video data.
The server provided by an embodiment of the present invention includes:
a receiving part, configured to receive first data sent by a terminal;
a parsing part, configured to parse the first data and extract first sub-data and second sub-data from the first data, the first sub-data being used to characterize a special effect rendering parameter and the second sub-data being used to characterize original video data;
a rendering part, configured to render and generate target video data according to the first sub-data and the second sub-data; and
a sending part, configured to send the target video data to the terminal.
The terminal provided by an embodiment of the present invention includes:
a collecting part, configured to collect original video data;
an obtaining part, configured to obtain a special effect rendering parameter;
an encapsulating part, configured to encapsulate first sub-data for characterizing the special effect rendering parameter and second sub-data for characterizing the original video data in first data;
a sending part, configured to send the first data to a server;
a receiving part, configured to receive target video data that is sent by the server and generated by rendering according to the first data; and
a display part, configured to display the target video data.
The computer storage medium provided by an embodiment of the present invention stores computer-executable instructions, and the computer-executable instructions are used to execute any of the video processing methods described above.
In the technical solutions of the embodiments of the present invention, the terminal collects original video data and obtains an input special effect rendering parameter, encapsulates first sub-data for characterizing the special effect rendering parameter and second sub-data for characterizing the original video data in first data, and sends the first data to the server. The server receives the first data sent by the terminal; parses the first data and extracts the first sub-data and the second sub-data from it, the first sub-data characterizing the special effect rendering parameter and the second sub-data characterizing the original video data; renders and generates target video data according to the first sub-data and the second sub-data; and sends the target video data to the terminal so that the terminal displays it. In this way, the terminal hands the computationally complex rendering process off to the server, the rendering speed is greatly improved, and a rich variety of special effects can thus be created to meet users' needs; moreover, because the rendering speed is improved, video sharing can be performed in real time.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware entities performing information interaction in an embodiment of the present invention;
FIG. 2 is a first schematic flowchart of a video processing method according to an embodiment of the present invention;
FIG. 3 is a second schematic flowchart of a video processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a special effect structure combination according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of combined CPU and GPU rendering according to an embodiment of the present invention;
FIG. 6 is a diagram of a terminal-server network interaction architecture according to an embodiment of the present invention;
FIG. 7 is a third schematic flowchart of a video processing method according to an embodiment of the present invention;
FIG. 8 is a flowchart of rendering and generating target video data according to an embodiment of the present invention;
FIG. 9 is a first schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 10 is a first schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 11 is a second schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 12 is a second schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
To provide a more detailed understanding of the features and technical content of the embodiments of the present invention, the implementation of the embodiments of the present invention is described in detail below with reference to the accompanying drawings. The accompanying drawings are for reference and illustration only and are not intended to limit the embodiments of the present invention.
Internet video websites and video applications (APPs) present a rich variety of video resources. Among them, short videos have received particular attention because of their immediacy, shareability, and entertainment value. Here, a short video refers to a video of short length, for example a video of a few seconds to a few minutes. The most distinctive feature of short videos is that special effects are blended into the actually shot or produced video frames. For example, the background of a movie can be blended into a short video so that a person in the video becomes the protagonist of the movie. As another example, a magic effect can be blended into a short video so that a person in the video appears to have superpowers. When a special effect is added to a short video, the video frames need to be re-rendered. If the frames are rendered locally on the terminal, rendering is slow because of the terminal's limited processing capability. To overcome this shortage of terminal hardware resources, embodiments of the present invention provide a scheme for rendering short videos on a server.
FIG. 1 is a schematic diagram of the hardware entities performing information interaction in an embodiment of the present invention. FIG. 1 includes a terminal 11 and a server 12, where the terminal 11 exchanges information with the server 12 through a wired or wireless network. The terminal 11 may be a device such as a mobile phone, a desktop computer, a PC, or an all-in-one machine. In one example, the terminal 11 sends video data to be rendered (that is, to have special effects added) to the server 12; the server 12 renders the video data and then sends the rendered video data to the terminal 11 for display.
The example of FIG. 1 above is only one instance of a system architecture implementing the embodiments of the present invention; the embodiments of the present invention are not limited to the system structure described in FIG. 1, and the various embodiments of the present invention are proposed based on this system architecture.
FIG. 2 is a first schematic flowchart of a video processing method according to an embodiment of the present invention. The video processing method in this example is applied on the server side. As shown in FIG. 2, the video processing method includes the following steps:
Step 201: Receive first data sent by a terminal.
In embodiments of the present invention, the terminal may be a device such as a mobile phone, a tablet computer, or a notebook computer.
In one implementation, the terminal has a short-video APP installed; the APP can invoke the terminal's camera to capture a video of short duration. In another implementation, the terminal has a web browser installed, through which the user logs in to a short-video website; the website can likewise invoke the terminal's camera to capture a video of short duration.
In the above scheme, a video of short duration is called a short video, for example a 10-second video, a 60-second video, and so on. The user can add a special effect to this short video; to do so, the terminal first needs to send the video data and the special effect rendering parameter used for adding the effect to the server.
In another implementation, the terminal has a video-editing APP, with which a short segment can be cut out of a video file; this short segment is also called a short video. The user can add a special effect to this short video; to do so, the terminal first needs to send the video data and the special effect rendering parameter to the server.
In embodiments of the present invention, after receiving a rendering request sent by the terminal, the server determines that special-effect rendering needs to be performed on the original video data in the terminal. After accepting the rendering request, the server continues to receive the first data sent by the terminal. Here, the first data carries the following information: the special effect rendering parameter and the original video data.
In one implementation of the present invention, the communication between the server and the terminal is based on the Hypertext Transfer Protocol (HTTP). On this basis, the format of the first data is an HTTP packet. The communication protocol between the server and the terminal can be adjusted adaptively as networks develop; for example, the server and the terminal may communicate via HTTP/2.
Step 202: Parse the first data, and extract first sub-data and second sub-data from the first data, the first sub-data being used to characterize the special effect rendering parameter and the second sub-data being used to characterize the original video data.
In embodiments of the present invention, the format of the first data is an HTTP packet. The first data is parsed to obtain its header part and body part, where the header part is the HTTP header and the body part is the HTTP body. The first sub-data is extracted from the header part, and the second sub-data is extracted from the body part.
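As a concrete illustration of this header/body split, the following Python sketch parses an HTTP-style request into its header part and body part, pulls the effect parameter (the first sub-data) out of the header, and treats the body as the original video bytes (the second sub-data). The header name `X-Effect-Render-Param` is an assumption for illustration; the patent does not specify field names.

```python
def parse_first_data(request_bytes):
    """Split an HTTP-style request into header part and body part,
    then extract the effect rendering parameter from the header.
    The X-Effect-Render-Param name is hypothetical."""
    head, _, body = request_bytes.partition(b"\r\n\r\n")
    effect_param = None
    for line in head.split(b"\r\n")[1:]:  # skip the request line
        name, _, value = line.partition(b": ")
        if name.lower() == b"x-effect-render-param":
            effect_param = value.decode()
    return effect_param, body

param, video = parse_first_data(
    b"POST /render HTTP/1.1\r\nX-Effect-Render-Param: 42\r\n\r\nVIDEO")
print(param, len(video))  # 42 5
```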
In embodiments of the present invention, the first sub-data is used to characterize the special effect rendering parameter. In one implementation, the special effect rendering parameter is an effect identifier, which uniquely determines the effect to be added. Each effect corresponds to an effect identifier; when the user selects an effect to add on the terminal, the effect identifier corresponding to that effect is encapsulated in the first data and sent to the server. After obtaining the effect identifier, the server determines the effect to be added according to the correspondence between effect identifiers and effect content. In another implementation, the special effect rendering parameter is concrete effect content. For example, a user who wants to design effect content can compose personalized effect content from effect design materials (such as text, pictures, and lines) and send that content to the server, and the server adds the personalized effect directly according to the effect content.
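The identifier-to-template correspondence described above can be pictured as a simple registry lookup. The sketch below is hypothetical (the effect names, template paths, and fallback rule are invented for illustration): a known effect ID resolves to a stored template file, while any other parameter is treated as user-designed effect content.

```python
# Hypothetical registry mapping effect identifiers to server-side
# template file paths; names are illustrative, not from the patent.
EFFECT_TEMPLATES = {
    "magic_01": "templates/magic_01.json",
    "movie_bg_07": "templates/movie_bg_07.json",
}

def resolve_effect(render_param):
    """Return ("template", path) for a known effect ID, or
    ("custom", content) when the parameter is user-designed
    effect content to be used directly."""
    if render_param in EFFECT_TEMPLATES:
        return ("template", EFFECT_TEMPLATES[render_param])
    return ("custom", render_param)

print(resolve_effect("magic_01")[0])        # template
print(resolve_effect('{"text": "Hi"}')[0])  # custom
```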
In embodiments of the present invention, the second sub-data is used to characterize the original video data. Here, the original video data refers to the video data before the special effect is added; after the terminal collects this video data, it is encapsulated into the first data and sent to the server. In one implementation, the original video data is in mp4 format; it is not limited to this, and the original video data may also be in other video formats, for example the Moving Picture Experts Group (MPEG) format, the Windows Media Video (WMV) format, the RealMedia Variable Bitrate (RMVB) format, and so on.
Step 203: Render and generate target video data according to the first sub-data and the second sub-data.
Specifically, as shown in FIG. 8, the rendering and generation of the target video data includes the following steps:
Step 2031: Find the special effect template file corresponding to the special effect rendering parameter, and split the original video data into multiple pieces of original video sub-data.
In embodiments of the present invention, the server parses the first sub-data to obtain the special effect rendering parameter. The server then looks up the special effect template file corresponding to the special effect rendering parameter. Here, effect template files are stored on the server side; because the server side has abundant storage resources, a rich variety of effect templates can be stored there. Each effect template is implemented by an effect template file, which includes the series of effect commands and effect parameters needed to present the effect. Referring to FIG. 4, effect rendering falls into two major categories: video rendering and text rendering. Video rendering includes affine transformation, mirror flipping, alpha gradients, and the like. Text rendering includes translation transformation, bloom transformation, blur transformation, rotation transformation, and the like. Different text can correspond to different text rendering.
In embodiments of the present invention, the server parses the second sub-data to obtain the original video data. The server then splits the original video data into multiple pieces of original video sub-data. Referring to FIG. 5, suppose the original video data includes 12 video frames, numbered 01 through 12 and arranged in chronological order. The 12 frames of video data are split into 4 pieces of original video sub-data, each including 3 frames: the first piece includes frames 01, 02, and 03; the second includes frames 04, 05, and 06; the third includes frames 07, 08, and 09; and the fourth includes frames 10, 11, and 12.
Step 2032: Perform parallel rendering on the effect template file and the plurality of pieces of original video sub-data to generate a plurality of pieces of target video sub-data, and merge the plurality of pieces of target video sub-data into target video data.
Because the central processing unit (CPU) and graphics processing unit (GPU) in the server have multiple cores, the multi-core CPU and multi-core GPU can render the effect template file and the plurality of pieces of original video sub-data in parallel to generate a plurality of pieces of target video sub-data, which are then merged into the target video data. Referring to FIG. 5, the four pieces of original video sub-data are distributed to four cores (CPU1, CPU2, CPU3, and CPU4) for parallel processing. Each CPU core may in turn correspond to a group of GPUs; for example, CPU1 corresponds to GPU01 through GPU12, which render the video data in parallel on a per-pixel basis. The four GPU groups finally render four pieces of target video sub-data, which are then merged to obtain the target video data. In this embodiment of the present invention, the parallel computing capability of the processors greatly accelerates effect rendering, giving the user a faster-than-real-time rendering, previewing, and sharing experience.
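The per-core fan-out and in-order merge can be sketched as follows; threads stand in for the CPU cores, and `render_sub_data` is a placeholder for the actual effect commands that a GPU group would execute.

```python
from concurrent.futures import ThreadPoolExecutor

def render_sub_data(template, frames):
    """Placeholder per-core renderer: tags each frame with the effect template.
    A real implementation would run the template's effect commands on a GPU."""
    return [f"{frame}+{template}" for frame in frames]

def parallel_render(template, sub_data_list, workers=4):
    """Render each piece of original video sub-data in parallel, then merge
    the target video sub-data back into one sequence, preserving order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rendered = list(pool.map(lambda s: render_sub_data(template, s),
                                 sub_data_list))
    return [frame for piece in rendered for frame in piece]

target = parallel_render("fx", [["01", "02"], ["03", "04"]])
```

`pool.map` returns results in submission order, so the merged sequence keeps the original chronological order even though the pieces finish at different times.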
Step 204: Send the target video data to the terminal, so that the terminal displays the target video data.
In this embodiment of the present invention, the server sends the target video data to the terminal over HTTP so that the terminal can display it. In one implementation, the server encodes the target video data into an mp4-format file and sends the mp4-format target video data to the terminal for display; the format is not limited to this, and the target video data may also use another video format such as MPEG, WMV, or RMVB. In one application, the terminal can play a variety of video formats, including mp4, MPEG, WMV, and RMVB; the format of the target video data may be the same as or different from that of the original video data, and the user may set the format of the target video data in advance, in which case the server encodes the target video data according to the configured format.
The technical solution of this embodiment of the present invention solves the problem of slow effect rendering caused by hardware resource limitations, and achieves faster-than-real-time rendering, previewing, and sharing of short video files. There is no need to download a large number of effect templates to local storage, which saves storage resources. In addition, use of the local chip is reduced, because complex rendering is submitted to the server for processing, making the user's video editing experience smoother and more convenient, and sharing faster.
FIG. 3 is a second schematic flowchart of a video processing method according to an embodiment of the present invention. The video processing method in this example is applied to a terminal. As shown in FIG. 3, the video processing method includes the following steps:
Step 301: Capture original video data, and obtain input effect rendering parameters.
In this embodiment of the present invention, the terminal may be a device such as a mobile phone, a tablet computer, or a laptop computer.
In one implementation, a short-video APP can invoke the terminal's camera to capture a video of short duration; in another implementation, the terminal has a web browser installed, the user logs in to a short-video website through the browser, and the website invokes the terminal's camera to capture a video of short duration.
In the above solution, a video of short duration is referred to as a short video, for example a 10-second video or a 60-second video. The user may add an effect to this short video; to do so, the terminal first needs to send the video data, together with the effect rendering parameters used to add the effect, to the server.
In another implementation, the terminal has a video editing APP that can clip a short segment out of a video file; this short segment is likewise referred to as a short video. The user may add an effect to this short video; to do so, the terminal first needs to send the video data, together with the effect rendering parameters used to add the effect, to the server.
In this embodiment of the present invention, a video file obtained by the terminal generally contains both video data and audio data. Here, the video data and audio data in the video file are separated to obtain the original video data and the original audio data.
In this embodiment of the present invention, the original video data obtained after separation needs to be converted into a video format that the server can render. In one implementation, mp4 is the standard format for rendering, so the separated original video data is converted into mp4; the renderable format is not limited to this and may also be another video format such as MPEG, WMV, or RMVB.
In this embodiment of the present invention, the effect rendering parameter refers to an effect identifier that uniquely determines the effect to be added. Each effect corresponds to one effect identifier; when the user selects an effect to add on the terminal, the identifier corresponding to that effect is encapsulated in the first data and sent to the server.
In this embodiment of the present invention, the terminal may also provide rich personalized rendering functions. Specifically, the user may design personalized effect content and send it to the server, and the server adds the personalized effect directly according to that content.
Step 302: Encapsulate first sub-data representing the effect rendering parameters and second sub-data representing the original video data in first data.
In this embodiment of the present invention, communication between the server and the terminal is based on HTTP by way of example; accordingly, the first data takes the form of an HTTP data packet.
In this embodiment of the present invention, encapsulating the first sub-data representing the effect rendering parameters and the second sub-data representing the original video data in the first data includes: encapsulating the first sub-data in the header portion of the first data, and encapsulating the second sub-data in the body portion of the first data, where the header portion is the HTTP header and the body portion is the HTTP body.
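One way to picture this header/body split is the raw request sketch below; the `X-Effect-Params` header name and the `/render` path are invented for illustration and are not taken from the disclosure.

```python
import json

def build_first_data(effect_params, video_bytes):
    """Pack the effect rendering parameters into the HTTP header portion and
    the raw video data into the HTTP body portion of a single POST request."""
    header_json = json.dumps(effect_params)
    head = (
        "POST /render HTTP/1.1\r\n"
        "Content-Type: application/octet-stream\r\n"
        f"X-Effect-Params: {header_json}\r\n"      # first sub-data (header)
        f"Content-Length: {len(video_bytes)}\r\n"
        "\r\n"
    )
    return head.encode("ascii") + video_bytes      # second sub-data (body)

first_data = build_first_data({"effect_id": 7}, b"fake-mp4-bytes")
```

On receipt, the server can split the packet at the blank line to recover the two sub-data portions, mirroring the extraction step described for the parsing portion later in this document.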
Step 303: Send the first data to a server.
In this embodiment of the present invention, the terminal sends the first data to the server by HTTP POST, and the server receives the first data sent by the terminal by HTTP GET.
Afterwards, the server renders and generates the target video data according to the first data. Specifically, the server parses the first sub-data to obtain the effect rendering parameters, and then looks up the effect template file corresponding to those parameters. Here, effect template files are stored on the server side; because the server side has abundant storage resources, a rich variety of effect templates can be stored there, each implemented as an effect template file containing the series of effect commands and effect parameters needed to present that effect. The server parses the second sub-data to obtain the original video data, divides the original video data into a plurality of pieces of original video sub-data, renders the effect template file and the pieces of original video sub-data in parallel on the multi-core CPU and multi-core GPU to generate a plurality of pieces of target video sub-data, and then merges those pieces into the target video data.
Step 304: Receive target video data sent by the server and rendered according to the first data, and display the target video data.
In this embodiment of the present invention, the terminal receives, over HTTP, the target video data rendered by the server according to the first data. Here, the target video data is in mp4 format, so it can be displayed directly; the displayed video is the video with the effect added.
In this embodiment of the present invention, the original audio data is played while the target video data is displayed.
The technical solution of this embodiment of the present invention may also adopt a mode combining local (that is, terminal-side) and server-side processing: server mode is used when computational complexity is high, and local mode when it is low. This satisfies more user needs and provides more novel effects. In addition, the terminal may offer a one-tap effect operation mode, making operation very convenient for ordinary users, while all rendering is performed by the server. All processing details are hidden from the user, so the processing feels local. The server's very fast, second-level rendering time and transmission delays of less than a second give the user a smooth experience. Referring to FIG. 6, the server connects to each terminal on a nearest-allocation basis, so short videos are transmitted faster, and a long-lived HTTP connection speeds up previewing when the user switches between different effect modes.
FIG. 7 is a third schematic flowchart of a video processing method according to an embodiment of the present invention. As shown in FIG. 7, the video processing method includes the following steps:
Step 701: The terminal captures original video data and converts the original video data into an mp4 file.
Step 702: The terminal obtains the effect rendering parameters input by the user and converts the effect rendering parameters into a json file.
Here, a json file is a file in JavaScript Object Notation (JSON) format. JSON is a lightweight data interchange format that stores and represents data in a text format completely independent of any programming language. JSON is easy to read and write, and also easy for machines to parse and generate, so it can effectively improve network transmission efficiency.
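The round trip between terminal and server can be sketched with the standard `json` module; the field names below are assumptions, since the disclosure does not specify the parameter schema.

```python
import json

# Hypothetical effect rendering parameters as entered on the terminal.
effect_params = {"effect_id": 12, "text": "hello"}

json_file = json.dumps(effect_params)   # terminal: serialize for transmission
restored = json.loads(json_file)        # server: parse the received json file
```

Because JSON is language-independent text, the terminal-side APP and the server-side renderer can exchange these parameters without sharing any code.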
Step 703: The terminal encapsulates the mp4 file and the json file into an HTTP data packet.
Step 704: The terminal sends a rendering request to the server, and sends the HTTP data packet to the server.
Step 705: The server receives the rendering request and the HTTP data packet sent by the terminal.
Step 706: The server reads the HTTP data packet into memory.
Step 707: The server parses the HTTP data packet in memory and separates out the mp4 file and the json file, then fragments the mp4 file to obtain a plurality of fragment files.
Step 708: The server parses the effect rendering parameters in the json file, and reads the corresponding effect template from the effect template library according to those parameters.
Step 709: The server parses the effect commands and parameters in the effect template.
Step 710: The server renders the effects in the json file in sequence, using the CPU and GPU in parallel.
Here, there are many kinds of effects. Taking subtitle effects and video effects as an example: for a subtitle effect, the server parses the subtitle effect number in the json file, reads the subtitle-related json file by that number, renders the subtitle effect in parallel on the GPU and CPU, and stores the rendered subtitle file in a buffer; for a video effect, the server parses the video effect number in the json file, reads the video-related json file by that number, renders the video effect in parallel on the GPU and CPU, and stores the rendered video file in a buffer. Finally, the subtitles and video in the two buffers are written together into another buffer to obtain the final effect file.
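The final merge of the two buffers can be sketched as follows; real compositing would blend pixel data frame by frame, whereas this sketch merely pairs the two rendered streams to show the data flow.

```python
def composite(video_buffer, subtitle_buffer):
    """Combine the rendered video buffer and the rendered subtitle buffer,
    frame by frame, into a final effect buffer (pairing stands in for
    pixel blending in this sketch)."""
    if len(video_buffer) != len(subtitle_buffer):
        raise ValueError("buffers must cover the same frames")
    return [(v, s) for v, s in zip(video_buffer, subtitle_buffer)]

final_buffer = composite(["v1", "v2"], ["s1", "s2"])
```

The resulting buffer is what the next step encodes into the output stream.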
Step 711: The server encodes the effect data in the buffer into an H.264 stream.
Step 712: The server stores the H.264 stream into the corresponding fragment files.
Step 713: The server merges all the fragment files into an mp4 file.
Step 714: The terminal receives the effect-enhanced mp4 file sent by the server.
Step 715: The terminal reads the video data from the mp4 file.
Here, the terminal can also read the audio data from the mp4 file.
Step 716: The terminal parses the video data into a buffer.
Step 717: The terminal displays the video data to the user through the display interface.
In the technical solution of this embodiment of the present invention, effect templates do not need to be downloaded locally, which saves storage resources, and the server can fetch effect templates directly, which speeds up rendering. Because rendering is performed on the server side, the solution works across many kinds of terminal devices, with strong generality and easier maintenance. Server-side operation also allows the user's data files to be backed up, which is convenient for later operations and for recovery after loss. The hardware requirements on the terminal are low, so an ordinary smartphone can perform the operations.
FIG. 9 is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in FIG. 9, the server includes:
a receiving portion 901, configured to receive first data sent by a terminal;
a parsing portion 902, configured to parse the first data and extract first sub-data and second sub-data from the first data, the first sub-data representing effect rendering parameters and the second sub-data representing original video data;
a rendering portion 903, configured to render and generate target video data according to the first sub-data and the second sub-data; and
a transmitting portion 904, configured to send the target video data to the terminal.
The server further includes:
a storage portion 905, configured to store effect template files.
The rendering portion 903 includes:
a lookup sub-portion 9031, configured to look up the effect template file corresponding to the effect rendering parameters;
a division sub-portion 9032, configured to divide the original video data into a plurality of pieces of original video sub-data;
a parallel rendering sub-portion 9033, configured to render the effect template file and the plurality of pieces of original video sub-data in parallel to generate a plurality of pieces of target video sub-data; and
a merging sub-portion 9034, configured to merge the plurality of pieces of target video sub-data into target video data.
The parsing portion 902 includes:
an obtaining sub-portion 9021, configured to parse the first data to obtain a header portion and a body portion of the first data; and
an extracting sub-portion 9022, configured to extract the first sub-data from the header portion and extract the second sub-data from the body portion.
Those skilled in the art should understand that the functions implemented by the parts of the server shown in FIG. 9 can be understood with reference to the foregoing description of the video processing method.
As shown in FIG. 11, in practical applications, the parsing portion 902 and the rendering portion 903 may be implemented by a processor 1101 on the server; the receiving portion 901 and the transmitting portion 904 may be implemented by an external communication interface 1102 on the server; and the storage portion 905 may be implemented by a memory 1103 on the server.
The processor 1101, the external communication interface 1102, and the memory 1103 on the server may interact through a system bus 1104.
In the above solution, the processor 1101 may be a general-purpose processor, a digital signal processor (DSP), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The processor 1101 can implement or execute the methods, steps, and flowcharts disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present invention may be carried out directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor.
In the above solution, the memory 1103 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM). The memory 1103 described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The external communication interface 1102 enables the server to access a wireless network based on a communication protocol, such as WiFi, 2G or 3G, 4G, the Internet, or a combination thereof.
FIG. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in FIG. 10, the terminal includes:
a capturing portion 1001, configured to capture original video data;
an obtaining portion 1002, configured to obtain effect rendering parameters;
an encapsulating portion 1003, configured to encapsulate first sub-data representing the effect rendering parameters and second sub-data representing the original video data in first data;
a transmitting portion 1004, configured to send the first data to a server;
a receiving portion 1005, configured to receive target video data sent by the server and rendered according to the first data; and
a display portion 1006, configured to display the target video data.
The capturing portion 1001 is further configured to separate the video data and audio data in a video file to obtain the original video data and original audio data.
The terminal further includes:
an audio playing portion 1007, configured to play the original audio data while the display portion displays the target video data.
Those skilled in the art should understand that the functions implemented by the parts of the terminal shown in FIG. 10 can be understood with reference to the foregoing description of the video processing method.
As shown in FIG. 12, in practical applications, the capturing portion 1001 may be implemented by a camera 1201 on the terminal; the obtaining portion 1002 may be implemented by an input device 1202 on the terminal; the encapsulating portion 1003 and the audio playing portion 1007 may be implemented by a processor 1203 on the terminal; the transmitting portion 1004 and the receiving portion 1005 may be implemented by an external communication interface 1204 on the terminal; and the display portion 1006 may be implemented by a display 1205 on the terminal.
The camera 1201, the input device 1202, the processor 1203, the external communication interface 1204, and the display 1205 on the terminal may interact through a system bus 1206.
In the above solution, the processor 1203 may be a general-purpose processor, a DSP, or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like.
The external communication interface 1204 enables the terminal to access a wireless network based on a communication protocol, such as WiFi, 2G or 3G, 4G, the Internet, or a combination thereof.
The input device 1202 may be a keyboard, a mouse, a trackball, a click wheel, a key, a button, or the like.
The display 1205 may be a cathode ray tube (CRT) display, a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a thin-film transistor (TFT) display, a plasma display, or the like.
本发明实施例所记载的技术方案之间,在不冲突的情况下,可以任意组合。The technical solutions described in the embodiments of the present invention can be arbitrarily combined without conflict.
本发明实施例所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质上实施的计算机程序产品的形式,所述存储介质包括但不限于U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘存储器、CD-ROM、光学存储器等。The integrated modules described in the embodiments of the present invention may also be stored in a computer readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, those skilled in the art will appreciate that embodiments of the present application can be provided as a method, system, or computer program product. Thus, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware. Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media containing computer usable program code, including but not limited to a USB flash drive, a mobile hard drive, a read only memory ( ROM, Read-Only Memory), Random Access Memory (RAM), disk storage, CD-ROM, optical storage, and the like.
本申请是根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图 和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present application is described in terms of flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present application. It should be understood that the flow chart can be implemented by computer program instructions And/or a combination of the processes and/or blocks in the block diagrams, and the flowcharts and/or blocks in the flowcharts. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing device to produce a machine for the execution of instructions for execution by a processor of a computer or other programmable data processing device. Means for implementing the functions specified in one or more of the flow or in a block or blocks of the flow chart.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。The computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device. The apparatus implements the functions specified in one or more blocks of a flow or a flow and/or block diagram of the flowchart.
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on a computer or other programmable device to produce computer-implemented processing for execution on a computer or other programmable device. The instructions provide steps for implementing the functions specified in one or more of the flow or in a block or blocks of a flow diagram.
Although preferred embodiments of the present application have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being configured to execute the video processing method of the embodiments of the present invention described above.
Although this specification contains many specific implementation details, these should not be construed as limitations on the scope of any claims, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations, and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be a sub-combination or a variation of a sub-combination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Furthermore, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain embodiments, multitasking or parallel processing may be used.
Industrial Applicability
In the technical solutions of the embodiments of the present invention, the terminal encapsulates first sub-data characterizing special-effect rendering parameters and second sub-data characterizing original video data into first data, and sends the first data to a server. The server parses the first data; renders and generates target video data according to the first sub-data and the second sub-data; and sends the target video data to the terminal, for the terminal to display the target video data. In this way, the terminal hands the computationally complex rendering process over to the server, the rendering speed is greatly improved, and rich and varied special effects can be created to meet users' needs.
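The round trip summarized above can be sketched in Python. Everything here is a hypothetical illustration — the function names (`pack_first_data`, `server_process`, `apply_effect`), the JSON-plus-length-prefix layout, and the stand-in "effect" are invented for clarity and are not the application's actual protocol or rendering pipeline:

```python
import json

def pack_first_data(effect_params: dict, raw_video: bytes) -> bytes:
    """Terminal side: encapsulate both sub-data items in a single payload."""
    header = json.dumps(effect_params).encode("utf-8")
    # 4-byte big-endian header length, then the header, then the video body.
    return len(header).to_bytes(4, "big") + header + raw_video

def apply_effect(params: dict, video: bytes) -> bytes:
    # Stand-in for the real server-side template/GPU rendering.
    tag = params["effect"].encode("utf-8")
    return tag + b":" + video

def server_process(first_data: bytes) -> bytes:
    """Server side: parse the payload, render, and return the target video."""
    header_len = int.from_bytes(first_data[:4], "big")
    effect_params = json.loads(first_data[4:4 + header_len])
    raw_video = first_data[4 + header_len:]
    return apply_effect(effect_params, raw_video)

payload = pack_first_data({"effect": "sepia"}, b"FRAMES")
target_video = server_process(payload)
```

In a real deployment the payload would travel over a network transport and the server would run an actual renderer; the pack/parse symmetry is the point of the sketch.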

Claims (14)

  1. A video processing method, the method comprising:
    receiving first data sent by a terminal;
    parsing the first data, and extracting first sub-data and second sub-data from the first data, the first sub-data characterizing special-effect rendering parameters, and the second sub-data characterizing original video data;
    rendering and generating target video data according to the first sub-data and the second sub-data; and
    sending the target video data to the terminal.
  2. The video processing method according to claim 1, wherein the rendering and generating target video data according to the first sub-data and the second sub-data comprises:
    finding a special-effect template file corresponding to the special-effect rendering parameters, and dividing the original video data into a plurality of original video sub-data;
    rendering the special-effect template file and the plurality of original video sub-data in parallel to generate a plurality of target video sub-data; and
    merging the plurality of target video sub-data into the target video data.
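The divide / parallel-render / merge steps of claim 2 can be illustrated with a minimal sketch. The thread pool, the string "frames", and the prefix-concatenation "effect" are all stand-ins for the real template rendering; only the shape of the computation matches the claim:

```python
from concurrent.futures import ThreadPoolExecutor

def render_chunk(template, frames):
    # Stand-in for applying the special-effect template to one chunk of frames.
    return [template + frame for frame in frames]

def render_in_parallel(template, frames, n_chunks=4):
    # Divide the original video data into several original video sub-data.
    size = max(1, -(-len(frames) // n_chunks))  # ceiling division
    chunks = [frames[i:i + size] for i in range(0, len(frames), size)]
    # Render each chunk in parallel (Executor.map preserves input order).
    with ThreadPoolExecutor() as pool:
        rendered = pool.map(lambda chunk: render_chunk(template, chunk), chunks)
    # Merge the target video sub-data back into a single ordered stream.
    return [frame for chunk in rendered for frame in chunk]

target = render_in_parallel("fx-", ["f1", "f2", "f3", "f4", "f5"])
```

A production server would more likely split on keyframe boundaries and fan out to GPU workers, but the split-map-merge structure is the same.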
  3. The video processing method according to claim 1 or 2, wherein the parsing the first data and extracting the first sub-data and the second sub-data from the first data comprises:
    parsing the first data to obtain a header portion and a body portion of the first data; and
    extracting the first sub-data from the header portion, and extracting the second sub-data from the body portion.
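One plausible byte layout for claim 3's header/body split is shown below. The fixed length-prefix format, the JSON encoding of the parameters, and all function names are assumptions for illustration; the application does not specify a wire format:

```python
import json
import struct

# Assumed layout: an 8-byte prefix giving header length and body length,
# followed by the header portion (JSON-encoded special-effect rendering
# parameters) and the body portion (raw video bytes).
PREFIX = struct.Struct(">II")

def build_packet(params: dict, video: bytes) -> bytes:
    header = json.dumps(params).encode("utf-8")
    return PREFIX.pack(len(header), len(video)) + header + video

def parse_packet(data: bytes):
    # Step 1: parse the first data into its header portion and body portion.
    header_len, body_len = PREFIX.unpack_from(data, 0)
    start = PREFIX.size
    header = data[start:start + header_len]
    body = data[start + header_len:start + header_len + body_len]
    # Step 2: extract the first sub-data (parameters) and second sub-data (video).
    return json.loads(header), body

params, video = parse_packet(build_packet({"filter": "blur", "radius": 2}, b"\x00\x01"))
```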
  4. A video processing method, the method comprising:
    collecting original video data, and acquiring special-effect rendering parameters;
    encapsulating first sub-data characterizing the special-effect rendering parameters and second sub-data characterizing the original video data into first data;
    sending the first data to a server;
    receiving target video data rendered and generated by the server according to the first data; and
    displaying the target video data.
  5. The video processing method according to claim 4, wherein the collecting original video data comprises:
    separating video data and audio data in a video file to obtain the original video data and original audio data.
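In practice, the audio/video separation of claim 5 is done by a demuxer in a media library such as FFmpeg. The toy container below — a list of tagged chunks, with all names invented here — only illustrates the idea of splitting interleaved streams into separate video and audio data:

```python
def demux(container):
    """Split interleaved (kind, payload) chunks into video and audio streams."""
    video, audio = [], []
    for kind, payload in container:
        (video if kind == "v" else audio).append(payload)
    return video, audio

# A toy "video file": video frames and audio samples interleaved in time order.
clip = [("v", b"frame1"), ("a", b"pcm1"), ("v", b"frame2"), ("a", b"pcm2")]
raw_video, raw_audio = demux(clip)
```

After separation, only `raw_video` needs to travel to the server for rendering, while `raw_audio` can stay on the terminal for playback alongside the returned target video (claim 6).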
  6. The video processing method according to claim 5, wherein the method further comprises:
    playing the original audio data while the target video data is displayed.
  7. A server, the server comprising:
    a receiving part, configured to receive first data sent by a terminal;
    a parsing part, configured to parse the first data and extract first sub-data and second sub-data from the first data, the first sub-data characterizing special-effect rendering parameters, and the second sub-data characterizing original video data;
    a rendering part, configured to render and generate target video data according to the first sub-data and the second sub-data; and
    a sending part, configured to send the target video data to the terminal.
  8. The server according to claim 7, wherein the server further comprises:
    a storage part, configured to store special-effect template files;
    and wherein the rendering part comprises:
    a finding sub-part, configured to find a special-effect template file corresponding to the special-effect rendering parameters;
    a dividing sub-part, configured to divide the original video data into a plurality of original video sub-data;
    a parallel rendering sub-part, configured to render the special-effect template file and the plurality of original video sub-data in parallel to generate a plurality of target video sub-data; and
    a merging sub-part, configured to merge the plurality of target video sub-data into the target video data.
  9. The server according to claim 7 or 8, wherein the parsing part comprises:
    an obtaining sub-part, configured to parse the first data to obtain a header portion and a body portion of the first data; and
    an extracting sub-part, configured to extract the first sub-data from the header portion and extract the second sub-data from the body portion.
  10. A terminal, the terminal comprising:
    a collecting part, configured to collect original video data;
    an acquiring part, configured to acquire special-effect rendering parameters;
    an encapsulating part, configured to encapsulate first sub-data characterizing the special-effect rendering parameters and second sub-data characterizing the original video data into first data;
    a sending part, configured to send the first data to a server;
    a receiving part, configured to receive target video data rendered and generated by the server according to the first data; and
    a display part, configured to display the target video data.
  11. The terminal according to claim 10, wherein the collecting part is further configured to separate video data and audio data in a video file to obtain the original video data and original audio data.
  12. The terminal according to claim 11, wherein the terminal further comprises:
    an audio playing part, configured to play the original audio data when the display part displays the target video data.
  13. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the video processing method according to any one of claims 1 to 3.
  14. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the video processing method according to any one of claims 4 to 6.
PCT/CN2017/095338 2016-08-04 2017-07-31 Video processing method, server, terminal, and computer storage medium WO2018024179A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610639190.0 2016-08-04
CN201610639190.0A CN106060655B (en) 2016-08-04 2016-08-04 Video processing method, server and terminal

Publications (1)

Publication Number Publication Date
WO2018024179A1 true WO2018024179A1 (en) 2018-02-08

Family

ID=57480392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095338 WO2018024179A1 (en) 2016-08-04 2017-07-31 Video processing method, server, terminal, and computer storage medium

Country Status (2)

Country Link
CN (1) CN106060655B (en)
WO (1) WO2018024179A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111199519A (en) * 2018-11-16 2020-05-26 北京微播视界科技有限公司 Method and device for generating special effect package
CN111355978A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Video file processing method and device, mobile terminal and storage medium
CN112116690A (en) * 2019-06-19 2020-12-22 腾讯科技(深圳)有限公司 Video special effect generation method and device and terminal
CN112190933A (en) * 2020-09-30 2021-01-08 珠海天燕科技有限公司 Special effect processing method and device in game scene
CN113192152A (en) * 2021-05-24 2021-07-30 腾讯音乐娱乐科技(深圳)有限公司 Audio-based image generation method, electronic device and storage medium
CN115049775A (en) * 2022-08-15 2022-09-13 广州中平智能科技有限公司 Dynamic allocation method and system for meta-universe rendering in dual-carbon energy industry

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
CN106060655B (en) * 2016-08-04 2021-04-06 腾讯科技(深圳)有限公司 Video processing method, server and terminal
CN107223264B (en) * 2016-12-26 2022-07-08 达闼机器人股份有限公司 Rendering method and device
CN107369165A (en) * 2017-07-10 2017-11-21 Tcl移动通信科技(宁波)有限公司 A kind of video selection picture optimization method and storage medium, intelligent terminal
CN109474844B (en) 2017-09-08 2020-08-18 腾讯科技(深圳)有限公司 Video information processing method and device and computer equipment
CN108536790A (en) * 2018-03-30 2018-09-14 北京市商汤科技开发有限公司 The generation of sound special efficacy program file packet and sound special efficacy generation method and device
CN108986227B (en) * 2018-06-28 2022-11-29 北京市商汤科技开发有限公司 Particle special effect program file package generation method and device and particle special effect generation method and device
CN109275007B (en) * 2018-09-30 2020-11-20 联想(北京)有限公司 Processing method and electronic equipment
CN110245258B (en) * 2018-12-10 2023-03-17 浙江大华技术股份有限公司 Method for establishing index of video file, video file analysis method and related system
CN109731337B (en) * 2018-12-28 2023-02-21 超级魔方(北京)科技有限公司 Method and device for creating special effect of particles in Unity, electronic equipment and storage medium
CN109600629A (en) * 2018-12-28 2019-04-09 北京区块云科技有限公司 A kind of Video Rendering method, system and relevant apparatus
CN110012352B (en) * 2019-04-17 2020-07-24 广州华多网络科技有限公司 Image special effect processing method and device and video live broadcast terminal
CN110062163B (en) 2019-04-22 2020-10-20 珠海格力电器股份有限公司 Multimedia data processing method and device
CN113572948B (en) * 2020-04-29 2022-11-11 华为技术有限公司 Video processing method and video processing device
CN111957039A (en) * 2020-09-04 2020-11-20 Oppo(重庆)智能科技有限公司 Game special effect realization method and device and computer readable storage medium
CN112532896A (en) * 2020-10-28 2021-03-19 北京达佳互联信息技术有限公司 Video production method, video production device, electronic device and storage medium
CN112291590A (en) * 2020-10-30 2021-01-29 北京字节跳动网络技术有限公司 Video processing method and device
CN115243108B (en) * 2022-07-25 2023-04-11 深圳市腾客科技有限公司 Decoding playing method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103297660A (en) * 2012-02-23 2013-09-11 上海魔睿信息科技有限公司 Real-time interaction special effect camera shooting and photographing method
CN103391414A (en) * 2013-07-24 2013-11-13 杭州趣维科技有限公司 Video processing device and processing method applied to mobile phone platform
CN104811829A (en) * 2014-01-23 2015-07-29 苏州乐聚一堂电子科技有限公司 Karaoke interactive multifunctional special effect system
CN106060655A (en) * 2016-08-04 2016-10-26 腾讯科技(深圳)有限公司 Video processing method, server and terminal

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
EP1258837A1 (en) * 2001-05-14 2002-11-20 Thomson Licensing S.A. Method to generate mutual photometric effects
US20040226047A1 (en) * 2003-05-05 2004-11-11 Jyh-Bor Lin Live broadcasting method and its system for SNG webcasting studio
CN102868923B (en) * 2012-09-13 2016-04-06 北京富年科技有限公司 Be applied to special-effect cloud processing method, the equipment and system of mobile terminal video
CN104461788A (en) * 2014-12-30 2015-03-25 成都品果科技有限公司 Mobile terminal photo backup method and system based on remote special effect rendering
CN104796767B (en) * 2015-03-31 2019-02-12 北京奇艺世纪科技有限公司 A kind of cloud video editing method and system
CN104732568A (en) * 2015-04-16 2015-06-24 成都品果科技有限公司 Method and device for online addition of lyric subtitles to pictures
CN105338370A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal



Also Published As

Publication number Publication date
CN106060655B (en) 2021-04-06
CN106060655A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
WO2018024179A1 (en) Video processing method, server, terminal, and computer storage medium
US11622134B2 (en) System and method for low-latency content streaming
WO2017000580A1 (en) Media content rendering method, user equipment, and system
CN107645491A (en) Media flow transmission equipment and media serving device
US10791160B2 (en) Method and apparatus for cloud streaming service
JP7392136B2 (en) Methods, computer systems, and computer programs for displaying video content
WO2022105597A1 (en) Method and apparatus for playing back video at speed multiples , electronic device, and storage medium
US8856212B1 (en) Web-based configurable pipeline for media processing
CN110446114B (en) Multimedia data processing device, method, electronic equipment and storage medium
JP2022525895A (en) Methods, devices, and programs for segmented data stream processing
CN113330751B (en) Method and apparatus for storage and signaling of media segment size and priority ranking
JP6920475B2 (en) Modify digital video content
CN102819851B (en) Method for implementing sound pictures by using computer
JP7416351B2 (en) Methods, apparatus and computer programs for stateless parallel processing of tasks and workflows
KR102385338B1 (en) virtual device, rendering device, WoT device and system for processing web-based content
CN113535063A (en) Live broadcast page switching method, video page switching method, electronic device and storage medium
WO2020258907A1 (en) Virtual article generation method, apparatus and device
JP2018522346A (en) Conversion of FLASH content to HTML content by generating an instruction list
WO2022116822A1 (en) Data processing method and apparatus for immersive media, and computer-readable storage medium
WO2013181756A1 (en) System and method for generating and disseminating digital video
EP3229478B1 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
WO2020205286A1 (en) Supporting interactive video on non-browser-based devices
US20140157097A1 (en) Selecting video thumbnail based on surrounding context
US20130106873A1 (en) Pluggable media source and sink
CN115329122A (en) Audio information processing method, audio information presenting method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17836361

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17836361

Country of ref document: EP

Kind code of ref document: A1