CN107277616A - Video special effect rendering method, device and terminal - Google Patents

Video special effect rendering method, device and terminal

Info

Publication number
CN107277616A
Authority
CN
China
Prior art keywords
video
special
frame data
gpu
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710600569.5A
Other languages
Chinese (zh)
Inventor
彭召龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou lieyou Information Technology Co.,Ltd.
Original Assignee
Guangzhou Aipai Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Aipai Network Technology Co Ltd filed Critical Guangzhou Aipai Network Technology Co Ltd
Priority to CN201710600569.5A
Publication of CN107277616A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Generation (AREA)

Abstract

A video special effect rendering method, device and terminal of the present invention: in response to a user instruction, an acquired video source is decoded to obtain first video frame data in a specific format; the first video frame data and preset special effect parameters are sent to a GPU, so that the GPU renders the first video frame data; second video frame data is then obtained from the GPU and encoded to obtain a target special effect video, the second video frame data being the result of the GPU rendering the first video frame data. The method makes effective use of the GPU's graphics processing capability and also uses the GPU's parallel computing capability to accelerate video special effect rendering, shortening the video rendering time so that the rendering preview frame rate is high and playback is smooth; it also reduces CPU usage during rendering, thereby lowering the heating temperature of the mobile terminal.

Description

Video special effect rendering method, device and terminal
Technical field
The present invention relates to the field of video technology, and in particular to a video special effect rendering method, device and terminal.
Background technology
In this era of short video, users hope to use software with video editing functions on mobile terminals such as mobile phones and tablet computers to convert the scattered photos and images of themselves, their friends and their families into interesting dynamic videos and share them with friends and family. In this conversion process, rendering the video frames is an important step.
Traditional video editing software on mobile terminals applies special effects to video image data by traversing the data with a particular effect algorithm on the CPU. The CPU has a limited number of computing units and its parallel computing capability is not very powerful, so it occupies a large amount of computing time and memory. Rendering video special effects with the CPU is therefore inefficient and memory-hungry, the picture easily stutters during computation, processing efficiency is low, and the terminal heats up severely while computing.
Summary of the invention
The purpose of the present invention is to provide a video special effect rendering method, device and terminal that can solve some or all of the above problems: the efficiency of video rendering is improved, the CPU's memory occupation is reduced, the computation time is shortened, and the terminal heats up only slightly during computation.
To achieve these goals, the present invention provides the following technical solution:
A video special effect rendering method, including:
in response to a user instruction, decoding an acquired video source to obtain first video frame data in a specific format;
sending the first video frame data and preset special effect parameters to a GPU, so that the GPU renders the first video frame data;
obtaining second video frame data from the GPU and encoding it to obtain a target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data.
Further, the step of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU renders the first video frame data specifically includes:
initializing a rendering environment for the video special effect and sending a render instruction to the GPU, so that the GPU renders the first video frame data in response to the render instruction.
Further, the step of initializing the rendering environment of the video special effect and sending the render instruction to the GPU so that the GPU renders the first video frame data in response to the render instruction specifically includes:
obtaining the context environment of a first platform that performs effect rendering on the target video and an object used to show the second video frame on a device, where the context environment of the first platform is obtained by the eglGetCurrentContext function and the object that shows the second video frame on the device is obtained by the eglGetDisplay function;
creating objects for processing the first video frame data;
loading the Shader programs that perform the rendering operations and compiling the Shader programs.
Further, the step of creating the objects for processing the first video frame data specifically includes:
creating a buffer object that caches the rendered picture data.
Further, the buffer object includes a render buffer object and/or a read buffer object.
Further, the step of creating the objects for processing the first video frame data specifically includes:
creating a frame buffer object for drawing the video frame data obtained after the first video frame data is rendered.
Further, the step of creating the objects for processing the first video frame data specifically includes:
creating a video image picture texture object;
filling the first video frame data into the texture object.
Further, after the step of obtaining the second video frame data from the GPU and encoding it to obtain the target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data, the method specifically includes:
showing the target special effect video.
Further, the step of decoding the acquired video source in response to a user instruction to obtain the first video frame data in the specific format specifically includes:
converting the format of the decoded video source to obtain the first video frame data in the specific format.
Further, the format of the decoded video source is a 4:2:0 YUV planar format data sequence, and the specific format of the first video frame data is a 4:4:4 YUV planar format data sequence.
Further, the step of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU renders the first video frame data specifically includes:
initializing the first video frame data and saving the initial environment state of the first platform;
binding the rendering program with the functions in the first platform, opening the attributes of the rendering program, passing the special effect parameters into the rendering program, and performing the rendering operations on the first video frame;
restoring the environment state of the first platform to the initial environment state.
Further, the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
loading the Shader programs that obtain the vertex data of the first video frame and compiling the Shader programs;
creating, through a function, the program used for rendering.
Further, the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
creating an object for executing the saving of the initial environment state of the first platform.
Further, the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
binding the frame buffer object, and the texture object attached to the frame buffer object, into the context in which the first platform performs rendering;
passing the special effect parameters into the rendering program;
calling the drawing functions in the first platform to render the effect on the first video frame.
Preferably, the special effect parameters include the vertex data and texture data of the effect.
Further, the step of calling the drawing functions in the first platform to render the effect on the first video frame specifically includes:
initializing the texture object and the environment in which the first platform performs texture rendering, and creating for the texture object the attributes used for texture rendering, where the environment for texture rendering is obtained through the eglGetCurrentContext function;
according to the attributes for texture rendering, filling the texture data into the attributes and setting the texture object parameters;
creating rendering objects and rendering the effect on the first video frame.
Preferably, the texture object parameters include any one or more of the following parameters: vertex coordinates, texture coordinates, background color, rotation angle.
Further, the step of restoring the environment state of the first platform to the initial environment state specifically includes:
unbinding the rendering program from the first platform.
A video special effect rendering device, including a response unit, a rendering unit and a coding unit,
the response unit being configured to decode an acquired video source in response to a user instruction to obtain first video frame data in a specific format;
the rendering unit being configured to send the first video frame data and preset special effect parameters to a GPU, so that the GPU renders the first video frame data;
the coding unit being configured to obtain second video frame data from the GPU and encode it to obtain a target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data.
A video special effect rendering terminal, including a memory and a processor, the memory storing a computer program, characterized in that the steps of the above method are realized when the computer program is executed by the processor.
Compared with the prior art, the solution of the present invention has the following advantages:
In the video special effect rendering method, device and terminal of the present invention, in response to a user instruction, an acquired video source is decoded to obtain first video frame data in a specific format; the first video frame data and preset special effect parameters are sent to the GPU, so that the GPU renders the first video frame data. As a graphics processor, the GPU is designed specifically to perform the complex mathematical and geometric computations that graphics rendering requires, and it also has multi-threaded computing capability, so multi-threaded graphics can be processed in parallel and rendering time is saved. Second video frame data is obtained from the GPU and encoded to obtain the target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data; the second video frame can be encoded and compressed again into video of another format, so that the effect-rendered video can be displayed or used by other tools, and since this step is not carried out on the GPU, the GPU's load is reduced and its capacity for rendering effects on video is improved.
Further, through the preferred embodiments of the present invention, the following advantages can be achieved:
1. The video special effect rendering method, device and terminal of the invention mainly perform the rendering operations on the video frames on the GPU, effectively using the GPU's graphics processing capability, making the rendering process smoother, with a high preview frame rate and smooth preview pictures; the CPU occupation during image processing is reduced and the video rendering time is shortened, and because of the GPU's inherent properties, multi-threaded parallel operation can also be used during rendering to render effects on video frames.
2. In the video special effect rendering method, device and terminal of the invention, the CPU's main role in the video frame rendering process is to decode the video source data and convert its format, so that the video source data can be received smoothly by the GPU and have its effects rendered; the CPU also encodes and compresses the second video data so that the second video frame data can be used or displayed by other tools. The CPU's computing capability is thus used effectively, and the CPU and GPU jointly render effects on the video frames, releasing the large amount of CPU memory occupied in the rendering process, shortening the video rendering time, improving the overall performance of the mobile terminal and also lowering the mobile terminal's temperature during video rendering.
The additional aspects and advantages of the present invention will be set forth in part in the description that follows, and will become obvious from the following description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of a video special effect rendering method of the present invention;
Fig. 2 is a schematic structural diagram of a video special effect rendering device of the present invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, in which identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they are used only to explain the present invention and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The term "and/or" as used herein includes all or any unit, and all combinations, of one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have a meaning consistent with their meaning in the context of the prior art, and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
Video special effect rendering means synthesizing selected pictures, images and the like with video frames through a video renderer so that the video looks more lively. There are many kinds of software for effect rendering; the preferred embodiments of the present invention implement video special effect rendering jointly through the OpenGL interface in combination with the GPU and the CPU. A GPU (graphics processing unit) employs a large number of computing units and very long pipelines but has only very simple control logic and omits the cache; what the GPU excels at is large-scale parallel computation. OpenGL (Open Graphics Library) is a professional graphics programming interface that defines a cross-language, cross-platform programming interface specification; it is used for three-dimensional (and also two-dimensional) graphics and is a powerful and convenient low-level graphics library. OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs and game consoles. A shader language is a high-level language dedicated to graphics programming; it is developed from C and, besides the general features of C, also has some of the excellent features of other shading languages. Compared with general-purpose programming languages, a shader language provides richer primitive types, such as vectors and matrices, which simplifies the development workflow, and the new features added by shading languages make them more concise and efficient for 3D graphics processing.
A video special effect rendering method, as shown in Fig. 1, includes steps S100 to S300.
S100: in response to a user instruction, decoding an acquired video source to obtain first video frame data in a specific format.
After the user has determined the video on which effects are to be rendered, the video is sent to the client, and the user also sends a render instruction to the client. After receiving the user's instruction, the client starts decoding the determined video source, decoding the frame data format of the video source into an image data format that the client can readily recognize, so that effects can be rendered on the video frames. The executing body of this step may be client software that has a video frame rendering function among other functions, or client software that has only a video frame rendering function. The data computation of this step runs on the CPU. Because this embodiment of the invention needs the GPU to process YUV data, the CPU obtains the YUV data after decoding the video and stores it in memory, but the GPU cannot read that memory directly. For this reason, the CPU of this embodiment decodes the current video frame to be played to obtain YUV data and places the YUV data into a data group that the GPU can access, so that the GPU can read it conveniently. Specifically, in this embodiment a data group that the GPU is allowed to access is provided in the terminal; the CPU stores the YUV data into the data group and triggers the GPU to read it. Before the GPU of this embodiment reads the data of the data group, it first binds the data group according to a script, and only then can it read the data of the bound data group. In implementation, the binding can be done once before the whole video file is processed, after which every video frame can be handled, or each frame of video data can be bound separately before processing.
S200: sending the first video frame data and preset special effect parameters to the GPU, so that the GPU renders the first video frame data.
As stated above, the CPU stores the decoded frame data of the video source into the data group as a group, obtaining the first video frame data, and the CPU sends an instruction to the GPU indicating that the data preparation is complete, so that the GPU can obtain the first video frame data from the CPU, thereby freeing the memory occupied by the data group in the CPU. After the GPU has obtained the first video frame data, the GPU processes the first video frame data to obtain the frame data of the video with the rendered effect. In this invention the renderer is OpenGL; after the GPU has obtained the first video frame data, the functions in OpenGL that perform the rendering operations need to be called, so that the GPU can process the first video frame data with the functions in OpenGL. The specific implementation is described in detail below.
S300: obtaining second video frame data from the GPU and encoding it to obtain the target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data.
The CPU obtains the rendered video frame data, i.e. the second video frame data, from the GPU, and the CPU encodes and compresses the second video frame data into another data format or a universal data format, for example the 4:2:2 YUV data format. Encoding and compressing the second video frame data mainly makes it convenient for other tools to use the second video frame data to obtain a usable target special effect video, and/or to show the specific target special effect video on a display device. The second video frame data is obtained from the first video frame data in a process carried out on the GPU, i.e. the GPU processes the first video frame data to obtain the second video frame data; the specific processing is described in detail below.
Further, the step S200 of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU renders the first video frame data specifically includes:
S210: initializing the rendering environment of the video special effect and sending a render instruction to the GPU, so that the GPU renders the first video frame data in response to the render instruction.
Before rendering the video special effect, the rendering environment for the video needs to be initialized so that the attributes contained in the rendering environment can support rendering effects on the video. Before initialization, the parameters related to the video special effect need to be set; they may be set in the rendering tool or in the system that carries the rendering tool. After the rendering environment has been initialized, the platform or tool that performs the rendering sends a render instruction to the GPU, so that the GPU can render effects on the video frame data to be rendered in the aforementioned rendering environment; here, the video frame data to be rendered is the first video frame data.
Specifically, in one embodiment, the platform or tool that renders effects on video frames is OpenGL; of course, it may also be another rendering interface or application that implements video special effects, such as MediaShow Espresso or DirectX. Before initialization, the parameters related to the video special effect are set in OpenGL, or in the system that carries OpenGL. Initializing the rendering environment of the video special effect mainly includes determining whether the configuration of the physical-layer hardware of the terminal, such as memory and computing speed, can be used for rendering, and/or whether the configuration of the application-layer system that performs the effect rendering, such as application-layer plug-ins, controls and the language environment, is suitable. Only when the environmental conditions satisfy effect rendering of the video frame can the effect be rendered on the video. After the environmental conditions are met, OpenGL sends a render instruction to the hardware device (GPU) that performs the computation, and the hardware device starts computing and processes the video frame data to obtain the specific video frame data (the second video frame data). OpenGL calls functions to send the render instruction to the GPU; the mapping relations of the functions are stored in the OpenGL library. The render instruction includes the effects to be rendered onto the video frame, the time interval of each specific action, and so on. During rendering, the GPU can render effects on multiple threads of a single video at the same time, or render effects on multiple threads of multiple videos. For a single video special effect whose input frame is the first video frame data, the number of input and output video frames does not change: the output is still the image data of one frame of processed first video frame data, and the processed first video frame data that is output is the second video frame data. For effect processing of multiple video materials, there are image data sequences of two groups of first video frame data as input sources: the last n frames of image data of one video are cut out and the first n frames of image data of the other are cut in, while n frames of image data in YUV video format are needed as output; the value of n is a constant determined by the user. Before rendering, the first video frame data carries the sequence identifier of the video frame within its video and an identifier of the video frame's source, so that the video frame data output after rendering can be arranged according to the order of the video frames of the video source.
Further, the step S210 of initializing the rendering environment of the video special effect and sending the render instruction to the GPU so that the GPU renders the first video frame data in response to the render instruction specifically includes steps S211 to S213.
S211: obtaining the context environment of the first platform that performs effect rendering on the target video and the object used to show the second video frame on a device, where the context environment of the first platform is obtained by the eglGetCurrentContext function and the object that shows the second video frame on the device is obtained by the eglGetDisplay function.
As stated above, before rendering effects on a video frame, the environment for effect rendering of the video frame must be initialized. This includes the system display device object on which the video frame will be shown after effect rendering is completed, so that the rendered video frame can be shown on the display device through the display device object; the display device may be a mobile phone, a computer display screen, a television or any other device that can show video. It also includes obtaining the context environment of the rendering tool or platform that provides the rendering functions, so that the GPU can compute according to the rendering functions and obtain the rendered video frame data. The tool may be an application such as the aforementioned OpenGL; the platform may be a system that carries the OpenGL tool and in which OpenGL can perform rendering operations, or OpenGL itself. Such systems include the Android system, the iOS system, the Windows system and so on; other tools or applications for rendering are treated similarly and are not described again here. The first video frame data can thus be computed, realizing two-dimensional or three-dimensional effect rendering of the video frame. Initialization specifically includes selecting the pixel format, requesting video memory, selecting the display device (some equipment may have more than one display), selecting certain characteristics (attributes that some effects require), and specifying the current drawing environment. When there are multiple drawing environments, each environment has different parameters, and it is necessary to switch among these environments (for example, drawing different pictures on two displays at the same time), so the current drawing environment must be specified.
Specifically, in this step there is also an intermediate interface, EGL, between OpenGL and the system that carries OpenGL, so that OpenGL and the local system can be connected through EGL. Therefore, the eglGetCurrentContext function in the library connected to EGL is called to obtain the OpenGL context environment, so that during rendering the functions in the library connected to OpenGL can be called to render the video frame; in other words, the GPU can process the first video frame data with the functions in the library connected to OpenGL and thereby obtain the second video frame data. Similarly, the eglGetDisplay function in the library connected to EGL is called to obtain the display device object on which the second video frame data is shown in video frame form. The display device object may be a computer display screen, a mobile phone display screen, etc.; the display screen exists as an object in the program of the system or of OpenGL, and the object can be called by other applications and/or systems, such as OpenGL and/or Android.
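As an illustration only, a minimal sketch of how an EGL-based renderer might query the current context and default display with the standard EGL 1.x C API; the surrounding setup such as eglInitialize and surface creation is assumed to have been done elsewhere:

    #include <EGL/egl.h>
    #include <stdio.h>

    /* Query the context and display the renderer will work with.
     * Assumes an EGL context has already been made current on this thread. */
    static int query_egl_environment(EGLDisplay *out_display, EGLContext *out_context)
    {
        EGLContext ctx = eglGetCurrentContext();              /* context of the first platform */
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);  /* object used to show frames */

        if (ctx == EGL_NO_CONTEXT || dpy == EGL_NO_DISPLAY) {
            fprintf(stderr, "EGL context or display not available\n");
            return -1;
        }
        *out_display = dpy;
        *out_context = ctx;
        return 0;
    }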
S212: creating the objects used to process the first video frame data.
During the video frame rendering process, the first video frame data can only become the second video frame data after being processed by OpenGL, the underlying system and the GPU. Because OpenGL only provides a basic program framework for the video frame rendering process, in order to process the first video frame data conveniently, objects for processing the data of the first video frame are created in OpenGL and/or in the platform that carries OpenGL. The specific objects created are described in detail later and are not repeated here.
S213: loading the Shader programs that perform the rendering operations and compiling the Shader programs.
As noted above, OpenGL provides the basic program framework for the video frame rendering process; to optimize its rendering of the video, objects with different functions are created. On the basis of its original framework, OpenGL provides a fixed-function rendering pipeline, keeping the video frame rendering process within the control of the developers. A shader is loaded: a shader is an editable program that replaces the fixed rendering pipeline, and its main purpose is to implement image rendering; a shader can obtain the corresponding color parameters directly from the GPU's computation. Compiling the shaders is the loading of the programs used to render the video frames. Shaders include vertex shaders and pixel shaders. In 3D graphics everything is composed of triangles; a vertex shader computes vertex positions and prepares for later pixel rendering, while a pixel shader, taking the pixel as its unit, computes lighting and color. In OpenGL the vertex-level shader is called the vertex shader and the pixel-level shader is called the fragment shader.
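A minimal sketch of loading and compiling one shader stage with the standard OpenGL ES 2.0 C API; the error-log handling shown is an illustrative assumption, not taken from the patent:

    #include <GLES2/gl2.h>
    #include <stdio.h>

    /* Compile one shader stage from source; returns 0 on failure. */
    static GLuint compile_shader(GLenum type, const char *source)
    {
        GLuint shader = glCreateShader(type);          /* GL_VERTEX_SHADER or GL_FRAGMENT_SHADER */
        glShaderSource(shader, 1, &source, NULL);      /* load the Shader program */
        glCompileShader(shader);                       /* compile it */

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[512];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }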
Further, the step S212 of creating the objects for processing the first video frame data specifically includes step S2121.
S2121: creating a buffer object that caches the rendered picture data.
The buffer object that caches the rendered picture data is mainly used to store the picture data after GPU rendering; after rendering is completed, the rendered picture data can be read from this storage area. The buffer object is a piece of program used to set aside the memory that stores the rendered picture data. The buffer object may exist in the tool that renders the video frames, or in the application or system that carries the rendering tool: a part of the storage space of the rendering tool, application or system is set aside to store the picture data after GPU rendering, and after rendering is completed the rendered picture data can be read from that storage area. If it is in the application or system that carries the rendering tool, the render buffer object overlaps with the rendering tool, so that the picture data rendered by the GPU can still be stored in the render buffer object.
Specifically, if the rendering tool is OpenGL, the buffer object marks off part of OpenGL's storage space as the buffer area for storing the picture data after GPU rendering, and after rendering is completed the rendered picture data can be read from that storage area; alternatively, the buffer object takes part of the memory or of the hard disk's virtual memory as the buffer area for temporarily storing the picture data after GPU rendering, and after rendering is completed the rendered picture data can be read from that area.
Further, the buffer object includes a render buffer object and/or a read buffer object.
The render buffer object of the buffer object is used to store the rendered video effect picture data, and the read buffer object is used to read the rendered video effect picture data. The video effect picture data is rendered into the render buffer, and after rendering is completed that buffer's picture data is marked as valid; the render buffer object and the read buffer object are then swapped; the rendered picture data is read from the read buffer, and after reading is completed the render buffer's picture data is marked as invalid. The two buffer objects can be processed simultaneously in a multi-threaded environment, reducing the time that the rendering and reading processes spend waiting for each other and thereby improving the processing efficiency of video effect rendering. Further, in order to read the rendered video effect picture from the read buffer more quickly, an EGLImage object of EGL is additionally created; this object is mainly used to accelerate reading the rendered picture data from the GPU, so that the rendered picture data can be used by the local system through EGL.
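As a hedged illustration of the read-back side only (not the platform-specific EGLImage fast path described above), a sketch of copying the rendered pixels out of the currently bound framebuffer with glReadPixels; the buffer sizes and names are assumptions:

    #include <GLES2/gl2.h>
    #include <stdlib.h>

    /* Read one rendered RGBA frame back from the framebuffer currently bound
     * for rendering (e.g. the FBO created in S2122). */
    static unsigned char *read_rendered_frame(int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);
        if (!pixels)
            return NULL;
        glFinish();  /* ensure the GPU has finished rendering this frame */
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;  /* caller encodes/compresses this second video frame data */
    }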
Further, the step S212 of creating the objects for processing the first video frame data specifically includes step S2122.
S2122: creating a frame buffer object for drawing the video frame data obtained after the first video frame data is rendered.
A frame buffer object, also called an FBO, is used to receive the rendering result of the rendering tool or rendering platform without rendering the result directly onto the default frame buffer object. Compared with the default frame buffer object, when the rendering result needs to be displayed, the frame buffer object can show the rendering result on the display device directly, whereas the default frame buffer object draws the result directly on the display device and is slower than a newly created frame buffer object. The frame buffer object is created by the rendering tool and exists in the rendering tool; even if the rendering tool is carried in a system or application, the frame buffer object is completely controlled by the rendering tool. An FBO also includes regions that store color, depth and stencil data, i.e. two-dimensional pixel arrays that can be attached to the frame buffer object.
Specifically, if the rendering tool is OpenGL and OpenGL is carried in a system or application, then GL_EXT_framebuffer_object provides an interface for creating an additional, non-displayable frame buffer object. The frame buffer object is connected with OpenGL and completely controlled by OpenGL; by using the frame buffer object (FBO), OpenGL can output what would be displayed to the application's frame buffer object. If OpenGL is not carried by anything, the frame buffer object still exists in OpenGL and is likewise completely controlled by OpenGL; by using the frame buffer object (FBO), OpenGL can output the display to the application's frame buffer object.
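A minimal sketch using the core OpenGL ES 2.0 framebuffer calls; the texture attached here is assumed to be the one created later in S2123:

    #include <GLES2/gl2.h>

    /* Create an off-screen FBO and attach a color texture to receive the
     * rendered second video frame data. Returns 0 and the FBO id on success. */
    static int create_render_fbo(GLuint color_texture, GLuint *out_fbo)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color_texture, 0);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glDeleteFramebuffers(1, &fbo);
            return -1;
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* unbind until rendering time */
        *out_fbo = fbo;
        return 0;
    }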
Further, the step S212 of creating the objects for processing the first video frame data specifically includes steps S2123 to S2124.
S2123: creating a video image picture texture object;
A texture object is an internal data type that stores texture data, options and so on. A texture object encapsulates the various state settings associated with a texture map or video image picture. A texture object cannot be accessed directly, but it can be tracked through an integer ID used as its handle (handler). In order to allocate a unique ID, OpenGL provides the glGenTextures function to obtain a valid ID value: void glGenTextures(GLsizei n, GLuint *textures); textures is an array used to store the n allocated ID values. After glGenTextures has been called, the allocated IDs can be marked as 'in use'.
S2124: filling the first video frame data into the texture object.
As can be seen from the above, a texture object is empty when created, and data must be filled into it before it can be used for texture rendering. The texture object encapsulates the various state settings related to texture mapping, so the first video frame data is filled into the texture object so that the data of the first video frame is encapsulated by the texture object, i.e. the texture mapping relation associates the object with the first video frame data. Specifically, when the rendering tool is OpenGL, the texture object is created in OpenGL.
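A sketch of creating a texture object and filling it with one frame of image data using standard OpenGL ES 2.0 calls; the RGBA layout of the uploaded frame is an assumption (the patent's 4:4:4 YUV frame could equally be uploaded as a plain byte texture and interpreted in the fragment shader):

    #include <GLES2/gl2.h>

    /* Create a texture object and fill one frame of image data into it. */
    static GLuint upload_frame_texture(const void *frame_pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);                 /* obtain a valid texture ID */
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Sampling/wrapping state lives in the texture object. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        /* Fill the first video frame data into the texture object. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, frame_pixels);
        return tex;
    }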
Further, after the step S300 of obtaining the second video frame data from the GPU and encoding it to obtain the target special effect video, where the second video frame data is the result of the GPU rendering the first video frame data, the method specifically includes step S310.
S310: showing the target special effect video.
The first video frame data becomes the second video frame data after effect rendering. In order that the user or developer can intuitively watch the effect-rendered video frames, or so that the second video frame data can be called by other programs, the second video frame data is encoded and compressed to obtain the target special effect video. Encoding and compression may convert the format of the second video frame data; compression reduces the video data rate while preserving visual quality as far as possible, and includes lossy and lossless compression, intra-frame and inter-frame compression, and symmetric and asymmetric coding. Lossless compression means the data before compression and after decompression are completely identical, and most lossless compression uses RLE run-length encoding algorithms; lossy compression means the data after decompression differs from the data before compression, some image or audio information to which the human eye or ear is insensitive being lost during compression. Intra-frame compression considers only the data of one frame without considering the redundancy between adjacent frames, which is actually similar to still-image compression; intra-frame compression generally uses lossy algorithms, and since there is no correlation between frames during compression, the compressed video can still be edited frame by frame. Inter-frame compression is based on the fact that consecutive frames of many videos or animations are strongly correlated, i.e. the information of adjacent frames changes little, so there is redundancy between consecutive frames; compressing adjacent frames can further increase the compression ratio. Inter-frame compression is also called temporal compression; it compresses by comparing the data between different frames along the time axis and is generally lossless. The frame differencing algorithm is a typical temporal compression method: it compares the differences between a frame and its adjacent frames and records only the difference between this frame and its neighbours. Symmetry (symmetric) is a key feature of compression coding: symmetric coding means compression and decompression occupy the same processing power and time, whereas asymmetric coding means compression requires a large amount of processing power and time while decompression can be played back well in real time, i.e. compression and decompression run at different speeds. The target video frames obtained after the foregoing processing can then be shown on a display device. Specifically, the second video frame data is obtained after the first video frame data is rendered with effects through OpenGL; the second video frame data is then encoded and compressed to obtain the target special effect video, which can be shown on a display device such as a computer display screen, a television or a mobile phone.
Further, the step S100 of decoding the acquired video source in response to a user instruction to obtain the first video frame data in the specific format specifically includes step S110.
S110: converting the format of the decoded video source to obtain the first video frame data in the specific format.
Because the GPU can only recognize data of specified formats, when the data of the video source is not in a data format that the GPU can recognize, the video source data needs to be converted into a data format that the GPU can recognize, so that when effects are subsequently rendered on the video source data, the GPU can compute on the video source data according to the instructions of the rendering tool and thereby obtain the rendered video frame data, i.e. the second video frame data.
Specifically, the data obtained by decoding a video source is generally a data sequence in YUV (4:2:0) planar format, but the GPU cannot recognize data in YUV (4:2:0) format. Therefore, before the image data is input to the GPU, its format must first be converted into a YUV (4:4:4) data arrangement before it can be input to the GPU. After OpenGL sends the render instruction to the GPU, the GPU can process the aforementioned first video frame data according to the functions called by OpenGL and thereby obtain the second video frame data.
Further, the format of the decoded video source is a 4:2:0 YUV planar format data sequence, and the specific format of the first video frame data is a 4:4:4 YUV planar format data sequence.
As stated above, in one embodiment the format of the decoded video source is a 4:2:0 YUV planar format data sequence and the specific format of the first video frame data is a 4:4:4 YUV planar format data sequence, so that the GPU can process the first video frame data according to the functions called by OpenGL and obtain the second video frame data.
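As an illustration of the format conversion only, a small C sketch using nearest-neighbour chroma upsampling; the planar layouts and the choice of upsampling filter are assumptions not fixed by the patent:

    #include <stdint.h>

    /* Upsample planar YUV 4:2:0 (full-resolution Y, quarter-resolution U and V)
     * to planar YUV 4:4:4 by repeating each chroma sample over a 2x2 block.
     * Assumes even width and height for simplicity. */
    static void yuv420p_to_yuv444p(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                                   uint8_t *y4, uint8_t *u4, uint8_t *v4,
                                   int width, int height)
    {
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int dst = row * width + col;
                int src_c = (row / 2) * (width / 2) + (col / 2);
                y4[dst] = y[dst];
                u4[dst] = u[src_c];
                v4[dst] = v[src_c];
            }
        }
    }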
Further, the step S200 of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU renders the first video frame data specifically includes steps S220 to S240.
S220: initializing the first video frame data and saving the initial environment state of the first platform.
Initializing the first video frame data mainly means setting the first video frame data to default values, so that the first video frame data is prepared for being rendered. Saving the initial environment state of the first platform means saving the aforementioned preparations made by OpenGL before rendering, or the preparations made before rendering by the system or application that carries OpenGL, such as the objects created and/or obtained as described above; of course, OpenGL here may also be any of the other aforementioned tools used for rendering.
S230: binding the rendering program with the functions in the first platform, opening the attributes of the rendering program, passing the special effect parameters into the rendering program, and performing the rendering operations on the first video frame.
The rendering program is bound with the functions in the first platform so that, during rendering, the rendering program can call the functions in the first platform to process the first video frame data, and the GPU can also compute on the first video frame data according to the functions in the first platform to obtain the second video frame data. In order that the rendering result has the preset effect, the attributes of the rendering program are opened, which can add or remove rendering steps or data-processing steps. When the rendering program contains the preset effect parameters, the rendering program takes the effect parameters as its basis: only when the effect parameters have been assigned into the functions of the first platform can the GPU compute the first video frame data into the second video frame data, i.e. perform the rendering operations on the first video frame.
Specifically, the rendering program is bound with the functions in OpenGL, where OpenGL may be carried in a system or application or be a standalone OpenGL rendering tool, and the rendering program may be one of the aforementioned objects or be newly created and connected with the aforementioned objects. Opening the attributes of the rendering program can add or remove rendering steps or data-processing steps. The effect parameters are passed into the rendering program; the rendering program takes the effect parameters as its basis, and only when the effect parameters have been assigned into the OpenGL functions can the GPU compute the first video frame data into the second video frame data, i.e. perform the rendering operations on the first video frame. For example, to perform the rendering operations: call glUseProgram to bind the rendering program, call glBindBuffer to bind the vertex buffer object, call glEnableVertexAttribArray to enable the vertex array attributes, call glVertexAttribPointer to pass in the vertex data and rendering color data, and call glDrawArrays to execute the rendering operation.
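A sketch of that draw-call sequence with the OpenGL ES 2.0 C API; the attribute location, the effect-strength uniform and the vertex layout are illustrative assumptions:

    #include <GLES2/gl2.h>

    /* Issue one effect rendering pass: bind the program and vertex buffer,
     * pass in the effect parameters, and draw a full-screen quad. */
    static void render_effect_pass(GLuint program, GLuint vertex_buffer,
                                   GLint a_position, GLint u_effect_strength,
                                   float effect_strength)
    {
        glUseProgram(program);                                    /* bind rendering program */
        glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);             /* bind vertex buffer object */
        glEnableVertexAttribArray(a_position);                    /* enable vertex array attribute */
        glVertexAttribPointer(a_position, 2, GL_FLOAT, GL_FALSE,
                              2 * sizeof(float), (const void *)0); /* pass in vertex data */

        glUniform1f(u_effect_strength, effect_strength);          /* pass in an effect parameter */

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                    /* execute the rendering operation */

        glDisableVertexAttribArray(a_position);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }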
S240: restoring the environment state of the first platform to the initial environment state.
Restoring the environment state of the first platform to the initial environment avoids having to initialize every environment state of the first platform after each frame, or each continuous group of frames, of the same video source/effect has been rendered; this saves rendering steps and time and reduces memory occupation. Specifically, as described above, after each frame or continuous group of frames has had its effects rendered, the OpenGL environment state is restored to the state obtained after the aforementioned initialization.
Further, the step S220 of initializing the first video frame data and saving the initial environment state of the first platform specifically includes steps S221 and S222.
S221: loading the Shader programs that obtain the vertex data of the first video frame and compiling the Shader programs.
As stated above, shaders include vertex shaders and pixel shaders, the latter more specifically being fragment shaders. The most basic task of a vertex shader is to receive the coordinates of points in three-dimensional space, process them into coordinates in two-dimensional space and output them; the most basic task of a fragment shader is to output a color value for each pixel on the screen that needs to be processed. A vertex shader receives attribute variables and uniform variables: attribute variables store data about the point itself, the most important of which is of course the point's position, while the data stored in uniform variables only helps the shader complete its task; the shader only needs the uniform variables and does not modify them. The vertex shader needs to output varying variables to the fragment shader. The task of the fragment shader is to provide the color of each pixel on the screen; the fragment shader receives varying variables, which are the output of the vertex shader. The processing unit of the fragment shader is the pixel, and the technique of converting vertices into pixels is called "primitive rasterization". The loaded vertex shader and fragment shader are compiled and linked, so that the vertex shader and fragment shader can be called during rendering to process the first video frame data.
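For illustration, a minimal pass-through vertex shader and a fragment shader that applies a simple brightness effect, written as GLSL ES 1.00 source strings in C; the effect itself and the variable names are assumptions, since the patent does not specify a particular shader:

    /* Vertex shader: receives attribute/uniform variables and hands a varying
     * texture coordinate to the fragment shader. */
    static const char *k_vertex_src =
        "attribute vec2 a_position;                        \n"
        "attribute vec2 a_texcoord;                        \n"
        "varying   vec2 v_texcoord;                        \n"
        "void main() {                                     \n"
        "    v_texcoord = a_texcoord;                      \n"
        "    gl_Position = vec4(a_position, 0.0, 1.0);     \n"
        "}                                                 \n";

    /* Fragment shader: outputs one color per pixel, here a brightness tweak
     * controlled by a uniform effect parameter. */
    static const char *k_fragment_src =
        "precision mediump float;                          \n"
        "varying vec2 v_texcoord;                          \n"
        "uniform sampler2D u_frame;                        \n"
        "uniform float u_effect_strength;                  \n"
        "void main() {                                     \n"
        "    vec4 c = texture2D(u_frame, v_texcoord);      \n"
        "    gl_FragColor = vec4(c.rgb * u_effect_strength, c.a); \n"
        "}                                                 \n";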
S222: creating, through a function, the program used for rendering.
As stated above, the rendering program must be created before rendering so that the rendering steps can follow the rendering program, and the rendering program is built up with functions. Specifically, in OpenGL the function that creates the rendering program is glCreateProgram.
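A sketch of creating and linking that program from the two compiled shader stages (using compile_shader from the earlier sketch); error handling is abbreviated:

    #include <GLES2/gl2.h>

    /* Create the rendering program and link the compiled shader stages into it. */
    static GLuint create_render_program(GLuint vertex_shader, GLuint fragment_shader)
    {
        GLuint program = glCreateProgram();       /* the function that creates the program */
        glAttachShader(program, vertex_shader);
        glAttachShader(program, fragment_shader);
        glLinkProgram(program);

        GLint linked = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (!linked) {
            glDeleteProgram(program);
            return 0;
        }
        return program;
    }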
Further, the step S220 of initializing the first video frame data and saving the initial environment state of the first platform specifically includes step S223.
S223: creating an object for executing the saving of the initial environment state of the first platform.
As noted above, in order to reduce the workload and save rendering steps and time, the initial environment state of the first platform needs to be saved, i.e. the preparations made by OpenGL before rendering, or the preparations made for rendering by the system or application carrying OpenGL, are saved before rendering. What executes the saving of the initial environment state before OpenGL renders is an object, and this object contains the program that executes the saving of OpenGL's initial environment state.
Further, the step S220 of initializing the first video frame data and saving the initial environment state of the first platform specifically includes steps S224 to S226.
S224: binding the frame buffer object, and the texture object attached to the frame buffer object, into the context in which the first platform performs rendering.
As noted above, a frame buffer object is created; the frame buffer object is used to receive the rendering result of the rendering tool or of the first platform without rendering the result directly onto the default frame buffer object, and when the rendering result needs to be displayed, the frame buffer object can show the rendering result on the display device directly. The frame buffer object is bound into the context of the first platform so that the subsequent rendering operations of the rendering tool can be performed on the created frame buffer object, and the texture object is bound into the context environment of the first platform so that rendering operations can be performed on the texture object, where the texture object is attached to the frame buffer object.
Specifically, as noted above, the frame buffer object is bound into the OpenGL context so that the subsequent OpenGL rendering operations are all performed on the custom frame buffer object, i.e. OpenGL draws the rendering result on the frame buffer object, and the texture object is bound into OpenGL's context environment so that OpenGL can then perform rendering operations on the texture object. After the texture object has been processed by the shaders, it is rendered into the frame buffer; the frame buffer is obtained through the frame buffer object, and the frame buffer shows the rendered texture object on the display device.
S225:By the incoming rendering program of the special effect parameters.
Institute is described previously, in the incoming rendering program of the parameter of special efficacy, and GPU could be according to rendering program to described the One video requency frame data carries out computing, so as to obtain the second video requency frame data.
S226:Call the drawing function in first platform to carry out special efficacy to first frame of video to render.
As mentioned above, rendering is the process in which the GPU performs operations on the first video frame according to the drawing functions of the first platform, the special effect parameters and so on, to obtain the second video frame data. Therefore, the drawing function of the first platform must be called during rendering. Specifically, OpenGL calls the drawing function in the linked library and assigns the special effect parameters in the function, and the GPU performs operations on the first video frame data according to the rendering program and the drawing function.
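A minimal sketch of such a drawing call, assuming OpenGL ES 2.0 and a full-screen quad whose vertex data interleave positions and texture coordinates (the attribute names follow the shader sketch given earlier and are assumptions of this illustration):

    #include <GLES2/gl2.h>

    /* Full-screen quad covering the video frame; each vertex carries a position
     * (x, y) followed by a texture coordinate (s, t). */
    static const GLfloat kQuad[] = {
        /*  x,     y,    s,    t  */
        -1.0f, -1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f, 1.0f,
         1.0f,  1.0f, 1.0f, 1.0f,
    };

    /* Hand the vertex data to the rendering program and issue the drawing call. */
    static void draw_frame(GLuint program) {
        GLuint a_pos = (GLuint)glGetAttribLocation(program, "a_position");
        GLuint a_tex = (GLuint)glGetAttribLocation(program, "a_texCoord");

        glVertexAttribPointer(a_pos, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), kQuad);
        glVertexAttribPointer(a_tex, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), kQuad + 2);
        glEnableVertexAttribArray(a_pos);
        glEnableVertexAttribArray(a_tex);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); /* the GPU rasterizes and shades */
    }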
Preferably, the special effect parameters include the vertex data and texture data of the special effect.
As noted above, the necessary special effect parameters mainly include vertex data and texture data. Specifically, for a triangle, the vertex data are the coordinates of the triangle's vertices; the texture data can be a picture, a line, a point, and so on.
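Purely by way of example (the values below are illustrative, not taken from the patent), the vertex data for such a triangle and a tiny piece of texture data might be laid out as follows:

    /* Illustrative special effect parameters for a single triangle: three vertex
     * positions and, as texture data, a tiny 2x2 RGBA picture. */
    static const float kTriangleVertices[] = {
         0.0f,  0.5f,   /* top vertex   */
        -0.5f, -0.5f,   /* bottom left  */
         0.5f, -0.5f,   /* bottom right */
    };

    static const unsigned char kTexturePixels[2 * 2 * 4] = {
        255,   0,   0, 255,     0, 255,   0, 255,   /* red,  green  */
          0,   0, 255, 255,   255, 255,   0, 255,   /* blue, yellow */
    };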
Further, the step S226 of calling the drawing function of the first platform to perform special-effect rendering on the first video frame specifically includes steps S2261 to S2263.
S2261: Initialize the texture object and the environment in which the first platform performs texture rendering, and create for the texture object the attributes used for texture rendering, where the texture rendering environment is realized through the eglGetCurrentContext function.
Initializing the texture object means resetting the texture object that already holds data to its default values; the eglGetCurrentContext function is called to obtain the current EGL context, and glGenTextures is called to generate an OpenGL texture id. The specific id is as described above: it is the common default rendering object, and the new rendering object carries no additional value.
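A short sketch of this initialization, assuming EGL and OpenGL ES 2.0 (the helper name init_texture_object is illustrative):

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    /* Query the current EGL context and generate one OpenGL texture id for the
     * video picture, in the spirit of step S2261. */
    static GLuint init_texture_object(void) {
        EGLContext ctx = eglGetCurrentContext();  /* current texture-rendering environment */
        if (ctx == EGL_NO_CONTEXT) {
            return 0;  /* no context current on this thread: nothing to render into */
        }

        GLuint tex = 0;
        glGenTextures(1, &tex);                   /* new texture id, no storage yet */
        glBindTexture(GL_TEXTURE_2D, tex);
        return tex;
    }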
S2262: According to the texture rendering attributes, fill the texture data into the attributes and set the texture object parameters.
According to the texture rendering attributes described above, OpenGL calls the glTexImage2D function to fill the rendered picture data into the OpenGL texture object and sets the texture object parameters; the rendering operations on the video frame are realized through the aforementioned objects and steps. The texture object parameters are described in detail later.
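A hedged sketch of step S2262 under an OpenGL ES 2.0 assumption, filling RGBA picture data into the texture object with glTexImage2D and then setting illustrative texture object parameters (filtering and wrapping):

    #include <GLES2/gl2.h>

    /* Fill the picture data into the bound texture object with glTexImage2D and
     * set the texture object parameters. */
    static void upload_texture(GLuint tex, int width, int height, const void *rgba_pixels) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }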
S2263: Create the rendering objects and perform special-effect rendering on the first video frame.
The rendering objects are created; a created rendering object represents a special effect to be rendered into the video frame, and each created rendering object represents one kind of special effect. OpenGL calls the PerformRender function to perform the special-effect picture rendering operation.
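PerformRender is referred to here as named in the original text. As an assumption-laden sketch only (reusing the draw_frame helper sketched earlier in this description), a routine in that spirit could bind the frame buffer object, select the texture and program, and issue the draw:

    #include <GLES2/gl2.h>

    /* Hedged sketch of a render routine in the spirit of the PerformRender call
     * mentioned above; draw_frame is the illustrative helper sketched earlier. */
    static void perform_render(GLuint program, GLuint fbo, GLuint source_tex) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);    /* render into the frame buffer object */
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, source_tex);  /* video picture texture */
        glUseProgram(program);
        glClear(GL_COLOR_BUFFER_BIT);
        draw_frame(program);                       /* issue the drawing call (see above) */
    }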
Preferably, the texture object parameters include any one or more of the following: vertex coordinates, texture coordinates, background color, and rotation angle.
As noted above, when the texture parameters are set, the main parameters include any one or more of the following: vertex coordinates, texture coordinates, background color, and rotation angle. The vertex coordinates are the coordinates of the vertices of the special-effect image and serve as geometric coordinates; when a texture-mapped scene is drawn, not only the geometric coordinates but also the texture coordinates must be defined for each vertex. After the various transformations, the geometric coordinates determine where the vertex is drawn on the screen, while the texture coordinates determine which texel in the texture image is assigned to that vertex. The texture image is a rectangular array, and texture coordinates are usually defined in one-, two-, three- or four-dimensional form, referred to as the s, t, r and q coordinates to distinguish them from the object coordinates (x, y, z, w) and other coordinates. One-dimensional textures are usually expressed with the s coordinate and two-dimensional textures with the (s, t) coordinates; the r coordinate is currently ignored, and the q coordinate, like w, typically has the value 1 and is mainly used to establish homogeneous coordinates. The OpenGL function for defining texture coordinates is: void glTexCoord{1234}{sifd}[v](TYPE coords); it sets the current texture coordinate, and every vertex subsequently produced by calls to glVertex*() is assigned the current texture coordinate. For glTexCoord1*(), the s coordinate is set to the given value while t and r are set to 0 and q to 1; glTexCoord2*() sets the s and t values, with r set to 0 and q to 1; for glTexCoord3*(), q is set to 1 and the other coordinates are set to the given values; and with glTexCoord4*() all four coordinates can be given. The appropriate suffix (s, i, f, d) and the corresponding TYPE value (GLshort, GLint, GLfloat or GLdouble) indicate the type of the coordinates. In some situations (such as environment mapping), texture coordinates need to be generated automatically to obtain the special effect, without using glTexCoord*() to assign a texture coordinate value to every object vertex; OpenGL provides a function that generates texture coordinates automatically: void glTexGen{if}[v](GLenum coord, GLenum pname, TYPE param). Its first parameter must be GL_S, GL_T, GL_R or GL_Q and indicates which of the texture coordinates s, t, r, q is to be generated automatically; the second parameter takes the value GL_TEXTURE_GEN_MODE, GL_OBJECT_PLANE or GL_EYE_PLANE; the third parameter param is a pointer defining the texture generation parameters, and its value depends on the setting of the second parameter pname. When pname is GL_TEXTURE_GEN_MODE, param is a constant, namely GL_OBJECT_LINEAR, GL_EYE_LINEAR or GL_SPHERE_MAP, which determines the function used to generate the texture coordinates. The background color is the background color of the texture; for the aforementioned triangle, the background color is the fill color of the triangle. The rotation angle is the angle by which the triangle is rotated within the video frame when it is rendered into the frame, and the rotation angle may be any angle in any direction.
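For illustration of the fixed-function glTexCoord* usage described above (a sketch assuming desktop OpenGL 1.x rather than OpenGL ES), each vertex of a triangle is given a texture coordinate immediately before its geometric coordinate:

    #include <GL/gl.h>

    /* Desktop (fixed-function) OpenGL sketch: each vertex receives both a
     * texture coordinate (s, t) and a geometric coordinate before the triangle
     * is drawn. */
    static void draw_textured_triangle(void) {
        glBegin(GL_TRIANGLES);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glTexCoord2f(0.5f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }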
Further, the step S240 of restoring the environment state of the first platform to the initial environment state specifically includes step S241.
S241: Unbind the rendering program from the first platform.
As noted above, after rendering is completed, in order to be able to perform special-effect rendering on other frames under the initial environment conditions, the program previously bound to the first platform needs to be released. Specifically, in OpenGL this means restoring the initial environment state of OpenGL, for example unbinding the frame buffer object, the texture object and so on from the OpenGL context environment; OpenGL can restore one or more of the previously bound characteristic attributes at a time.
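A minimal sketch of such a restoration under an OpenGL ES 2.0 assumption, unbinding the rendering program, the frame buffer object and the texture object (rebinding the previously saved ids, as in the save sketch earlier, would be an equivalent approach):

    #include <GLES2/gl2.h>

    /* Return OpenGL to its default bindings once special-effect rendering is done. */
    static void restore_initial_state(void) {
        glUseProgram(0);                       /* unbind the rendering program     */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default frame buffer */
        glBindTexture(GL_TEXTURE_2D, 0);       /* unbind the texture object        */
    }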
A video special effect rendering device, as shown in Fig. 2, includes a response unit 10, a rendering unit 20 and a coding unit 30.
The response unit 10 is configured to decode an acquired video source in response to a user instruction, so as to obtain first video frame data in a specific format;
The rendering unit 20 is configured to send the first video frame data and preset special effect parameters to the GPU, so that the GPU completes the rendering of the first video frame data;
The coding unit 30 is configured to obtain second video frame data from the GPU and encode it to obtain a target special-effect video, where the second video frame data is obtained by the GPU rendering the first video frame data.
A video special effect rendering terminal includes a memory and a processor, the memory storing a computer program, characterized in that the steps of the above method are implemented when the computer program is executed by the processor.
Those skilled in the art will appreciate that the present invention covers apparatus involved in performing one or more of the operations described herein. Such apparatus may be specially designed and manufactured for the required purposes, or may comprise known devices in a general-purpose computer. Such apparatus has computer programs stored therein that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (for example, computer-readable) medium or in any type of medium suitable for storing electronic instructions and coupled to a bus, the computer-readable medium including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer, mobile phone or television).
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions may be supplied to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing method, so that the solutions specified in a block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or other programmable data processing method.
Those skilled in the art will appreciate that the various operations, methods, and steps, measures and schemes in the flows discussed in the present invention may be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art corresponding to the various operations, methods and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted.
The foregoing presents only some embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (20)

1. A video special effect rendering method, characterized by comprising:
decoding an acquired video source in response to a user instruction, to obtain first video frame data in a specific format;
sending the first video frame data and preset special effect parameters to a GPU, so that the GPU completes the rendering of the first video frame data;
obtaining second video frame data from the GPU and encoding it to obtain a target special-effect video, wherein the second video frame data is obtained by the GPU rendering the first video frame data.
2. The video special effect rendering method according to claim 1, characterized in that the step of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU completes the rendering of the first video frame data specifically includes:
initializing a rendering environment of the video special effect and sending a render instruction to the GPU, so that the GPU completes the rendering of the first video frame data in response to the render instruction.
3. The video special effect rendering method according to claim 2, characterized in that the step of initializing the rendering environment of the video special effect and sending the render instruction to the GPU so that the GPU completes the rendering of the first video frame data in response to the render instruction specifically includes:
obtaining the context environment of a first platform that performs special-effect rendering on the target video and an object for displaying the second video frame on a device, wherein the context environment of the first platform is obtained through the eglGetCurrentContext function, and the object for displaying the second video frame on the device is obtained through the eglGetCurrentContext function;
creating objects for processing the first video frame data;
loading Shader programs for performing rendering operations, and compiling the Shader programs.
4. The video special effect rendering method according to claim 3, characterized in that the step of creating objects for processing the first video frame data specifically includes:
creating a Buffer object that caches the rendered picture data.
5. The video special effect rendering method according to claim 4, characterized in that the Buffer object includes a render buffer object and/or a read buffer object.
6. The video special effect rendering method according to claim 3, characterized in that the step of creating objects for processing the first video frame data specifically includes:
creating a frame buffer object for drawing the video frame data obtained after the first video frame data has been rendered.
7. The video special effect rendering method according to claim 3, characterized in that the step of creating objects for processing the first video frame data specifically includes:
creating a texture object for the video image picture;
filling the first video frame data into the texture object.
8. The video special effect rendering method according to claim 1, characterized in that after the step of obtaining the second video frame data from the GPU and encoding it to obtain the target special-effect video, wherein the second video frame data is obtained by the GPU rendering the first video frame data, the method specifically includes:
displaying the target special-effect video.
9. The video special effect rendering method according to claim 1, characterized in that the step of decoding the acquired video source in response to a user instruction to obtain the first video frame data in the specific format specifically includes:
converting the format of the decoded video source to obtain the first video frame data in the specific format.
10. The video special effect rendering method according to claim 9, characterized in that the format of the decoded video source is a 4:2:0 YUV planar format data sequence, and the specific format of the first video frame data is a 4:4:4 YUV planar format data sequence.
11. The video special effect rendering method according to claim 1, characterized in that the step of sending the first video frame data and the preset special effect parameters to the GPU so that the GPU completes the rendering of the first video frame data specifically includes:
initializing the first video frame data and saving the initial environment state of a first platform;
binding the rendering program with a function in the first platform, enabling the attributes of the rendering program, passing the special effect parameters into the rendering program, and performing rendering operations on the first video frame;
restoring the environment state of the first platform to the initial environment state.
12. The video special effect rendering method according to claim 11, characterized in that the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
loading the Shader programs for obtaining the vertex data of the first video frame, and compiling the Shader programs;
creating the program used for rendering by means of a function.
13. The video special effect rendering method according to claim 11, characterized in that the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
creating an object for saving the initial environment state of the first platform.
14. The video special effect rendering method according to claim 11, characterized in that the step of initializing the first video frame data and saving the initial environment state of the first platform specifically includes:
binding the frame buffer object, together with the texture object mounted on the frame buffer object, into the context in which the first platform performs rendering;
passing the special effect parameters into the rendering program;
calling the drawing function of the first platform to perform special-effect rendering on the first video frame.
15. The video special effect rendering method according to claim 14, characterized in that the special effect parameters include the vertex data and texture data of the special effect.
16. The video special effect rendering method according to claim 14, characterized in that the step of calling the drawing function of the first platform to perform special-effect rendering on the first video frame specifically includes:
initializing the texture object and the environment in which the first platform performs texture rendering, and creating for the texture object the attributes used for texture rendering, wherein the texture rendering environment is realized through the eglGetCurrentContext function;
filling the texture data into the attributes according to the texture rendering attributes, and setting the texture object parameters;
creating rendering objects and performing special-effect rendering on the first video frame.
17. The video special effect rendering method according to claim 16, characterized in that the texture object parameters include any one or more of the following: vertex coordinates, texture coordinates, background color, and rotation angle.
18. The video special effect rendering method according to claim 11, characterized in that the step of restoring the environment state of the first platform to the initial environment state specifically includes:
unbinding the rendering program from the first platform.
19. A video special effect rendering device, characterized by comprising a response unit, a rendering unit and a coding unit, wherein:
the response unit is configured to decode an acquired video source in response to a user instruction, to obtain first video frame data in a specific format;
the rendering unit is configured to send the first video frame data and preset special effect parameters to a GPU, so that the GPU completes the rendering of the first video frame data;
the coding unit is configured to obtain second video frame data from the GPU and encode it to obtain a target special-effect video, wherein the second video frame data is obtained by the GPU rendering the first video frame data.
20. A video special effect rendering terminal, characterized by comprising a memory and a processor, the memory storing a computer program, wherein the steps of the video special effect rendering method according to any one of claims 1-18 are implemented when the computer program is executed by the processor.
CN201710600569.5A 2017-07-21 2017-07-21 Special video effect rendering intent, device and terminal Pending CN107277616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710600569.5A CN107277616A (en) 2017-07-21 2017-07-21 Special video effect rendering intent, device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710600569.5A CN107277616A (en) 2017-07-21 2017-07-21 Special video effect rendering intent, device and terminal

Publications (1)

Publication Number Publication Date
CN107277616A true CN107277616A (en) 2017-10-20

Family

ID=60079400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710600569.5A Pending CN107277616A (en) 2017-07-21 2017-07-21 Special video effect rendering intent, device and terminal

Country Status (1)

Country Link
CN (1) CN107277616A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236485A (en) * 2008-01-28 2008-08-06 国电信息中心 Multi-screen 3-D in-phase display process, device and system
CN103402100A (en) * 2013-08-23 2013-11-20 北京奇艺世纪科技有限公司 Video processing method and mobile terminal
CN104091607A (en) * 2014-06-13 2014-10-08 北京奇艺世纪科技有限公司 Video editing method and device based on IOS equipment
CN104036534A (en) * 2014-06-27 2014-09-10 成都品果科技有限公司 Real-time camera special effect rendering method based on WP8 platform
CN104580837A (en) * 2015-01-20 2015-04-29 南京纳加软件有限公司 Video director engine based on GPU+CPU+IO architecture and using method thereof
CN106127673A (en) * 2016-07-19 2016-11-16 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, device and computer equipment
CN106210883A (en) * 2016-08-11 2016-12-07 浙江大华技术股份有限公司 A kind of method of Video Rendering, equipment

Cited By (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993185A (en) * 2017-11-28 2018-05-04 北京潘达互娱科技有限公司 Data processing method and device
CN107948735A (en) * 2017-12-06 2018-04-20 北京金山安全软件有限公司 Video playing method and device and electronic equipment
CN107948735B (en) * 2017-12-06 2020-09-25 北京乐我无限科技有限责任公司 Video playing method and device and electronic equipment
US11706483B2 (en) 2017-12-06 2023-07-18 Joyme Pte. Ltd. Video playing method and apparatus, and electronic device
CN108156520B (en) * 2017-12-29 2020-08-25 珠海市君天电子科技有限公司 Video playing method and device, electronic equipment and storage medium
CN108156520A (en) * 2017-12-29 2018-06-12 珠海市君天电子科技有限公司 Video broadcasting method, device, electronic equipment and storage medium
CN108235120A (en) * 2018-03-23 2018-06-29 北京潘达互娱科技有限公司 Live video stream method for pushing, device and electronic equipment
TWI663875B (en) * 2018-06-21 2019-06-21 威盛電子股份有限公司 Video processing method and device thereof
CN108924645A (en) * 2018-06-25 2018-11-30 北京金山安全软件有限公司 Theme generation method and device and electronic equipment
CN110662090A (en) * 2018-06-29 2020-01-07 腾讯科技(深圳)有限公司 Video processing method and system
CN110662102B (en) * 2018-06-29 2021-11-09 武汉斗鱼网络科技有限公司 Filter gradual change effect display method, storage medium, equipment and system
CN110662090B (en) * 2018-06-29 2022-11-18 腾讯科技(深圳)有限公司 Video processing method and system
CN110662102A (en) * 2018-06-29 2020-01-07 武汉斗鱼网络科技有限公司 Filter gradual change effect display method, storage medium, equipment and system
CN108900897A (en) * 2018-07-09 2018-11-27 腾讯科技(深圳)有限公司 A kind of multimedia data processing method, device and relevant device
CN109302637B (en) * 2018-11-05 2023-02-17 腾讯科技(成都)有限公司 Image processing method, image processing device and electronic equipment
CN109302637A (en) * 2018-11-05 2019-02-01 腾讯科技(成都)有限公司 Image processing method, image processing apparatus and electronic equipment
CN109327698A (en) * 2018-11-09 2019-02-12 杭州网易云音乐科技有限公司 Dynamic previewing map generalization method, system, medium and electronic equipment
CN109327698B (en) * 2018-11-09 2020-09-15 杭州网易云音乐科技有限公司 Method, system, medium and electronic device for generating dynamic preview chart
CN111221596A (en) * 2018-11-23 2020-06-02 北京方正手迹数字技术有限公司 Font rendering method and device and computer readable storage medium
CN111221596B (en) * 2018-11-23 2024-04-09 北京方正手迹数字技术有限公司 Font rendering method, apparatus and computer readable storage medium
CN111343499A (en) * 2018-12-18 2020-06-26 北京奇虎科技有限公司 Video synthesis method and device
CN109672931B (en) * 2018-12-20 2020-03-20 北京百度网讯科技有限公司 Method and apparatus for processing video frames
US11195248B2 (en) 2018-12-20 2021-12-07 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for processing pixel data of a video frame
CN109672931A (en) * 2018-12-20 2019-04-23 北京百度网讯科技有限公司 Method and apparatus for handling video frame
CN111355960A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Method and device for synthesizing video file, mobile terminal and storage medium
CN111355997A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Video file generation method and device, mobile terminal and storage medium
CN111355960B (en) * 2018-12-21 2021-05-04 北京字节跳动网络技术有限公司 Method and device for synthesizing video file, mobile terminal and storage medium
CN111355978A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Video file processing method and device, mobile terminal and storage medium
CN109658325B (en) * 2018-12-24 2022-08-16 成都四方伟业软件股份有限公司 Three-dimensional animation rendering method and device
CN109658325A (en) * 2018-12-24 2019-04-19 成都四方伟业软件股份有限公司 A kind of three-dimensional animation rendering method and device
CN109688346B (en) * 2018-12-28 2021-04-27 广州方硅信息技术有限公司 Method, device and equipment for rendering trailing special effect and storage medium
CN109688346A (en) * 2018-12-28 2019-04-26 广州华多网络科技有限公司 A kind of hangover special efficacy rendering method, device, equipment and storage medium
CN109918604A (en) * 2019-03-07 2019-06-21 智慧芽信息科技(苏州)有限公司 Page drawing method, apparatus, equipment and storage medium
CN109920041A (en) * 2019-03-22 2019-06-21 深圳市脸萌科技有限公司 Video effect generation method, device and electronic equipment
CN110035321A (en) * 2019-04-11 2019-07-19 北京大生在线科技有限公司 A kind of trailing and system of online real-time video
CN110049371A (en) * 2019-05-14 2019-07-23 北京比特星光科技有限公司 Video Composition, broadcasting and amending method, image synthesizing system and equipment
WO2021031850A1 (en) * 2019-08-19 2021-02-25 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device and storage medium
CN112419456B (en) * 2019-08-23 2024-04-16 腾讯科技(深圳)有限公司 Special effect picture generation method and device
CN112419456A (en) * 2019-08-23 2021-02-26 腾讯科技(深圳)有限公司 Special effect picture generation method and device
CN110930480A (en) * 2019-11-30 2020-03-27 航天科技控股集团股份有限公司 Direct rendering method for starting animation video of liquid crystal instrument
CN110996170A (en) * 2019-12-10 2020-04-10 Oppo广东移动通信有限公司 Video file playing method and related equipment
CN110996170B (en) * 2019-12-10 2022-02-15 Oppo广东移动通信有限公司 Video file playing method and related equipment
CN111080728A (en) * 2019-12-19 2020-04-28 上海米哈游天命科技有限公司 Map processing method, device, equipment and storage medium
CN111050179A (en) * 2019-12-30 2020-04-21 北京奇艺世纪科技有限公司 Video transcoding method and device
CN111050179B (en) * 2019-12-30 2022-04-22 北京奇艺世纪科技有限公司 Video transcoding method and device
CN111210381A (en) * 2019-12-31 2020-05-29 广州市百果园信息技术有限公司 Data processing method and device, terminal equipment and computer readable medium
CN111292387B (en) * 2020-01-16 2023-08-29 广州小鹏汽车科技有限公司 Dynamic picture loading method and device, storage medium and terminal equipment
CN111292387A (en) * 2020-01-16 2020-06-16 广州小鹏汽车科技有限公司 Dynamic picture loading method and device, storage medium and terminal equipment
CN111614906A (en) * 2020-05-29 2020-09-01 北京百度网讯科技有限公司 Image preprocessing method and device, electronic equipment and storage medium
US11593908B2 (en) 2020-05-29 2023-02-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method for preprocessing image in augmented reality and related electronic device
CN111614906B (en) * 2020-05-29 2022-02-22 阿波罗智联(北京)科技有限公司 Image preprocessing method and device, electronic equipment and storage medium
CN111773691A (en) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 Cloud game service system, cloud client and data processing method
CN113918442A (en) * 2020-07-10 2022-01-11 北京字节跳动网络技术有限公司 Image special effect parameter processing method, equipment and storage medium
CN112184854A (en) * 2020-09-04 2021-01-05 上海硬通网络科技有限公司 Animation synthesis method and device and electronic equipment
CN112218117A (en) * 2020-09-29 2021-01-12 北京字跳网络技术有限公司 Video processing method and device
CN112184856B (en) * 2020-09-30 2023-09-22 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112738624A (en) * 2020-12-23 2021-04-30 北京达佳互联信息技术有限公司 Method and device for special effect rendering of video
CN112738624B (en) * 2020-12-23 2022-10-25 北京达佳互联信息技术有限公司 Method and device for special effect rendering of video
CN112668474B (en) * 2020-12-28 2023-10-13 北京字节跳动网络技术有限公司 Plane generation method and device, storage medium and electronic equipment
CN112668474A (en) * 2020-12-28 2021-04-16 北京字节跳动网络技术有限公司 Plane generation method and device, storage medium and electronic equipment
WO2022160744A1 (en) * 2021-01-29 2022-08-04 稿定(厦门)科技有限公司 Gpu-based video synthesis system and method
CN112954233A (en) * 2021-01-29 2021-06-11 稿定(厦门)科技有限公司 Video synthesis system and method based on GPU
CN114845162B (en) * 2021-02-01 2024-04-02 北京字节跳动网络技术有限公司 Video playing method and device, electronic equipment and storage medium
CN114845162A (en) * 2021-02-01 2022-08-02 北京字节跳动网络技术有限公司 Video playing method and device, electronic equipment and storage medium
CN113038221B (en) * 2021-03-02 2023-02-28 Vidaa(荷兰)国际控股有限公司 Double-channel video playing method and display equipment
CN113038221A (en) * 2021-03-02 2021-06-25 海信电子科技(武汉)有限公司 Double-channel video playing method and display equipment
CN113382178A (en) * 2021-08-12 2021-09-10 江苏三步科技股份有限公司 Multi-channel video synthesis method, engine, device and readable storage medium
CN113473181A (en) * 2021-09-03 2021-10-01 北京市商汤科技开发有限公司 Video processing method and device, computer readable storage medium and computer equipment
WO2023035973A1 (en) * 2021-09-10 2023-03-16 北京字跳网络技术有限公司 Video processing method and apparatus, device, and medium
CN113946373A (en) * 2021-10-11 2022-01-18 成都中科合迅科技有限公司 Virtual reality multi-video-stream rendering method based on load balancing
CN113946373B (en) * 2021-10-11 2023-06-09 成都中科合迅科技有限公司 Virtual reality multiple video stream rendering method based on load balancing
WO2023098611A1 (en) * 2021-11-30 2023-06-08 北京字节跳动网络技术有限公司 Special effect display method and apparatus, and device and storage medium
CN114513614A (en) * 2022-02-14 2022-05-17 河南大学 Device and method for special effect rendering of video
CN114760526A (en) * 2022-03-31 2022-07-15 北京百度网讯科技有限公司 Video rendering method and device, electronic equipment and storage medium
CN115311758B (en) * 2022-06-29 2023-12-15 惠州市德赛西威汽车电子股份有限公司 Method, system and storage medium for recording DVR driving video built in Android vehicle-mounted platform
CN115311758A (en) * 2022-06-29 2022-11-08 惠州市德赛西威汽车电子股份有限公司 Method, system and storage medium for recording driving video of DVR built in Android vehicle-mounted platform
CN115361583A (en) * 2022-08-10 2022-11-18 吉林动画学院 Method for real-time rendering of video frame textures by aiming at APP and Unity
CN115361583B (en) * 2022-08-10 2024-05-17 吉林动画学院 Method for rendering video frame textures in real time aiming at APP and Unity
CN116708696A (en) * 2022-11-22 2023-09-05 荣耀终端有限公司 Video processing method and electronic equipment
CN116708696B (en) * 2022-11-22 2024-05-14 荣耀终端有限公司 Video processing method and electronic equipment
CN115809047A (en) * 2023-02-02 2023-03-17 麒麟软件有限公司 Wayland synthesizer

Similar Documents

Publication Publication Date Title
CN107277616A (en) Special video effect rendering intent, device and terminal
TWI264183B (en) Method for compressing data in a bit stream or bit pattern
CA2948903C (en) Method, system and apparatus for generation and playback of virtual reality multimedia
US7777750B1 (en) Texture arrays in a graphics library
Sen Silhouette maps for improved texture magnification
KR20210151114A (en) Hybrid rendering
US11164342B2 (en) Machine learning applied to textures compression or upscaling
KR20160049031A (en) Tessellation in tile-based rendering
NO328434B1 (en) Formatting language and object model for vector graphics
US20170200302A1 (en) Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image
WO2017105745A1 (en) Method and apparatus for color buffer compression
Noguera et al. Mobile volume rendering: past, present and future
US9679348B2 (en) Storage and compression methods for animated images
US20220036632A1 (en) Post-processing in a memory-system efficient manner
US10089964B2 (en) Graphics processor logic for encoding increasing or decreasing values
CN107767437B (en) Multilayer mixed asynchronous rendering method
CN117501312A (en) Method and device for graphic rendering
US9959590B2 (en) System and method of caching for pixel synchronization-based graphics techniques
CN108010113B (en) Deep learning model execution method based on pixel shader
US11978234B2 (en) Method and apparatus of data compression
Pulli New APIs for mobile graphics
CN115836317A (en) Incremental triple index compression
CN117065357A (en) Media data processing method, device, computer equipment and storage medium
KR102666871B1 (en) Method and apparatus for displaying massive 3d models for ar device
US7123268B2 (en) Hybrid procedural/pixel based textures

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210402

Address after: 510665 Room 202, 59 Jianzhong Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant after: Guangzhou lieyou Information Technology Co.,Ltd.

Address before: 510665 Room 202, west block, 59 Jianzhong Road, Tianhe Software Park, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU AIPAI NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20171020