CN103700385A - Media player, playing method, and video post-processing method in hardware acceleration mode - Google Patents

Media player, playing method, and video post-processing method in hardware acceleration mode

Info

Publication number
CN103700385A
CN103700385A (application CN201210369800.1A)
Authority
CN
China
Prior art keywords
subtitles
image
video
video stream
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210369800.1A
Other languages
Chinese (zh)
Inventor
王云刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen QVOD Technology Co Ltd
Original Assignee
Shenzhen QVOD Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen QVOD Technology Co Ltd filed Critical Shenzhen QVOD Technology Co Ltd
Priority to CN201210369800.1A priority Critical patent/CN103700385A/en
Publication of CN103700385A publication Critical patent/CN103700385A/en
Pending legal-status Critical Current

Abstract

The application discloses a video post-processing method, a media player, and a media playing method. The media player comprises a separation module, a decoding module, and a rendering module. The separation module separates an input media source file and outputs the resulting streams to the decoding module; the decoding module invokes the hardware decoder built into the graphics processor to decode the separated video stream; the rendering module renders the decoded video stream, invoking a custom presenter component to perform image post-processing on each frame of the video stream, and outputs the processed video stream. Because the video stream is decoded by the hardware decoding function of the graphics processor, that is, in the graphics processor rather than in the CPU, the CPU occupancy rate during video playback is reduced. Furthermore, because the data in the graphics processor undergoes video post-processing through the custom presenter component, CPU processing efficiency is improved, and effects such as color adjustment and subtitle overlay can be achieved even in hardware acceleration mode.

Description

Media player and playing method, and video post-processing method under hardware acceleration
Technical field
The application relates to the field of multimedia technology, and in particular to a video post-processing method under hardware acceleration, and to a media player and media playing method adopting this method.
Background technology
Most current video players use software decoding. As shown in Fig. 1, the encoded video in a specific format is first decoded by a decoder 101, the video is then processed by a video post-processor 102, the processed video is rendered by a renderer 103, and the rendered video is output to a display 104. This software decoding scheme places all of the video decoding and post-processing work on the CPU. However, on the one hand, the computational complexity of video coding algorithms, especially those used for high-definition video, is high; on the other hand, the amount of image pixel data, again especially for high-definition video, is large. As a result, the CPU often runs at full load when decoding video, especially HD video, causing the PC to respond slowly to other user commands, producing stuttering picture display and intermittent audio playback, and in the worst case leaving the operating system unresponsive.
To reduce the CPU load during video decoding, in recent years the major GPU (Graphics Processing Unit) manufacturers have successively developed GPU products that support hardware video decoding. Powerful GPUs support hardware decoding directly and can decode the coding formats used by HD video sources, such as MPEG2, H.264, WMV, and VC-1. For this purpose Microsoft created the DXVA (DirectX Video Acceleration) standard. If the GPU is used for decoding during playback, part or all of the decoding work otherwise borne by the CPU is transferred to the GPU, greatly reducing the CPU occupancy rate during video playback so that the CPU can respond to other user commands in time. However, once GPU-accelerated video decoding is enabled, the usual post-decoding processing cannot be carried out: color adjustment, subtitle overlay, and similar operations become unavailable, so the picture quality may be worse than with a pure software decoding pipeline, which can instead have a negative impact.
Summary of the invention
The application provides a video post-processing method, and a media player and media playing method adopting this method, such that regardless of whether hardware acceleration is used during video playback, image post-processing effects such as color adjustment and subtitle overlay can be achieved.
According to a first aspect of the application, a media player is provided, comprising: a separation module, for separating the input media source file by file type and outputting the result to the decoding module; a decoding module, for invoking the hardware decoder built into the graphics processor to decode the video stream obtained after separation; and a rendering module, for rendering the decoded video stream, invoking a custom presenter component to perform image post-processing on each frame of the video stream, and outputting the processed video stream.
The custom presenter component comprises at least one of a color unit, a subtitle unit, and a deformation unit. The color unit performs color adjustment on each frame of the video stream; the subtitle unit overlays subtitles on each frame of the video stream; the deformation unit rotates, scales, and translates each frame of the video stream.
The subtitle unit comprises: a creation subunit, for adding timestamps to subtitle text data introduced from an external interface to form a timestamped subtitle queue, creating a buffer for holding the subtitle image, rendering the current subtitle text found in the timestamped subtitle queue into a subtitle drawing image, applying subtitle effects, and outputting the subtitle drawing image with effects applied into the buffer to obtain the final subtitle image; and a loading subunit, for copying each frame of the video stream to the back surface, looking up the subtitle image corresponding to the current time, and compositing the found subtitle image onto the corresponding frame in the back surface.
The color unit is implemented on the basis of the Direct3D pixel shader; the rendering module is implemented on the basis of the DirectX VMR and Enhanced Video Renderer.
The graphics processor supports DirectX 9.0 or later interfaces and Direct3D Shader 3.0 or later interfaces.
According to a second aspect of the application, a media playing method is provided, comprising: a separation step, separating the input media source file by file type and outputting the result to the decoding module; a decoding step, invoking the hardware decoder built into the graphics processor to decode the video stream obtained after separation; and a rendering step, rendering the decoded video stream, invoking a custom presenter component to perform image post-processing on each frame of the video stream, and outputting the processed video stream.
The custom presenter component comprises at least one of a color unit, a subtitle unit, and a deformation unit. The color unit performs color adjustment on each frame of the video stream; the subtitle unit overlays subtitles on each frame; the deformation unit rotates, scales, and translates each frame. The rendering step comprises: a mixing step, collecting the input video stream and notifying the custom presenter component to process it; an execution step, in which the custom presenter component determines whether any of a color control task, a subtitle overlay task, and a deformation task exists, invoking the color unit to perform color adjustment if a color control task exists, invoking the subtitle unit to perform subtitle overlay if subtitles exist, and invoking the deformation unit to rotate, scale, or translate the image if a deformation task exists; and an output step, outputting the processed video stream to a display device for display.
According to a third aspect of the application, a video post-processing method under hardware acceleration is provided, comprising a customization step: on the basis of the DirectX VMR and Enhanced Video Renderer, the presenter component is customized to support image post-processing interfaces, the image post-processing interfaces comprising a color interface, a subtitle interface, and a deformation interface. The color interface implements color adjustment of each frame in the video stream; the subtitle interface implements subtitle overlay on each frame of the video stream; the deformation interface implements rotation, scaling, and translation of each frame in the video stream.
The graphics processor on which the hardware acceleration relies supports DirectX 9.0 or later interfaces and Direct3D Shader 3.0 or later interfaces.
The beneficial effects of the application are as follows. Because the video stream is decoded by the hardware decoding function built into the graphics processor, that is, the decoding of video stream data is carried out in the graphics processor rather than in the CPU, the CPU occupancy rate during video playback is reduced. And because the data in the graphics processor undergoes video post-processing by invoking the custom presenter component, CPU processing efficiency is improved while special effects such as color adjustment and subtitle overlay remain achievable under hardware acceleration mode.
Brief description of the drawings
Fig. 1 is a schematic diagram of a conventional software decoding process;
Fig. 2 is a schematic structural diagram of the DirectX VMR/EVR;
Fig. 3 is a schematic flow diagram of the VMR/EVR processing media data;
Fig. 4 is a schematic structural diagram of a media player according to an embodiment of the application;
Fig. 5 is a schematic processing-flow diagram of the rendering module in an embodiment of the application;
Fig. 6 is a schematic processing-flow diagram of the media player of an embodiment of the application;
Fig. 7 is a schematic processing-flow diagram of the subtitle unit of an embodiment of the application;
Fig. 8 is a schematic diagram of an application example of the media player of an embodiment of the application.
Detailed description
The application is described in further detail below through embodiments with reference to the accompanying drawings.
To address the problem that current multimedia players using hardware decoding cannot perform post-processing such as color adjustment and subtitle overlay, the application, based on Direct3D, enables post-processing such as color adjustment and subtitle overlay regardless of whether hardware acceleration is used during playback.
First, some technical terms used in the application are described:
(1) DirectX, Direct3D, DirectDraw, DirectShow: DirectX is a multimedia programming interface created and developed by Microsoft, consisting of four main parts: display, sound, input, and networking. The display part is further divided into DirectDraw, responsible for 2D acceleration, and Direct3D, responsible for 3D acceleration. The sound part includes DirectSound. DirectShow is a new-generation streaming-media processing toolkit built on top of DirectDraw and DirectSound, released together with the DirectX toolkit.
(2) Surface: a space allocated in advance for storing a composited image; in practice, a region of video memory.
(3) Copying a video image to the back surface: copying image data from one region of video-card memory to another.
(4) Subtitles: include external subtitles and internal subtitles. External subtitles are standalone subtitle files, i.e. independent files outside the video; internal subtitles are subtitle streams written into the video file itself, embedded by the video encoding.
The structure of the DirectX VMR/EVR (Video Mixing Renderer/Enhanced Video Renderer) and its processing of media data are described here with reference to Fig. 2 and Fig. 3.
To suit a variety of applications, the VMR adopts a modular design. In addition to several input pins (Pin 0, Pin 1, ..., Pin n), the VMR comprises two to five subcomponents. As shown in Fig. 2, the VMR comprises a mixer (Mixer), an image compositor (Compositor), an allocator-presenter (Allocator Presenter, also called the image presentation unit), a core synchronization unit (Core Synchronization Unit), and a window manager (Window Manager).
The mixer is responsible for collecting the information of each input stream and Z-ordering the streams (determining where each image appears in the window). The mixer also decides when each input pin accepts an input sample, and at the appropriate time notifies the image compositor to perform the actual mixing. The mixer additionally adds a timestamp to each output image. When the application supplies a bitmap to be displayed on top of the composited image, the mixer must ensure that the bitmap remains on top even if the Z-order of the input streams changes.
The image compositor performs the actual mixing, compositing the multiple input streams onto a single DirectDraw or Direct3D surface. The DirectDraw or Direct3D surface is provided by the allocator-presenter; as noted above, a surface is a space allocated in advance for storing the composited image. The VMR/EVR normally provides a default image compositor that performs two-dimensional alpha blending for the application, and the application may supply a custom image compositor.
The allocator-presenter allocates DirectDraw or Direct3D surfaces for the image compositor and is responsible for communicating with the graphics card. The application may supply a custom allocator-presenter to allocate the DirectDraw or Direct3D surfaces, thereby gaining the ability to access the video data before display.
The core synchronization unit computes image timestamps for the image compositor so that the compositing functions are carried out correctly.
The window manager is used only when the VMR operates in windowed mode. This modular structure allows applications to operate the VMR in different ways as needed: the VMR supports three operating modes (windowed, windowless, and renderless) and two mixing modes (single stream and multiple streams). Windowed mode is the default: the VMR creates a separate window and presents the video in it. Windowless mode presents the video directly on the application window, giving the application stronger control. Renderless mode serves applications that supply their own custom allocator-presenter.
As shown in Fig. 3, the data flow through the VMR/EVR is as follows: two or more input streams are mixed at the mixer, composited by the image compositor, and sent to the allocator-presenter for processing; the allocator-presenter then outputs the processed image to the display.
Based on the DirectX VMR/EVR, the design idea of the application is as follows: to process images before display (color adjustment, subtitle overlay, and so on), the first thing to implement is a custom allocator-presenter that replaces the system default. It obtains the video data before the video image is rendered to the display, and implements post-processing such as subtitle handling, color adjustment, and rotation/scaling.
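As a sketch of this design idea, the frame interception point can be modeled as follows. Surface and CustomPresenter are simplified stand-ins, not the real DirectShow interfaces (in a VMR9 implementation the presenter would implement IVMRSurfaceAllocator9/IVMRImagePresenter9 from vmr9.h); the code only illustrates where post-processing hooks run relative to presentation.

```cpp
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

// Simplified stand-in for a video-memory surface.
struct Surface {
    std::vector<float> pixels;
};

class CustomPresenter {
public:
    // Register a post-processing stage (color, subtitles, deformation, ...).
    void AddPostProcess(std::function<void(Surface&)> hook) {
        hooks_.push_back(std::move(hook));
    }
    // Analogue of IVMRImagePresenter9::PresentImage: the frame is available
    // here, before display, so every registered stage can touch it.
    void PresentImage(Surface& frame) {
        for (auto& h : hooks_) h(frame);
        ++presented_;
    }
    int presented() const { return presented_; }
private:
    std::vector<std::function<void(Surface&)>> hooks_;
    int presented_ = 0;
};
```

Because the hooks run inside the presenter, they see the frame after mixing but before it reaches the display, which is exactly the access a custom allocator-presenter gains over the system default.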
Embodiment 1
Fig. 4 shows an embodiment of a media player of the application, comprising: a separation module 101, a decoding module 102, and a rendering module 103.
The separation module 101 separates the input media source file by file type and outputs the result to the decoding module; separation can yield any of video stream data, audio stream data (including audio for different channels/languages), and subtitle stream data. The separation module can be implemented with relevant techniques commonly known to those skilled in the art, which the application does not limit.
The decoding module 102 invokes the hardware decoder built into the graphics processor to decode the video stream obtained after separation. It should be understood that the audio stream obtained after separation can be decoded by a corresponding audio decoder, and the decoded audio stream can be played through an audio output device. The decoded video stream may be data that has not undergone video post-processing or data that has; the specific compositing algorithm can be implemented with corresponding techniques in the art. The specific implementation of the decoding module can likewise adopt relevant techniques commonly known to those skilled in the art, which the application does not limit. The graphics processor supports DirectX 9.0 or later interfaces and Direct3D Shader 3.0 or later interfaces.
Because the video stream is decoded by the hardware decoding function built into the graphics processor, that is, video stream data is decoded in the GPU rather than in the CPU, the CPU occupancy rate during video playback is reduced and CPU processing efficiency is improved.
The rendering module 103 renders the decoded video stream and invokes the custom presenter component 111 to perform image post-processing on each frame of the video stream, outputting the processed video stream. The rendering module of this embodiment is implemented on the basis of the DirectX VMR and Enhanced Video Renderer and, as described above, likewise comprises a mixer, an image compositor, a core synchronization unit, and a window manager; the difference is that the allocator-presenter of this rendering module is the custom presenter component. The custom presenter component comprises at least one of a color unit, a subtitle unit, and a deformation unit: the color unit performs color adjustment on each frame of the video stream, the subtitle unit overlays subtitles on each frame, and the deformation unit rotates, scales, and translates each frame.
This embodiment replaces the allocator-presenter component in the DirectX VMR/EVR with the custom presenter component, which is compatible with the functions of the allocator-presenter component and additionally provides image post-processing interfaces. When video stream data reaches the custom presenter component, the data is in the GPU. If the rendering module needs to display an image, it calls the interfaces in the custom presenter component to do so, using DirectDraw for VMR7 and Direct3D for VMR9/EVR. When a processing instruction from the user interface is received, the rendering module calls the underlying hardware interfaces to process the image accordingly. Concretely, the custom presenter component builds on the allocator-presenter component of the DirectX VMR/EVR by creating image-processing interfaces, for example an IColorControl interface to handle color control and an ISubtitle interface to handle subtitle overlay.
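The interfaces described here might take shapes like the following. Only the interface names (IColorControl, ISubtitle) come from the text; the methods, parameters, and the StubColorControl helper are illustrative assumptions.

```cpp
#include <string>

// Hypothetical post-processing interfaces in the COM-like style the text
// describes. Method names and ranges are assumptions, not taken from the
// patent.
struct IColorControl {
    virtual ~IColorControl() = default;
    virtual void SetBrightness(float b) = 0;  // assumed range -1.0 .. 1.0
    virtual void SetContrast(float k) = 0;    // multiplier around mid-gray
    virtual void SetHue(float degrees) = 0;
    virtual void SetSaturation(float s) = 0;
};

struct ISubtitle {
    virtual ~ISubtitle() = default;
    // Queue subtitle text for display at the given timestamp (ms).
    virtual void AddSubtitle(int timestampMs, std::string text) = 0;
};

// A trivial in-memory implementation used to exercise IColorControl.
class StubColorControl : public IColorControl {
public:
    void SetBrightness(float b) override { brightness = b; }
    void SetContrast(float k) override { contrast = k; }
    void SetHue(float d) override { hue = d; }
    void SetSaturation(float s) override { saturation = s; }
    float brightness = 0.0f, contrast = 1.0f, hue = 0.0f, saturation = 1.0f;
};
```

In the player, the rendering module would query the custom presenter for these interfaces and forward user-interface commands to them.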
The processing flow in the rendering module can be seen in Fig. 5. First the mixer completes mixing; the custom presenter component then determines whether any of a color control task, a subtitle overlay task, or another task (such as a deformation task) exists. If a color control task exists, the color unit is invoked to perform color adjustment; if subtitles exist, the subtitle unit is invoked to overlay subtitles; if a deformation task exists, the deformation interface is invoked to rotate, scale, or translate the image. After all tasks have been handled, and of course when there is no task at all, the video stream is output directly. That is, the custom presenter component may process the tasks in parallel, or in a fixed order, for example first the deformation task, then the color control task, and finally the subtitle overlay task, or in some other order. Alternatively, the custom presenter component may first determine whether a color control task exists and, if so, perform color adjustment; then determine whether external/internal subtitles need to be displayed and, if so, load the generated subtitle image and composite it with the video image. One processing flow of the media player, as shown in Fig. 6, comprises the following steps:
Step S101: decode the input video source using the hardware decoding function of the system graphics card;
Step S102: determine whether a color control task exists; if so, execute step S103, otherwise execute step S104;
Step S103: perform color adjustment on the image, then continue with S104;
Step S104: determine whether a subtitle overlay task exists; if so, execute step S105, otherwise execute step S106;
Step S105: load the subtitles and composite them into the image, then continue with S106;
Step S106: determine whether a deformation task exists; if so, execute step S107, otherwise execute step S108;
Step S107: rotate, scale, and/or translate the video image;
Step S108: render and output the image;
Step S109: output the rendered video to the display for display.
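The S101-S109 dispatch above can be sketched as follows, treating every stage as optional. Frame and Tasks are simplified stand-ins: in the player the data lives in GPU surfaces, and the subtitle and deformation stages are only marked here rather than implemented.

```cpp
#include <cmath>
#include <optional>
#include <string>
#include <vector>

struct Frame {
    std::vector<float> y;   // luma samples, shader-style 0.0 .. 1.0
    std::string overlay;    // subtitle text composited onto the frame
};

struct Tasks {
    std::optional<float> brightness;      // color-control task (S102/S103)
    std::optional<std::string> subtitle;  // subtitle-overlay task (S104/S105)
    std::optional<float> rotateDeg;       // deformation task (S106/S107)
};

Frame postProcess(Frame f, const Tasks& t) {
    if (t.brightness)
        for (float& v : f.y) v += *t.brightness;  // color adjustment
    if (t.subtitle)
        f.overlay = *t.subtitle;                  // composite subtitle image
    if (t.rotateDeg) {
        // rotation/scaling/translation would transform the surface here
    }
    return f;                                     // S108/S109: render, display
}
```

A frame with no active tasks passes through unchanged, matching the "no task, direct output" branch described above.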
In one specific implementation, the rendering module renders images using DirectDraw or Direct3D with the following steps:
1) Create the display device system object; this is done when the custom presenter component initializes.
2) Create the display surface (IDirectDrawSurface7 or IDirect3DSurface9). When creating an IDirectDrawSurface7 surface, specify the 3D device flag so that the DirectDraw surface can also be processed together with Direct3D.
3) After each video frame has been mixed, keep the image in the surface. The surface image undergoes the series of image post-processing steps described above; after compositing onto the surface is complete, the image is rendered to the display device for playback.
4) Release the display device system object; this is done when the custom presenter component is destroyed, i.e. memory is freed after playback of the current file finishes.
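The four steps above tie the device's lifetime to the presenter's, which maps naturally onto RAII. This is a sketch under simplified types: Log and Presenter stand in for the Direct3D device state and the custom presenter; the real objects would be the DirectDraw/Direct3D device and surfaces.

```cpp
#include <string>
#include <vector>

// Records the order of lifecycle events for illustration.
struct Log {
    std::vector<std::string> events;
};

class Presenter {
public:
    explicit Presenter(Log& log) : log_(log) {
        log_.events.push_back("create device");   // step 1: device object
        log_.events.push_back("create surface");  // step 2: display surface
    }
    void renderFrame() {                          // step 3: per mixed frame
        log_.events.push_back("postprocess+render");
    }
    ~Presenter() {
        log_.events.push_back("release device");  // step 4: free on destroy
    }
private:
    Log& log_;
};
```

Constructing the presenter performs steps 1 and 2, each mixed frame runs step 3, and destruction (end of playback) performs step 4 automatically.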
The algorithms adopted in the color unit, subtitle unit, and deformation unit of the custom presenter component 111 can be implemented with algorithms commonly used in video image processing, or with the schemes of the application; the color unit and subtitle unit are each further described below.
In this embodiment, the color unit is implemented on the basis of the Direct3D pixel shader (Pixel Shader). A pixel shader is a program executed on the GPU of the graphics card during rasterization; unlike the vertex shader, its function cannot be emulated by Direct3D in software. It essentially replaces the multi-texturing stage of the fixed-function pipeline and provides the ability to manipulate individual pixels directly and to access the texture coordinates of each pixel. This direct access to pixels and texture coordinates enables a variety of special effects, such as multi-texturing, per-pixel lighting, depth of field, cloud simulation, fire simulation, and complex shadow techniques.
In one specific implementation, the steps for applying a pixel shader are as follows:
1) Write and compile the pixel shader code using the High Level Shader Language (HLSL);
2) Create an IDirect3DPixelShader9 object from the compiled shader code to represent the pixel shader;
3) Set the pixel shader with the IDirect3DDevice9::SetPixelShader method;
4) Destroy the pixel shader after the current video finishes.
The graphics processor of this embodiment supports DirectX 9.0 or later interfaces and Direct3D Shader 3.0 or later interfaces. That is, before using a pixel shader, it must first be determined whether the pixel shader version is Direct3D Shader 3.0 or later. The pixel shader version supported by the graphics card can be detected by checking the PixelShaderVersion member of the D3DCAPS9 structure against the D3DPS_VERSION macro; the code snippet below illustrates this check.
[Code listing rendered as an image in the original: pixel-shader version check, not recoverable.]
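Since the original listing is not recoverable, here is a sketch of what such a check might look like. The D3DPS_VERSION macro is reproduced as defined in d3d9.h so the snippet is self-contained; in a real program the value checked would be the PixelShaderVersion member filled in by IDirect3DDevice9::GetDeviceCaps.

```cpp
#include <cstdint>

// Reproduction of the d3d9.h macro that packs a pixel-shader version
// (major/minor) into the encoding used by D3DCAPS9::PixelShaderVersion.
#define D3DPS_VERSION(major, minor) (0xFFFF0000u | ((major) << 8) | (minor))

// Returns true when the reported capability is Pixel Shader 3.0 or later.
bool supportsPS30(uint32_t pixelShaderVersion) {
    return pixelShaderVersion >= D3DPS_VERSION(3, 0);
}
```

A driver reporting PS 2.0 would yield a value of 0xFFFF0200, which compares below the 3.0 threshold of 0xFFFF0300, so the player would fall back accordingly.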
Next, HLSL code is written according to the HLSL language rules and the color adjustment algorithm, then compiled, and finally set on the D3D object to perform the color adjustment. Note that in HLSL, color is generally represented as XRGB (X being a reserved position), usually expressed as a four-component vector of data type float4. Given a variable float4 color, the R, G, and B components can be accessed as color.x, color.y, and color.z respectively. In a 24-bit or 32-bit image, each RGB channel is 8 bits, i.e. a decimal integer from 0 to 255; in HLSL, however, the situation is different: 0 corresponds to float(0.0) and 255 to float(1.0), so white is no longer (255, 255, 255) but float3(1.0, 1.0, 1.0) or float4(1.0, 1.0, 1.0, 1.0).
The four most basic color adjustment algorithms are brightness adjustment, contrast adjustment, hue adjustment, and saturation adjustment. In one specific implementation: for brightness adjustment, it suffices to add a constant, ranging from -1 to 1, to the Y component of YUV; for contrast adjustment, bright areas need to become brighter and dark areas darker with a smooth transition, the simplest formula being k * (color - 0.5) + color, and the extended formula k * (color - 0.5) + 0.5; for hue adjustment, the basic formulas are U' = U x Cos(H) + V x Sin(H) and V' = V x Cos(H) - U x Sin(H), where H is the hue adjustment parameter expressed as an angle; and for saturation adjustment, U and V are each multiplied by a constant. A sample of the HLSL color-control code is given below.
[Code listing rendered as images in the original: HLSL color-control sample, not recoverable.]
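Since the HLSL sample itself is not recoverable, here is a CPU reimplementation of the same four adjustment formulas in C++; in the player these would run per pixel as an HLSL shader on the GPU. The function names are illustrative, and channels use the shader convention of 0.0 to 1.0.

```cpp
#include <cmath>

// Brightness: add a constant (range -1 .. 1) to the Y (luma) component.
float adjustBrightness(float y, float b) { return y + b; }

// Contrast: k * (color - 0.5) + 0.5 pushes values away from mid-gray
// while leaving 0.5 fixed, giving the smooth transition the text requires.
float adjustContrast(float c, float k) { return k * (c - 0.5f) + 0.5f; }

// Hue: rotate the (U, V) chroma pair by the angle h (radians here).
void adjustHue(float& u, float& v, float h) {
    float u2 = u * std::cos(h) + v * std::sin(h);
    float v2 = v * std::cos(h) - u * std::sin(h);
    u = u2;
    v = v2;
}

// Saturation: multiply both chroma components by a constant.
void adjustSaturation(float& u, float& v, float s) {
    u *= s;
    v *= s;
}
```

For example, a contrast factor k = 2 maps 0.75 to 1.0 and 0.25 to 0.0, brightening the bright and darkening the dark as the text describes.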
In this embodiment, when subtitles are to be loaded (whether external or internal), the video image needs to be copied to the back surface, subtitle compositing is performed before rendering to the primary surface, and the generated subtitle image is composited into the back surface. As shown in Fig. 7, the subtitle unit comprises two processing flows: outside the dashed box are the subtitle loading steps, and inside the dashed box is the construction process of the subtitle image.
The construction process of the subtitle image is: 1. obtain the subtitle text data introduced from the external interface; 2. add timestamps to the subtitle text data; 3. look up the current subtitle text in the timestamp queue; 4. create a buffer for holding the subtitle image; 5. set the size and position of the subtitle image; 6. based on the current subtitle text found in step 3, form a subtitle drawing image and apply subtitle effects; 7. output the subtitle drawing image with effects applied to the buffer, obtaining the final subtitle image.
The subtitle loading process is: 1. copy the video image to the back surface; 2. look up the subtitle image for the current time; 3. if a subtitle image for the current time is found, composite it into the back surface; 4. render the back surface and output it to the display.
The processing shown in Fig. 7 applies to both external and internal subtitles. External subtitles require a separate module to analyze the subtitle file type, read the file contents, and form a list of (timestamp, subtitle text) pairs; when rendering each frame, the subtitle text for the corresponding time is taken from the list, rendered into an image, and merged into the video image.
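The (timestamp, subtitle text) lookup described above can be sketched with an ordered map. The lookup rule, that a subtitle stays active until the next entry's timestamp, is an assumption, since the patent does not specify subtitle durations.

```cpp
#include <iterator>
#include <map>
#include <string>

// timestamp (ms) -> subtitle text, ordered so a range lookup is cheap.
using SubtitleList = std::map<int, std::string>;

// Returns the subtitle active at timeMs, or "" if none has started yet.
std::string subtitleAt(const SubtitleList& subs, int timeMs) {
    auto it = subs.upper_bound(timeMs);  // first entry strictly after timeMs
    if (it == subs.begin()) return "";   // before the first subtitle
    return std::prev(it)->second;        // latest entry at or before timeMs
}
```

When rendering each frame, the subtitle unit would call this lookup with the frame's presentation time and composite the returned text (as an image) onto the back surface.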
For internal subtitle streams, an additional filter (Filter) is required to read the subtitle stream, generate the subtitle images, and buffer them. At run time, the custom presenter component (Presenter) communicates with the filter to obtain the subtitle image and composite it into the video surface. The processing flow is essentially the same as for external subtitles; the difference is that an external subtitle source is extracted from its file once at load time, whereas internal subtitles are obtained in real time through the filter's input pin (PIN). An interface function such as CreateSubtitleFilter() creates the filter, after which it can be added to the graph (Graph) and connected to the subtitle stream.
In one example, the completed graph can be seen in Fig. 8. As Fig. 8 shows, the various data contained in the video file "seabed roaming.mkv" are first demultiplexed: the separated video data and the audio data of the different channels/languages are sent to the corresponding decoders, and the subtitle stream is delivered to the custom presenter component for processing. The decoded audio is played through the audio output device; video that does not need post-processing is sent, after decoding, to the renderer, merged with the subtitle image processed by the custom presenter component, rendered, and output to the display.
In this embodiment, the rendering of the video stream still takes place in the GPU; that is, even after GPU-accelerated video decoding is enabled, special effects such as color adjustment and subtitle overlay can still be achieved. This embodiment therefore realizes video post-processing under hardware acceleration with high operating efficiency, greatly saving CPU resources. In addition, regardless of whether hardware acceleration is used during playback, image post-processing effects such as color adjustment and subtitle overlay can be achieved.
Embodiment 2
This embodiment provides a media playing method, which comprises the following steps:
Separation step S201: demultiplex the input media source file according to its file type and output the result to the decoding module;
Decoding step S202: invoke the hardware decoder built into the graphics processor to decode the separated video stream;
Rendering step S203: render the decoded video stream, invoke the custom presenter component to perform image post-processing on each frame of the video stream, and output the processed video stream. Specifically, a mixer collects the input video stream and notifies the custom presenter component to process it. The custom presenter component then determines whether any of a color-control task, a subtitle-overlay task, or a transform task exists: if a color-control task exists, the color unit is invoked to perform color adjustment; if subtitles exist, the subtitle unit is invoked to overlay them; if a transform task exists, the transform interface is invoked to rotate, scale, or translate the image. Here, the color unit performs color adjustment on each frame of the video stream, the subtitle unit overlays subtitles on each frame, and the transform unit rotates, scales, and translates each frame.
Output step S204: output the processed video stream to a display device for display.
For the concrete implementation of each of the above steps, reference may be made to the implementation of the corresponding module of the media player in Embodiment 1, which is not repeated here.
Based on this embodiment, the application further provides a video post-processing method under hardware acceleration. In addition to the steps conventionally used under hardware acceleration, it comprises a customization step: based on the DirectX VMR (Video Mixing Renderer) and EVR (Enhanced Video Renderer), the presenter component is customized to support an image post-processing interface, which comprises a color interface, a subtitle interface, and a transform interface. The color interface performs color adjustment on each frame of the video stream, the subtitle interface overlays subtitles on each frame, and the transform interface rotates, scales, and translates each frame. The corresponding interface is invoked according to the actual video-processing requirements. The graphics processor on which hardware acceleration relies must support DirectX 9.0 or later and Direct3D Shader Model 3.0 or later. For the implementation of each step of this method embodiment, reference may again be made to Embodiment 1, which is not repeated here.
The video post-processing method of the above embodiments can be used with both software decoding and hardware decoding. Switching between software and hardware acceleration requires no extra steps; only the decoder component needs to be configured, its current state being either hardware decoding or software decoding, and no function of the downstream renderer is affected.
Those skilled in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include read-only memory, random-access memory, a magnetic disk, an optical disc, and the like.
The above content is a further detailed description of the present invention in conjunction with specific embodiments, and the specific implementation of the invention should not be regarded as limited to these descriptions. For those of ordinary skill in the art to which the invention pertains, several simple deductions or substitutions may also be made without departing from the inventive concept.

Claims (9)

1. A media player, characterized by comprising:
a separation module, configured to demultiplex an input media source file according to its file type and output the result to a decoding module;
the decoding module, configured to invoke a hardware decoder built into a graphics processor to decode the separated video stream;
a rendering module, configured to render the decoded video stream, invoke a custom presenter component to perform image post-processing on each frame of the video stream, and output the processed video stream.
2. The media player according to claim 1, characterized in that the custom presenter component comprises at least one of a color unit, a subtitle unit, and a transform unit, wherein the color unit is configured to perform color adjustment on each frame of the video stream, the subtitle unit is configured to overlay subtitles on each frame of the video stream, and the transform unit is configured to rotate, scale, and translate each frame of the video stream.
3. The media player according to claim 2, characterized in that the subtitle unit comprises:
a creation sub-unit, configured to timestamp subtitle text data introduced from an external interface to form a timestamped subtitle queue, create a buffer for storing subtitle images, render the current subtitle text found in the timestamped subtitle queue into a subtitle drawing image, apply subtitle effects, and output the subtitle drawing image after the effects are applied to the buffer to obtain the final subtitle image;
a loading sub-unit, configured to copy each frame of the video stream to a back surface, look up the subtitle image corresponding to the current time, and composite the found subtitle image onto the corresponding frame on the back surface.
4. The media player according to claim 2 or 3, characterized in that the color unit is implemented based on Direct3D pixel shaders, and the rendering module is implemented based on the DirectX VMR and Enhanced Video Renderer.
5. The media player according to claim 1, characterized in that the graphics processor supports DirectX 9.0 or later and Direct3D Shader Model 3.0 or later.
6. A media playing method, characterized by comprising:
a separation step: demultiplexing an input media source file according to its file type and outputting the result to a decoding module;
a decoding step: invoking a hardware decoder built into a graphics processor to decode the separated video stream;
a rendering step: rendering the decoded video stream, invoking a custom presenter component to perform image post-processing on each frame of the video stream, and outputting the processed video stream.
7. The media playing method according to claim 6, characterized in that the custom presenter component comprises at least one of a color unit, a subtitle unit, and a transform unit, the color unit being configured to perform color adjustment on each frame of the video stream, the subtitle unit being configured to overlay subtitles on each frame of the video stream, and the transform unit being configured to rotate, scale, and translate each frame of the video stream;
the rendering step comprising:
a mixing step: collecting the input video stream and notifying the custom presenter component to process it;
an execution step: the custom presenter component determining whether any of a color-control task, a subtitle-overlay task, or a transform task exists; if a color-control task exists, invoking the color unit to perform color adjustment; if subtitles exist, invoking the subtitle unit to overlay them; if a transform task exists, invoking the transform unit to rotate, scale, or translate the image;
an output step: outputting the processed video stream to a display device for display.
8. A video post-processing method under hardware acceleration, characterized by comprising:
a customization step: based on the DirectX VMR and Enhanced Video Renderer, customizing a presenter component to support an image post-processing interface, the image post-processing interface comprising a color interface, a subtitle interface, and a transform interface, wherein the color interface performs color adjustment on each frame of the video stream, the subtitle interface overlays subtitles on each frame of the video stream, and the transform interface rotates, scales, and translates each frame of the video stream.
9. The video post-processing method under hardware acceleration according to claim 8, characterized in that the graphics processor on which the hardware acceleration relies supports DirectX 9.0 or later and Direct3D Shader Model 3.0 or later.
CN201210369800.1A 2012-09-27 2012-09-27 Media player, playing method, and video post-processing method in hardware acceleration mode Pending CN103700385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210369800.1A CN103700385A (en) 2012-09-27 2012-09-27 Media player, playing method, and video post-processing method in hardware acceleration mode


Publications (1)

Publication Number Publication Date
CN103700385A true CN103700385A (en) 2014-04-02

Family

ID=50361886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210369800.1A Pending CN103700385A (en) 2012-09-27 2012-09-27 Media player, playing method, and video post-processing method in hardware acceleration mode

Country Status (1)

Country Link
CN (1) CN103700385A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577110A (en) * 2009-05-31 2009-11-11 腾讯科技(深圳)有限公司 Method for playing videos and video player
CN101778282A (en) * 2010-01-12 2010-07-14 北京暴风网际科技有限公司 Method for concurrently playing different media files
CN102055941A (en) * 2009-11-03 2011-05-11 腾讯科技(深圳)有限公司 Video player and video playing method
CN102523416A (en) * 2011-11-21 2012-06-27 苏州希图视鼎微电子有限公司 Seamless playing method for multiple segments of media streams


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PANG Ran: "Design of an Audio/Video Player Software Based on DirectX", Master's Thesis, Zhejiang University, 15 May 2006 (2006-05-15) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104540028A (en) * 2014-12-24 2015-04-22 上海影卓信息科技有限公司 Mobile platform based video beautifying interactive experience system
CN104540028B (en) * 2014-12-24 2018-04-20 上海影卓信息科技有限公司 A kind of video beautification interactive experience system based on mobile platform
CN104683853A (en) * 2015-02-04 2015-06-03 广州酷狗计算机科技有限公司 Multimedia file acquisition device and terminal
CN105678681A (en) * 2015-12-30 2016-06-15 广东威创视讯科技股份有限公司 GPU data processing method, GPU, PC architecture processor and GPU data processing system
CN106303659A (en) * 2016-08-22 2017-01-04 暴风集团股份有限公司 The method and system of picture and text captions are loaded in player
CN106899875A (en) * 2017-02-06 2017-06-27 合网络技术(北京)有限公司 The display control method and device of plug-in captions
CN109429037A (en) * 2017-09-01 2019-03-05 杭州海康威视数字技术股份有限公司 A kind of image processing method, device, equipment and system
CN107728997A (en) * 2017-10-31 2018-02-23 万兴科技股份有限公司 A kind of video player rendering system
CN107872691A (en) * 2017-11-09 2018-04-03 暴风集团股份有限公司 A kind of advertisement loading processing method, apparatus and system
CN108810652A (en) * 2018-06-04 2018-11-13 深圳汇通九州科技有限公司 A kind of information processing method and terminal device
CN112262570B (en) * 2018-06-12 2023-11-14 E·克里奥斯·夏皮拉 Method and computer system for automatically modifying high resolution video data in real time
CN112262570A (en) * 2018-06-12 2021-01-22 E·克里奥斯·夏皮拉 Method and system for automatic real-time frame segmentation of high-resolution video streams into constituent features and modification of features in individual frames to create multiple different linear views from the same video source simultaneously
CN110620954A (en) * 2018-06-20 2019-12-27 北京优酷科技有限公司 Video processing method and device for hard solution
CN110620954B (en) * 2018-06-20 2021-11-26 阿里巴巴(中国)有限公司 Video processing method, device and storage medium for hard solution
CN109151574B (en) * 2018-10-15 2020-03-24 Oppo广东移动通信有限公司 Video processing method, video processing device, electronic equipment and storage medium
US11562772B2 (en) 2018-10-15 2023-01-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video processing method, electronic device, and storage medium
CN109151574A (en) * 2018-10-15 2019-01-04 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN111526420A (en) * 2019-02-01 2020-08-11 北京右划网络科技有限公司 Video rendering method, electronic device and storage medium
CN113747198A (en) * 2021-06-25 2021-12-03 航天时代飞鸿技术有限公司 Unmanned aerial vehicle cluster video picture rendering and post-processing method, medium and device
CN113747198B (en) * 2021-06-25 2024-02-09 航天时代飞鸿技术有限公司 Unmanned aerial vehicle cluster video picture rendering and post-processing method, medium and device
CN114418887A (en) * 2022-01-19 2022-04-29 北京百度网讯科技有限公司 Image enhancement method and device, electronic equipment and storage medium
CN114418887B (en) * 2022-01-19 2022-12-20 北京百度网讯科技有限公司 Image enhancement method and device, electronic equipment and storage medium
CN117241068A (en) * 2023-11-15 2023-12-15 北京医百科技有限公司 Video subtitle generating method and device
CN117241068B (en) * 2023-11-15 2024-01-19 北京医百科技有限公司 Video subtitle generating method and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140402