CN114827716A - Method and device for creating a video player in WPF (Windows Presentation Foundation) and related components - Google Patents


Info

Publication number
CN114827716A
CN114827716A (application CN202210220261.9A)
Authority
CN
China
Prior art keywords
filter
video
audio
stream data
creating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210220261.9A
Other languages
Chinese (zh)
Other versions
CN114827716B (en)
Inventor
谭志文
李盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ruan Niu Technology Group Co ltd
Original Assignee
Afirstsoft Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Afirstsoft Co Ltd filed Critical Afirstsoft Co Ltd
Priority to CN202210220261.9A priority Critical patent/CN114827716B/en
Publication of CN114827716A publication Critical patent/CN114827716A/en
Application granted granted Critical
Publication of CN114827716B publication Critical patent/CN114827716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
                  • H04N21/4341 Demultiplexing of audio and video streams
                • H04N21/439 Processing of audio elementary streams
                • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
                  • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and a device for creating a video player in WPF, and related components. The method reads an audio/video file and separates it into audio stream data and video stream data; decodes the audio stream data, renders it with an audio rendering filter, and outputs the rendered audio stream data to the sound card player; decodes the video stream data, receives and processes the decoded video stream data with a callback function in a data acquisition filter, and throws the decoded video stream data out to a target application program so that the target application program outputs it; and the target application program, having received the video stream data through the callback function in the data acquisition filter, sends it to a preset empty rendering filter, which destroys and discards the received video stream data. The method can create a video player that supports playback of all common audio and video formats and does not depend on a video renderer created by DirectX for video rendering.

Description

Method and device for creating video player in WPF (Windows Presentation Foundation) and related components
Technical Field
The invention relates to the field of computer audio and video, and in particular to a method and a device for creating a video player in WPF (Windows Presentation Foundation) and related components.
Background
At present, in Windows desktop applications developed with WPF, WPF provides the MediaElement multimedia player for playing multimedia audio/video files, but this player has a series of problems, specifically as follows: 1. only some common video formats, such as avi and mpeg, are supported, while other video formats such as h265 and mov are not supported unless a third-party audio/video decoder is installed manually; 2. it depends on the DirectX graphics driver in Windows, so if the Windows operating system does not have the DirectX graphics driver installed, or creating the video renderer through DirectX fails, video playback fails.
That is to say, the MediaElement multimedia player in WPF has limited format support, requires manual installation of a third-party audio/video decoder, and relies on DirectX for video rendering.
Disclosure of Invention
The invention aims to provide a method, a device and related components for creating a video player in WPF (Windows Presentation Foundation), so as to solve the problem that the existing multimedia player in WPF has limited format support and relies on DirectX for video rendering, resulting in poor practicability.
In order to solve the above technical problem, the invention adopts the following technical scheme: a method of creating a video player in WPF is provided, comprising:
reading an input audio/video file, analyzing the audio/video file by using an audio/video separation filter, and separating to obtain audio stream data and video stream data;
decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting;
decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by using a callback function in a data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data;
and the target application program sends the received video stream data to a preset empty rendering filter, so that the empty rendering filter destroys and discards the received video stream data.
In addition, an object of the present invention is to provide an apparatus for creating a video player in a WPF, including:
the separation unit is used for reading an input audio/video file, analyzing the audio/video file by using an audio/video separation filter and separating to obtain audio stream data and video stream data;
the audio processing unit is used for decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting;
the video processing unit is used for decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by utilizing a callback function in the data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data;
and the data discarding unit is used for sending the received video stream data to a preset empty rendering filter by the target application program, so that the empty rendering filter destroys and discards the received video stream data.
In addition, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method for creating a video player in a WPF according to the first aspect.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the method for creating a video player in WPF according to the first aspect.
The embodiment of the invention discloses a method, a device and related components for creating a video player in WPF, wherein the method comprises the following steps: reading an input audio/video file, analyzing the audio/video file by using an audio/video separation filter, and separating it into audio stream data and video stream data; decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting; decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by using a callback function in a data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data; and the target application program sends the received video stream data to a preset empty rendering filter, so that the empty rendering filter destroys and discards the received video stream data. The method can create a video player that supports playback of all common audio and video formats, is compatible with Windows 7 and all later Windows operating systems, and does not depend on a video renderer created by DirectX for video rendering.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of a conventional multimedia player according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for creating a video player in a WPF according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a method for creating a video player in a WPF according to an embodiment of the present invention;
fig. 4 is a schematic effect diagram of an image before horizontal flipping processing in a method for creating a video player in a WPF according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating an effect of performing horizontal flipping processing on an image in a method for creating a video player in a WPF according to an embodiment of the present invention;
fig. 6 is a schematic block diagram of an apparatus for creating a video player in a WPF according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Description of the prior art:
With reference to fig. 1, the multimedia player of the system automatically calls the audio/video decoders to perform the audio/video rendering and playing process as follows: first, the file testvideo.mp4 is read and its file information is parsed by an Audio/Video separation filter (LAV Splitter), which separates it into Audio stream data and Video stream data; the Video stream data is decoded by a Video Decoder filter (LAV Video Decoder) and output to a Video Renderer filter (Video Mixing Renderer 9) for video playback; the Audio stream data is decoded by an Audio Decoder filter (LAV Audio Decoder) and output to an Audio Renderer filter (Default DirectSound Device) for sound playback.
As stated above, the MediaElement multimedia player in WPF has limited format support in its playing process, requires manual installation of a third-party audio/video decoder, and relies on DirectX for video rendering. Meanwhile, most third-party video players play videos through a WinForm window handle, displaying video by embedding WinForm controls in the WPF desktop application program.
In order to solve the above problems, the application provides a method for creating, in WPF, a video player that supports playback of all common audio and video formats, is compatible with Windows 7 and all later Windows operating systems, and does not depend on a video renderer created by DirectX for video rendering.
The method comprises the following specific steps:
referring to fig. 2, fig. 2 is a flowchart illustrating a method for creating a video player in a WPF according to an embodiment of the present invention;
as shown in fig. 2, the method includes steps S101 to S104.
S101, reading an input audio and video file, analyzing the audio and video file by using an audio and video separation filter, and separating to obtain audio stream data and video stream data;
S102, decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting;
S103, decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by using a callback function in a data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data;
s104, the target application program sends the received video stream data to a preset empty rendering filter, and the empty rendering filter destroys and discards the received video stream data.
Referring to fig. 3, in this embodiment, an audio/video separation filter (LAV Splitter) is used to parse an input audio/video file (testvideo.mp4) and separate it into audio stream data and video stream data.
For the audio stream data, the compressed audio stream data is first decoded by the pre-created Audio Decoder filter (LAV Audio Decoder), then rendered by the pre-created audio rendering filter (Default DirectSound Device), and finally the rendered audio stream data is output to the sound card player for sound playback.
For the video stream data, the compressed video stream data is first decoded by the pre-created Video Decoder filter (LAV Video Decoder), the decoded video stream data is then collected by the pre-created data acquisition filter (SampleGrabber), and finally the application program receives the video stream data through the callback function in the data acquisition filter.
Meanwhile, the target application program sends the received video stream data to the pre-created Null rendering filter (Null Renderer). The Null rendering filter discards every received video stream sample, that is, it destroys the data and releases the memory without displaying or rendering it; in other words, it skips the step of video rendering with a video renderer created by DirectX. The data acquisition filter therefore collects the video data after the video stream is decoded, converts it into an image format supported by WPF, and the video picture is then displayed.
It should be noted that the DirectShow rendering pipeline requires a complete data stream pipeline before data can flow through it; the collected data is therefore processed in the callback, while the original data remains in the pipeline and its memory is not released there.
In a specific embodiment, before the step S101, the method includes the following steps:
s10, creating a multimedia playing class, and creating a filter manager in the multimedia playing class;
s11, enumerating all filters registered in the windows operating system through an ICreateDevEnum interface in the DirectShow API to obtain a filter sequence;
s12, searching and creating a file source filter in the filter sequence;
s13, searching and creating a data acquisition filter in the filter sequence, and setting corresponding acquisition parameters for the data acquisition filter through an ISampleGrabber interface in a DirectShow API;
s14, searching and creating a null rendering filter in the filter sequence;
s15, adding the file source filter, the data acquisition filter and the empty rendering filter into the filter manager;
s16, creating an audio and video separation filter, an audio decoder filter and a video decoder filter, and adding the audio and video separation filter, the audio decoder filter and the video decoder filter to the filter manager;
s17, after each filter is added into the filter manager, connecting the input pin and the output pin of each filter, wherein the output pin of the file source filter is connected with the input pin of the audio/video separation filter; an audio output pin of the audio and video separation filter is connected with an input pin of the audio decoder filter; an input pin of the video decoder filter is connected with a video output pin of the audio and video separation filter; the input pin of the data acquisition filter is connected with the output pin of the video decoder filter; and an input pin of the empty rendering filter is connected with an output pin of the data acquisition filter.
In this embodiment, the DirectShow API is first wrapped in C# as a COM API interface. A multimedia playing class (MediaPlayer) is then created, which implements the ISampleGrabberCB callback interface of the data acquisition filter. A filter manager (FilterGraph) is then created in the multimedia playing class; it is used to manage all filters, including adding the audio and video decoders, renderers and so on, and controlling the connection, play, pause and stop of the filters. The filter manager adds and connects filters through the IFilterGraph2 COM interface. It should be noted that the IFilterGraph2 interface cannot be instantiated directly: an instance of the filter manager must be created first, and the IFilterGraph2 interface can then be cast to the IMediaControl COM interface, which is used to control play, pause and stop of the filters.
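By way of illustration only (this sketch is not part of the patent text; it assumes the open-source DirectShowLib .NET bindings for the DirectShow COM interfaces, and the class and member names other than the COM interfaces are assumptions), the multimedia playing class and filter manager described above could be set up roughly as follows:

```csharp
// Illustrative sketch only; assumes the DirectShowLib .NET bindings (DirectShowLib namespace).
using System;
using DirectShowLib;

public class MediaPlayer : ISampleGrabberCB
{
    private IFilterGraph2 filterGraph;   // the filter manager (FilterGraph COM object)
    private IMediaControl mediaControl;  // same COM object cast to the run/pause/stop interface

    public void CreateFilterManager()
    {
        // IFilterGraph2 cannot be instantiated directly: create the FilterGraph instance first,
        // then cast it to the interfaces that are needed.
        filterGraph = (IFilterGraph2)new FilterGraph();
        mediaControl = (IMediaControl)filterGraph;
    }

    public void Play()  { mediaControl.Run(); }
    public void Pause() { mediaControl.Pause(); }
    public void Stop()  { mediaControl.Stop(); }

    // ISampleGrabberCB callbacks of the data acquisition filter; the BufferCB body is
    // sketched in a later example.
    public int SampleCB(double sampleTime, IMediaSample pSample) { return 0; }
    public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { return 0; }
}
```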
All filters registered in the Windows operating system are then enumerated through the ICreateDevEnum interface in the DirectShow API to obtain a filter sequence. A File Source filter is found and created by the name File Source (Async.), the IFileSourceFilter COM interface provided by the File Source filter is used to load the audio/video file (for example testvideo.mp4), and the File Source filter is added to the filter manager (FilterGraph) so that the audio/video file can be read through it.
An audio video separation filter (LAV Splitter) is created and added to the filter manager (FilterGraph).
An Audio Decoder filter (LAV Audio Decoder) is created and added to the filter manager (FilterGraph).
All filters registered in the Windows operating system are enumerated through the ICreateDevEnum interface in the DirectShow API, and the audio rendering filter is found and created by the name Default DirectSound Device.
A Video Decoder filter (LAV Video Decoder) is created and added to the filter manager (FilterGraph).
All filters registered in the Windows operating system are enumerated through the ICreateDevEnum interface in the DirectShow API, a data acquisition filter is found and created by the name SampleGrabber, and it is added to the filter manager (FilterGraph).
All filters registered in the Windows operating system are enumerated through the ICreateDevEnum interface in the DirectShow API, and a Null rendering filter is found and created by the name Null Renderer.
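The repeated enumerate-and-look-up-by-name step in the preceding paragraphs can be pictured with the following hedged sketch (again assuming the DirectShowLib helpers built on ICreateDevEnum; the helper name AddFilterByName is an assumption, not the patent's code):

```csharp
// Hypothetical helper: finds a registered DirectShow filter by its friendly name,
// e.g. "File Source (Async.)", "SampleGrabber" or "Null Renderer", and adds it to the graph.
using System;
using DirectShowLib;

public static class FilterFactory
{
    public static IBaseFilter AddFilterByName(IFilterGraph2 graph, string friendlyName)
    {
        // DsDevice.GetDevicesOfCat enumerates registered filters via ICreateDevEnum.
        foreach (DsDevice device in DsDevice.GetDevicesOfCat(FilterCategory.LegacyAmFilterCategory))
        {
            if (device.Name != friendlyName) continue;

            Guid baseFilterId = typeof(IBaseFilter).GUID;
            device.Mon.BindToObject(null, null, ref baseFilterId, out object obj);
            var filter = (IBaseFilter)obj;

            graph.AddFilter(filter, friendlyName);  // register it with the filter manager
            return filter;
        }
        throw new InvalidOperationException($"Filter '{friendlyName}' is not registered.");
    }
}
```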
It should be noted that in the DirectShow streaming media development kit, the inputs and outputs of every filter are exposed through IPin interfaces, called pins, which carry the audio/video format information of the input and output. Accordingly, the output pin of the file source filter is connected with the input pin of the audio/video separation filter; the audio output pin of the audio/video separation filter is connected with the input pin of the audio decoder filter; the input pin of the video decoder filter is connected with the video output pin of the audio/video separation filter; the input pin of the data acquisition filter is connected with the output pin of the video decoder filter; and the input pin of the null rendering filter is connected with the output pin of the data acquisition filter.
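A minimal sketch of this pin wiring, under the same DirectShowLib assumption, is shown below; the filter variable names are hypothetical, and the splitter's audio/video output pin indexes are assumptions that may differ per file (they could instead be chosen by inspecting each pin's media type):

```csharp
// Sketch of the pin connections described above (DirectShowLib assumed; not the patent's code).
using DirectShowLib;

public static class GraphWiring
{
    public static void ConnectPins(IFilterGraph2 graph,
                                   IBaseFilter fileSource, IBaseFilter splitter,
                                   IBaseFilter audioDecoder, IBaseFilter audioRenderer,
                                   IBaseFilter videoDecoder, IBaseFilter sampleGrabber,
                                   IBaseFilter nullRenderer)
    {
        IPin Out(IBaseFilter f, int i) => DsFindPin.ByDirection(f, PinDirection.Output, i);
        IPin In(IBaseFilter f)         => DsFindPin.ByDirection(f, PinDirection.Input, 0);

        graph.Connect(Out(fileSource, 0),   In(splitter));       // file source -> AV splitter
        graph.Connect(Out(splitter, 0),     In(videoDecoder));   // splitter video pin -> video decoder
        graph.Connect(Out(splitter, 1),     In(audioDecoder));   // splitter audio pin -> audio decoder
        graph.Connect(Out(audioDecoder, 0), In(audioRenderer));  // audio decoder -> DirectSound renderer
        graph.Connect(Out(videoDecoder, 0), In(sampleGrabber));  // video decoder -> SampleGrabber
        graph.Connect(Out(sampleGrabber, 0), In(nullRenderer));  // SampleGrabber -> Null Renderer
    }
}
```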
In a specific embodiment, the step S13 includes the following steps:
s20, calling a SetOneShot method to set the data acquisition filter to continuously receive the decoded video stream data;
s21, calling a SetBufferSamples method to set the data acquisition filter not to copy a buffer area after receiving the decoded video stream data;
s22, calling a SetCallback method to set the data acquisition filter to throw out decoded video stream data through a callback function after receiving the decoded video stream data;
and S23, calling a SetMediaType method to set the video format of the decoded video stream data received by the data acquisition filter to RGB32.
In this embodiment, the ISampleGrabber interface is obtained from the data acquisition filter so that the collected video data can be converted into a video image that can be displayed in WPF. Four attributes first need to be set through the ISampleGrabber interface, as follows:
1. Set the filter to keep collecting video stream data without stopping; this is achieved by calling the SetOneShot(false) method.
2. Set the filter not to copy the collected video stream data into an internal buffer; this is achieved by calling the SetBufferSamples(false) method.
3. Set the collected video stream data to be thrown out through a callback function; this is achieved by calling SetCallback(this, 1), where this refers to the multimedia playing class (MediaPlayer) and 1 indicates that the BufferCB method of the ISampleGrabberCB interface, int BufferCB(double SampleTime, IntPtr pBuffer, int BufferLen), is used.
4. Set the format of the collected video to RGB32; this is achieved by calling the SetMediaType(AM_MEDIA_TYPE) method, where the major media type (majortype) in the AM_MEDIA_TYPE structure must be specified as video and the sub media type (subtype) as RGB32. It should be noted that RGB32 is used because the output pin of the Video Decoder filter (LAV Video Decoder) supports output in RGB32 format, and this format is also convenient for converting and displaying the video in WPF.
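Under the same DirectShowLib assumption, these four settings might be written roughly as follows (a sketch, not the patent's implementation; the helper name Configure is an assumption):

```csharp
// Sketch of the four ISampleGrabber settings listed above (DirectShowLib types assumed).
using DirectShowLib;

public static class GrabberSetup
{
    public static void Configure(ISampleGrabber grabber, ISampleGrabberCB callbackOwner)
    {
        grabber.SetOneShot(false);        // 1. keep collecting samples, do not stop after the first one
        grabber.SetBufferSamples(false);  // 2. do not keep an internal copy of each buffer

        // 3. deliver samples through BufferCB (callback method index 1)
        grabber.SetCallback(callbackOwner, 1);

        // 4. only accept decoded video in RGB32
        var mediaType = new AMMediaType
        {
            majorType = MediaType.Video,
            subType = MediaSubType.RGB32
        };
        grabber.SetMediaType(mediaType);
        DsUtils.FreeAMMediaType(mediaType); // release the unmanaged parts of the structure
    }
}
```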
In a specific embodiment, the step S103 includes the following steps:
s30, the data acquisition filter acquires video stream data with a format of RGB32 through an ISamppleGrabberCB interface in a DirectShow API by using a BufferCB callback method, and copies the video stream data;
and S31, the data acquisition filter performs format conversion processing on the acquired video data and throws the video data out of the target application program, so that the target application program outputs video pictures.
In this embodiment, after receiving RGB32 video stream data by the buffer cb callback method, the video stream data is copied, and then the video stream data is thrown out by the asynchronous callback, so as to avoid the situation of video stream data blocking.
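A hedged sketch of this copy-then-throw-out behaviour is given below on a small standalone callback class (in the patent the multimedia playing class itself implements ISampleGrabberCB; the FrameReady event name is an assumption):

```csharp
// Sketch only: copies each RGB32 frame out of the DirectShow buffer and hands it to the
// application asynchronously so the rendering pipeline is never blocked.
using System;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using DirectShowLib;

public class SampleGrabberCallback : ISampleGrabberCB
{
    // Raised on a worker thread with a private copy of the frame (hypothetical event).
    public event Action<byte[]> FrameReady;

    public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
    {
        // Copy first: pBuffer is only valid while this sample is still inside the pipeline.
        var frame = new byte[bufferLen];
        Marshal.Copy(pBuffer, frame, 0, bufferLen);

        // Throw the copy out through an asynchronous callback.
        Task.Run(() => FrameReady?.Invoke(frame));
        return 0;
    }

    // Not used when SetCallback(..., 1) selects BufferCB.
    public int SampleCB(double sampleTime, IMediaSample pSample) { return 0; }
}
```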
In a specific embodiment, before the step S101, the method further includes the following steps:
s40, creating a multimedia control class, and inheriting an Image control Image class in the WPF;
s41, creating a graph conversion group based on a constructor, adding a rotation conversion function and a zooming conversion function in the graph conversion group, and setting a graph conversion attribute of the graph conversion group to an Image class;
s42, creating a multimedia playing class in the multimedia control class, and registering the multimedia playing class to receive the callback of the RGB32 image format;
s43, creating a write image format class, and setting the image format to pixelformats.
In this embodiment, after the audio/video data pipeline for the video file has been built through the DirectShow API, the Run method of the IMediaControl interface of the filter manager (FilterGraph) is called to start playing, i.e. rendering, the audio/video data. Because WPF does not support the RGB32 image pixel format (where R represents red, G represents green and B represents blue) and only supports the BGR32 image pixel format, which is the reverse of RGB32, writing the data collected through the data acquisition filter (SampleGrabber) callback method directly with the WriteableBitmap class in WPF produces an inverted video picture, as shown in fig. 4. The image data therefore needs to be flipped horizontally and rotated by 180 degrees. The usual approach is to traverse the collected image pixel data and modify the pixels to convert the pixel format; although this works, it occupies a great deal of CPU. The present scheme instead reduces CPU usage by modifying the layout of the control in WPF, without modifying the pixel format of the collected RGB32 image. The specific implementation is as follows:
A customized multimedia control class (MediaElement) is created, inheriting the Image control class provided by WPF. In the constructor, a graphic transform group is created: a rotation transform is added to the group with the rotation angle set to 180 degrees, and a scale transform is added with the horizontal scale set to -1 and the vertical scale set to 1; the created transform group is then assigned to the graphic transform property of the Image class. A multimedia playing class (MediaPlayer) is then created in the customized multimedia control class (MediaElement), a callback for receiving the RGB32 image format is registered in the multimedia playing class, and a write image format class (WriteableBitmap) is created with the image format set to PixelFormats.Bgr32. After the data acquisition filter pins are connected, the width and height of the image format are obtained through the GetConnectedMediaType method of the ISampleGrabber interface, and the WriteableBitmap is set as the Source property of the Image class. The video image rendering effect after this modification is shown in fig. 5.
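By way of illustration (a sketch under the assumption of standard WPF APIs; the class name MediaElementEx, the method names OnFrameReady/InitializeSurface, the use of RenderTransform and the 96-dpi values are not taken from the patent):

```csharp
// Sketch of the WPF-side display: a custom control derived from Image that flips/rotates the
// frame via render transforms instead of rewriting the RGB32 pixel data.
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class MediaElementEx : Image
{
    private WriteableBitmap bitmap;
    private int width, height;

    public MediaElementEx()
    {
        // Rotate 180 degrees and mirror horizontally so the frames appear upright.
        var transforms = new TransformGroup();
        transforms.Children.Add(new RotateTransform(180));
        transforms.Children.Add(new ScaleTransform(-1, 1));
        RenderTransform = transforms;                 // assigned here to the render transform
        RenderTransformOrigin = new Point(0.5, 0.5);  // rotate/flip around the control centre
    }

    // Called once the SampleGrabber pins are connected and the frame size is known.
    public void InitializeSurface(int frameWidth, int frameHeight)
    {
        width = frameWidth;
        height = frameHeight;
        bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgr32, null);
        Source = bitmap;
    }

    // Receives the copied RGB32 frame thrown out by the data acquisition callback.
    public void OnFrameReady(byte[] frame)
    {
        Dispatcher.Invoke(() =>
            bitmap.WritePixels(new Int32Rect(0, 0, width, height), frame, width * 4, 0));
    }
}
```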
Through the above method, in a Windows desktop application program developed with WPF, a third-party audio/video decoder is loaded automatically based on the DirectShow streaming media development kit, the video renderer created by DirectX is skipped for video rendering, the video data is collected after the video is decoded and converted into an image format supported by WPF, and the video picture is then displayed.
Embodiments of the present invention further provide an apparatus for creating a video player in a WPF, where the apparatus is configured to execute any one of the foregoing embodiments of the method for creating a video player in a WPF. Specifically, referring to fig. 6, fig. 6 is a schematic block diagram of an apparatus for creating a video player in a WPF according to an embodiment of the present invention.
As shown in fig. 6, an apparatus 500 for creating a video player in WPF includes:
the separation unit 501 is configured to read an input audio/video file, analyze the audio/video file by using an audio/video separation filter, and separate the audio/video file to obtain audio stream data and video stream data;
the audio processing unit 502 is configured to decode the audio stream data based on a preset audio decoder filter, render the audio stream data by using an audio rendering filter, and input the rendered audio stream data to a sound card player for outputting;
the video processing unit 503 is configured to decode the video stream data based on a preset video decoder filter, receive and process the decoded video stream data by using a callback function in the data acquisition filter, and throw the decoded video stream data out to a target application program, so that the target application program outputs the video stream data;
a data discarding unit 504, configured to send the received video stream data to a preset empty rendering filter by the target application program, so that the empty rendering filter destroys and discards the received video stream data.
The device can create a video player that supports playback of all common audio and video formats, is compatible with Windows 7 and all later Windows operating systems, and does not depend on a video renderer created by DirectX for video rendering.
In a specific embodiment, the apparatus further includes the following units:
the class unit is used for creating a multimedia control class and inheriting an Image control Image class in the WPF;
the conversion unit is used for creating a graph conversion group based on a construction function, adding a rotation conversion function and a zooming conversion function in the graph conversion group, and setting a graph conversion attribute of the graph conversion group in an Image class;
the registration unit is used for creating a multimedia playing class in the multimedia control class and registering a callback of receiving an RGB32 image format in the multimedia playing class;
a format setting unit for creating a write image format class and setting the image format to PixelFormats.Bgr32.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described apparatus for creating a video player in WPF may be implemented in the form of a computer program which may be run on a computer device as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 1100 is a server, and the server may be an independent server or a server cluster including a plurality of servers.
Referring to fig. 7, the computer device 1100 includes a processor 1102, memory and network interface 1105 connected by a system bus 1101, where the memory may include non-volatile storage media 1103 and internal memory 1104.
The non-volatile storage medium 1103 may store an operating system 11031 and computer programs 11032. The computer program 11032, when executed, may cause the processor 1102 to perform a method of creating a video player in WPF.
The processor 1102 is configured to provide computing and control capabilities that support the operation of the overall computing device 1100.
The internal memory 1104 provides an environment for running the computer program 11032 in the non-volatile storage medium 1103, and the computer program 11032, when executed by the processor 1102, may cause the processor 1102 to perform a method of creating a video player in WPF.
The network interface 1105 is used for network communications, such as to provide for the transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing device 1100 to which aspects of the present invention may be applied, and that a particular computing device 1100 may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 7 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 7, and are not described herein again.
It should be appreciated that in embodiments of the present invention, the processor 1102 may be a Central Processing Unit (CPU), and the processor 1102 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In another embodiment of the present invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements a method of creating a video player in a WPF according to an embodiment of the present invention.
The storage medium is a physical, non-transitory storage medium, and may be any physical storage medium capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of creating a video player in a WPF, comprising:
reading an input audio and video file, analyzing the audio and video file by using an audio and video separation filter, and separating to obtain audio stream data and video stream data;
decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting;
decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by using a callback function in a data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data;
and the target application program sends the received video stream data to a preset empty rendering filter, so that the empty rendering filter destroys and discards the received video stream data.
2. The method for creating a video player in a WPF as claimed in claim 1, wherein before the reading of an input audio/video file, parsing the audio/video file by using an audio/video separation filter, and separating to obtain audio stream data and video stream data, the method comprises:
creating a multimedia playing class and creating a filter manager in the multimedia playing class;
enumerating all filters registered in a windows operating system through an ICreateDevEnum interface in a DirectShow API to obtain a filter sequence;
searching and creating a file source filter in the filter sequence;
searching and creating a data acquisition filter in the filter sequence, and setting corresponding acquisition parameters for the data acquisition filter through an ISampleGrabber interface in a DirectShow API;
searching and creating an empty rendering filter in the filter sequence;
adding the file source filter, the data acquisition filter and the empty rendering filter into the filter manager;
creating an audio and video separation filter, an audio decoder filter and a video decoder filter, and adding the audio and video separation filter, the audio decoder filter and the video decoder filter to the filter manager;
after each filter is added into the filter manager, connecting input and output pins of each filter, wherein the output pin of the file source filter is connected with the input pin of the audio and video separation filter; an audio output pin of the audio and video separation filter is connected with an input pin of the audio decoder filter; an input pin of the video decoder filter is connected with a video output pin of the audio and video separation filter; the input pin of the data acquisition filter is connected with the output pin of the video decoder filter; and an input pin of the empty rendering filter is connected with an output pin of the data acquisition filter.
3. The method of claim 2, wherein searching and creating a data acquisition filter in the filter sequence, and setting corresponding acquisition parameters for the data acquisition filter through an ISampleGrabber interface in DirectShow API comprises:
calling a SetOneShot method to set the data acquisition filter to continuously receive the decoded video stream data;
calling a SetBufferSamples method to set the data acquisition filter not to copy a buffer area after receiving the decoded video stream data;
calling a SetCallback method to set the data acquisition filter to throw out the decoded video stream data through a callback function after receiving the decoded video stream data;
and calling a SetMediaType method to set the video format of the decoded video stream data received by the data acquisition filter to RGB32.
4. The method of claim 3, wherein the receiving and processing the decoded video stream data by using the callback function in the data collection filter and throwing the decoded video stream data to the target application program to enable the target application program to output the video stream data comprises:
the data acquisition filter acquires video stream data with a format of RGB32 through an ISampleGrabberCB interface in a DirectShow API by using a BufferCB callback method, and copies the video stream data;
and the data acquisition filter performs format conversion processing on the acquired video data and throws the video data out of the target application program, so that the target application program outputs video pictures.
5. The method for creating a video player in a WPF as claimed in claim 1, wherein the method further comprises, before reading the input audio/video file and analyzing the file information using an audio/video separation filter to obtain audio stream data and video stream data:
creating a multimedia control class and inheriting an Image control Image class in the WPF;
creating a graph conversion group based on a construction function, adding a rotation conversion function and a zooming conversion function in the graph conversion group, and setting a graph conversion attribute of the graph conversion group in an Image class;
creating a multimedia playing class in the multimedia control class, and registering a callback of receiving an RGB32 image format in the multimedia playing class;
creating a write image format class and setting the image format to PixelFormats.Bgr32.
6. The method of creating a video player in a WPF of claim 2, wherein said creating an audiovisual separation filter, an audio decoder filter, a video decoder filter, comprises:
installing the installation package released by the audio/video decoder to obtain the audio/video decoder, and copying the directory files of the audio/video decoder to the directory of the target application program;
dynamically reading the .ax suffix files and creating the audio/video separation filter, the video decoder filter and the audio decoder filter;
loading the corresponding filter .ax file through the system kernel API function LoadLibraryW;
returning a module pointer of the filter .ax file through the LoadLibraryW function, passing the module pointer to the system kernel API function GetProcAddress, and then returning a class object access address;
and obtaining an IClassFactory interface from the class object access address, and creating the filter object through the IClassFactory interface.
7. An apparatus for creating a video player in a WPF, comprising:
the separation unit is used for reading an input audio/video file, analyzing the audio/video file by using an audio/video separation filter and separating to obtain audio stream data and video stream data;
the audio processing unit is used for decoding the audio stream data based on a preset audio decoder filter, rendering the audio stream data by using an audio rendering filter, and inputting the rendered audio stream data to a sound card player for outputting;
the video processing unit is used for decoding the video stream data based on a preset video decoder filter, receiving and processing the decoded video stream data by utilizing a callback function in the data acquisition filter, and throwing the decoded video stream data out to a target application program so that the target application program outputs the video stream data;
and the data discarding unit is used for sending the received video stream data to a preset empty rendering filter by the target application program, so that the empty rendering filter destroys and discards the received video stream data.
8. The apparatus for creating a video player in a WPF as claimed in claim 7, further comprising:
the class unit is used for creating a multimedia control class and inheriting an Image control Image class in the WPF;
the conversion unit is used for creating a graph conversion group based on a construction function, adding a rotation conversion function and a zooming conversion function in the graph conversion group, and setting a graph conversion attribute of the graph conversion group in an Image class;
the registration unit is used for creating a multimedia playing class in the multimedia control class and registering a callback of receiving an RGB32 image format in the multimedia playing class;
a format setting unit for creating a write image format class and setting the image format to PixelFormats.Bgr32.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of creating a video player in a WPF as claimed in any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to carry out the method of creating a video player in a WPF according to any one of claims 1 to 6.
CN202210220261.9A 2022-03-08 2022-03-08 Method, device and related components for creating video player in WPF Active CN114827716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210220261.9A CN114827716B (en) 2022-03-08 2022-03-08 Method, device and related components for creating video player in WPF


Publications (2)

Publication Number Publication Date
CN114827716A true CN114827716A (en) 2022-07-29
CN114827716B CN114827716B (en) 2023-08-11

Family

ID=82528623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210220261.9A Active CN114827716B (en) 2022-03-08 2022-03-08 Method, device and related components for creating video player in WPF

Country Status (1)

Country Link
CN (1) CN114827716B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080281592A1 (en) * 2007-05-11 2008-11-13 General Instrument Corporation Method and Apparatus for Annotating Video Content With Metadata Generated Using Speech Recognition Technology
US20110078532A1 (en) * 2009-09-29 2011-03-31 Musigy Usa, Inc. Method and system for low-latency transfer protocol
US20160021334A1 (en) * 2013-03-11 2016-01-21 Video Dubber Ltd. Method, Apparatus and System For Regenerating Voice Intonation In Automatically Dubbed Videos
US20200260149A1 (en) * 2017-12-29 2020-08-13 Tencent Technology (Shenzhen) Company Limited Live streaming sharing method, and related device and system
CN109448613A (en) * 2019-01-10 2019-03-08 成都腾木科技有限公司 A kind of bar optical projection system based on WPF
CN112153447A (en) * 2020-09-27 2020-12-29 海信视像科技股份有限公司 Display device and sound and picture synchronous control method
CN113473226A (en) * 2021-08-09 2021-10-01 深圳软牛科技有限公司 Method and device for improving video rendering efficiency, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜洪涛 (JIANG Hongtao): "Design and Implementation of a Cloud Gaming Latency Visualization System Based on WPF", Industrial Control Computer (工业控制计算机), pages 76 - 77 *

Also Published As

Publication number Publication date
CN114827716B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
US10721282B2 (en) Media acceleration for virtual computing services
US11347370B2 (en) Method and system for video recording
CN109194960B (en) Image frame rendering method and device and electronic equipment
KR101034080B1 (en) Uniform video decoding and display
US11457272B2 (en) Video processing method, electronic device, and computer-readable medium
US9048859B2 (en) Method and apparatus for compressing and decompressing data
US20100235820A1 (en) Hosted application platform with extensible media format
CN110166810B (en) Video rendering engine switching method, device and equipment and readable storage medium
CN109151966B (en) Terminal control method, terminal control device, terminal equipment and storage medium
US10554713B2 (en) Low latency application streaming using temporal frame transformation
US20180146243A1 (en) Method and system for managing buffers
CN109361950B (en) Video processing method and device, electronic equipment and storage medium
GB2528558A (en) Sampling, fault management, and/or context switching via a computer pipeline
CN110933495A (en) Video playing method and device based on embedded system
CN113226501A (en) Streaming media image providing device and method for application program
CN109587561B (en) Video processing method and device, electronic equipment and storage medium
US20040264383A1 (en) Media foundation topology
CN114827716B (en) Method, device and related components for creating video player in WPF
KR20160131827A (en) System for cloud streaming service, method of image cloud streaming service using alpha level of color bit and apparatus for the same
US10560727B2 (en) Server structure for supporting multiple sessions of virtualization
US8880789B2 (en) Optimal power usage in decoding a content stream stored in a secondary storage
US7920747B2 (en) Pre-distribution image scaling for screen size
TWI506442B (en) Multiple simultaneous displays on the same screen
CN111641867B (en) Video output method, device, electronic equipment and storage medium
CN114615546B (en) Video playing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 1301-1310, building 2, jinlitong financial center building, 1100 Xingye Road, Haiwang community, Xin'an street, Bao'an District, Shenzhen, Guangdong 518000
Patentee after: Shenzhen Ruan Niu Technology Group Co.,Ltd.
Address before: 1301-1310, building 2, jinlitong financial center building, 1100 Xingye Road, Haiwang community, Xin'an street, Bao'an District, Shenzhen, Guangdong 518000
Patentee before: AFIRSTSOFT CO.,LTD.