CN117641010A - Front-end plug-in-free streaming media player and working method thereof - Google Patents

Front-end plug-in-free streaming media player and working method thereof

Info

Publication number
CN117641010A
CN117641010A
Authority
CN
China
Prior art keywords
video
data
streaming media
media player
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311540653.4A
Other languages
Chinese (zh)
Inventor
王冠冠
陶富
戴金林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qifeng Technology Co ltd
Original Assignee
Qifeng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qifeng Technology Co ltd filed Critical Qifeng Technology Co ltd
Priority to CN202311540653.4A
Publication of CN117641010A
Legal status: Pending

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a front-end plug-in-free streaming media player and a working method thereof. The front-end plug-in-free streaming media player comprises a streaming media player and further comprises a data acquisition and audio/video display system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module, and the code stream processing module is connected with the video display module. The invention realizes a streaming media player that requires no plug-in installation, effectively improves development efficiency, is better compatible with browsers, and greatly reduces the integration burden on developers and the complexity of extending functions.

Description

Front-end plug-in-free streaming media player and working method thereof
Technical Field
The invention belongs to the technical fields of Web development and computer multimedia, and relates to displaying video pictures from transmitted media streams and a corresponding system, in particular to a front-end plug-in-free streaming media player and a working method thereof, which use Web standard technologies to play audio and video files in a browser without additional plug-ins.
Background
Current transformer substation monitoring systems use installed plug-ins to play streaming media and video, whereas a plug-in-free player is packaged around the browser kernel and needs no installed plug-in. Plug-in players are extremely unfriendly to browsers and make it difficult for front-end developers to debug video streams. A plug-in-free player provides a web video streaming solution that supports business systems, is complete in function, open source and easy to extend, plays smoothly with high performance, does not stutter or crash the browser after long playing sessions, and interacts and integrates well with the browser. It greatly reduces the cost of integration between developers, the cost of debugging and the browser resources consumed, and is more visually integrated.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a front-end plug-in-free streaming media player and a working method thereof, which can realize a streaming media player without installing plug-ins, effectively improve development efficiency, be better compatible with browsers, and greatly reduce the integration burden on developers and the complexity of extending functions.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
The front-end plug-in-free streaming media player comprises a streaming media player and is characterized in that: it further comprises a data acquisition and audio/video display system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module; the streaming media player is in data connection with the data acquisition module through a streaming media service, the data acquisition module is used for collecting the code stream data transmitted by the streaming media player, and the data acquisition module is connected with the code stream processing module; when code stream data are sent to the data acquisition module, the data acquisition module stores the current frame data of the code stream, monitors changes of the code stream data, and then transmits the code stream data to the code stream processing module; after receiving the code stream data from the data acquisition module, the code stream processing module judges whether the data are transmitted for the first time, and if so, creates a codec, acquires the video carrier element in html5 of the video display module, and initializes a media source; the code stream is then transmitted to the media source for encoding and decoding, queued locally and marked in order as n1, with subsequent data following in sequence ([n1, n2, ..., nn]), and the collected data are transmitted to the media source for processing in a first-in first-out manner; the processed video clip is played by setting the src attribute of the video element to a URL of the media source object;
the code stream processing module is connected with the video display module, the video display module is used for rendering and displaying the video stream, and when the decoded audio and video data need to be rendered on a screen, the video display module renders binary data of the audio and video data;
the device control link system comprises a device control module and a device camera, wherein the device control module controls the device camera through a streaming media player, the device control module is connected with the streaming media player and transmits a control signal to the streaming media player through user mouse input, the streaming media player transmits a control instruction to streaming media service, and the streaming media service controls the device camera to move and zoom a visual angle;
the video control system comprises a video control module, the video control module is connected with the streaming media player, the streaming media player sends a control instruction to the streaming media service after being controlled by the video control module, the streaming media player is connected with the video display module, and the streaming media player displays the audio and video data after they are rendered.
As a preferable technical scheme of the invention: after being processed by the code stream processing module and played by the video display module, video clips are discarded and not stored; once the transmission of the stream is manually closed, no further data processing or video playing is carried out.
As a preferable technical scheme of the invention: when the decoded audio and video data are cached audio and video data, the video data are rendered using GPU acceleration technology, and the audio is rendered through the Web Audio API provided by the browser.
As a preferable technical scheme of the invention: when the decoded audio and video data are online audio and video data, the front-end plug-in-free streaming media player has network adaptability, and the code rate can be adjusted according to the network condition.
As a preferable technical scheme of the invention: the video control module comprises control buttons and a display screen, wherein the control buttons comprise a play/pause button, a mute button, a full screen button, a speed control button, an image quality selection button, a dialogue button, a picture grabbing button and a video recording button, and the display screen is used for displaying a progress bar, a time display, and error and loading state prompts.
The invention provides a front-end plug-in-free streaming media player, which comprises a streaming media player, a data acquisition and audio/video display system, a device control link system and a video control system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module, the device control link system comprises a device control module and a device camera, the video control system comprises a video control module, and the video control module is connected with the streaming media player.
The data acquisition module acquires the data transmitted by the media stream and obtains it asynchronously according to the supplied service address, for example binary data transmitted over websocket or http connections, and transmits it to the code stream processing module; the code stream processing module assembles the binary data, encodes and decodes it, and queues the resulting data for processing. When the code stream processing module acquires the first batch of code stream data, a corresponding decoder is selected for decoding. The native html5 video tag is obtained, and the native media source technology is called to generate a link address that is assigned to the html5 video tag in the video display module for playing; subsequent code stream data enter the queue and are fed to the decoder for playing. Taking this as a base point, real-time pictures are then generated in sequence, following the method of dynamically generating pictures from data acquired in time order, until live monitoring is manually closed; the video playback function requires the time-ordered acquisition data transmission to be completed.
A front-end plug-in-free streaming media player working method is characterized in that:
differential pulse code modulation (DPCM) is performed using the difference between the predicted value P(n) and the current pixel X(n),
P(n) = Σ_m W(n, m) * X(n-m)   (m indexes the pixels adjacent to X(n))
in this formula, W(n, m) is a given weight, which varies according to the distance of the pixel from the center pixel; this predicted value P(n) is then used as the basis for DPCM encoding,
in addition, the quantization error generated during decoding can be represented by:
x'(n) = x(n) + ΔQ
where x'(n) is the decoded output, x(n) is the original signal, and ΔQ is the quantization error,
motion compensation (Motion Compensation):
motion vector: MV, representing a motion vector between two frames,
current frame pixel value: i (x, y), representing the pixel value at coordinates (x, y) on the current frame image,
motion compensated frame pixel values:
I'(x,y)=I(x+MV_x,y+MV_y)
this formula achieves motion compensation by finding, for each position on the current frame, the pixel value at the corresponding motion-vector-displaced position in the reference frame,
transform (Transform):
transform coefficients: T(u, v), representing the coefficients of an image transform, typically a Discrete Cosine Transform (DCT),
image block pixel values: I(x, y), representing the pixel values at coordinates (x, y) on the input image,
transformed coefficients:
C(u,v) = 2 * Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} I(x,y) * cos((π/N)*(x+1/2)*u) * cos((π/N)*(y+1/2)*v)
C(u, v) is the transformed frequency domain coefficient, representing the amplitude at position (u, v) in the frequency domain,
I(x, y) is the pixel value of the input image in the spatial domain, representing the intensity at location (x, y),
u and v denote the frequencies in the horizontal and vertical directions of the frequency domain respectively,
in image compression and video coding, the coefficients of the DCT typically undergo quantization to reduce the range of representation of the values,
(DCT) video decoding algorithm formula:
DCT[i][j] = 2 * Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} cos((π/N)*(k+1/2)*i) * cos((π/N)*(l+1/2)*j) * input[k][l]
where N is the size of the macroblock, i and j are the position indices of the current macroblock, k and l are the indices of the input data, input[k][l] is the value of the input data, and DCT[i][j] is the output value after the DCT transform,
this formula represents the discrete cosine transform calculation process, converts the image block into transform coefficients of the frequency domain,
entropy Coding (Entropy Coding):
entropy coding algorithms, such as Huffman coding or context-adaptive binary arithmetic coding (CABAC), are used,
an example formula is Shannon's theoretical entropy, the lower bound for entropy coding:
H(X) = -Σ_{i=1}^{n} p_i * log2(p_i),
the video rendering interface control adopts a matrix algorithm:
the formula is as follows:
the 2 x 2 matrix linear transformation in two dimensions uses the following matrix transformation formula:
[x′]=[a b][x]
[y′]=[c d][y]
where a, b, c and d are the elements of the matrix, x and y are the input coordinates, and x′ and y′ are the output coordinates,
video image rendering algorithm:
color mixing formula:
R=S+D*(1-Sa)
this formula is used to calculate the color mix of two pixels: assuming there are two pixels S and D, with S relatively forward along the z-axis (i.e. above) and D relatively backward along the z-axis (i.e. below), the final color value is the color of the upper pixel S plus the color of the lower pixel D multiplied by (1 - Sa), where Sa is the alpha (transparency) of the upper pixel,
the illumination formula:
L(v)=Ls(v)+Lt(v)+Lb(v)
texture mapping: UV mapping is a texture mapping algorithm, and the formula of UV mapping is as follows:
(u,v)=(u',v')×T
where (u, v) is texture coordinates, (u ', v') is vertex coordinates, T is a transformation matrix,
z-buffering algorithm: the Z-buffer algorithm is a depth test algorithm, and the formula of the Z-buffer algorithm is as follows:
Znew=Zold+Znear*(1-alpha)
where Znew is the new depth value, Zold is the old depth value, Znear is the depth value of the object, and alpha is the transparency of the object.
Compared with the prior art, the invention has the beneficial effects that:
the invention realizes the plug-in-free streaming media player by connecting and matching the video display module, the video control module, the code stream processing module and the equipment control module, does not need to install plug-ins, improves the development efficiency, can be better compatible with a browser, and greatly saves the joint aligning intelligence burden of developers and the complexity of expanding functions.
Drawings
FIG. 1 is an audio-visual presentation of data acquisition;
FIG. 2 is a device control link diagram;
FIG. 3 is a diagram of a code stream processing code core method;
FIG. 4 is a player effects diagram of FIG. 1;
FIG. 5 is a player effect diagram of FIG. 2.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
As shown in FIGS. 1-5, the invention provides a front-end plug-in-free streaming media player, which comprises a streaming media player and further comprises a data acquisition and audio/video display system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module; the streaming media player is in data connection with the data acquisition module through a streaming media service, the data acquisition module is used for collecting the code stream data transmitted by the streaming media player, and the data acquisition module is connected with the code stream processing module; when code stream data are sent to the data acquisition module, the data acquisition module stores the current frame data of the code stream, monitors changes of the code stream data, and then transmits the code stream data to the code stream processing module; after receiving the code stream data from the data acquisition module, the code stream processing module judges whether the data are transmitted for the first time, and if so, creates a codec, acquires the video carrier element in html5 of the video display module, and initializes a media source; the code stream is then transmitted to the media source for encoding and decoding, queued locally and marked in order as n1, with subsequent data following in sequence ([n1, n2, ..., nn]), and the collected data are transmitted to the media source for processing in a first-in first-out manner; the processed video clip is played by setting the src attribute of the video element to a URL of the media source object;
the code stream processing module is connected with the video display module, the video display module is used for rendering and displaying the video stream, and when the decoded audio and video data need to be rendered on a screen, the video display module renders binary data of the audio and video data;
the device control link system comprises a device control module and a device camera, wherein the device control module controls the device camera through a streaming media player, the device control module is connected with the streaming media player and transmits a control signal to the streaming media player through user mouse input, the streaming media player transmits a control instruction to streaming media service, and the streaming media service controls the device camera to move and zoom a visual angle;
the video control system comprises a video control module, the video control module is connected with the streaming media player, the streaming media player sends a control instruction to the streaming media service after being controlled by the video control module, the streaming media player is connected with the video display module, and the streaming media player displays the audio and video data after they are rendered.
After being processed by the code stream processing module and played by the video display module, video clips are discarded and not stored; once the transmission of the stream is manually closed, no further data processing or video playing is carried out.
When the decoded audio and video data are cached audio and video data, the video data are rendered using GPU acceleration technology, and the audio is rendered through the Web Audio API provided by the browser.
When the decoded audio and video data are online audio and video data, the front-end plug-in-free streaming media player has network adaptability, and the code rate can be adjusted according to the network condition.
The video control module comprises control buttons and a display screen, wherein the control buttons comprise a play/pause button, a mute button, a full screen button, a speed control button, an image quality selection button, a dialogue button, a picture grabbing button and a video recording button, and the display screen is used for displaying a progress bar, a time display, and error and loading state prompts.
The invention provides a front-end plug-in-free streaming media player, which comprises a streaming media player, a data acquisition and audio/video display system, a device control link system and a video control system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module, the device control link system comprises a device control module and a device camera, the video control system comprises a video control module, and the video control module is connected with the streaming media player.
The data acquisition module acquires the data transmitted by the media stream, then carries out queue processing on the encoded and decoded data transmitted to the code stream processing module, asynchronously acquires the data according to the transmitted service address, such as the binary data transmitted by websocket, http, and connects and transmits the data to the code stream processing module, and the code stream processing module integrates and encodes and decodes the binary data and carries out data processing. When the code stream processing module acquires the first code stream acquisition data, a corresponding decoder is selected for decoding. And (3) acquiring the video tag of the original html5, calling a native media source technology to generate a link address to the video tag of the original html5 in the video display module, playing, feeding the video to a queuing decoder for playing by a subsequent code stream entering queue, taking the video as a base point, and then generating real-time pictures in sequence according to a method of dynamically generating pictures by time sequence acquisition data until live broadcast monitoring is manually closed, wherein a video playback function needs time sequence acquisition data transmission to be completed.
The data acquisition module is the front-end service data acquisition layer. When the front end initializes the player, a data acquisition module and a code stream processing module are loaded respectively. The data acquisition module serves as the entry point for all media stream data acquisition and collects the code stream data transmitted as streaming media; when streaming media are sent to the data acquisition module, it stores the current frame data of the code stream, a front-end function monitors changes of the data, and the data are passed on to the next step for processing.
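As an illustration of this acquisition layer, the sketch below receives binary stream chunks over a WebSocket, keeps the most recent chunk, and forwards every chunk to the next stage. The class and callback names are hypothetical and are not taken from the patent.

```typescript
// Hypothetical data-acquisition layer: receives binary stream chunks over a
// WebSocket, stores the current frame, and forwards each chunk downstream.
type ChunkHandler = (chunk: Uint8Array) => void;

class DataAcquisition {
  private socket: WebSocket;
  private currentFrame: Uint8Array | null = null; // last received chunk

  constructor(serviceUrl: string, private onChunk: ChunkHandler) {
    this.socket = new WebSocket(serviceUrl);
    this.socket.binaryType = "arraybuffer"; // receive raw binary data
    this.socket.onmessage = (event: MessageEvent<ArrayBuffer>) => {
      const chunk = new Uint8Array(event.data);
      this.currentFrame = chunk; // store the current frame data
      this.onChunk(chunk);       // hand off to the next processing step
    };
  }

  /** Most recently received chunk, mirroring the "current frame data" kept by the module. */
  get latestFrame(): Uint8Array | null {
    return this.currentFrame;
  }

  close(): void {
    this.socket.close();
  }
}
```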
The code stream processing module is the code stream data processing layer in the overall combination and covers a series of processes such as video data acquisition, decoding and rendering. Video source acquisition: the player first needs to acquire the video source, which may be a local video file, an online video stream, a live source, etc.; for online video, protocols such as HTTP and WS (WebSocket) are typically used. Once the video source is obtained, the player needs to parse the video file format to understand the basic information of the video, such as resolution, frame rate and encoding mode; common video container formats include MP4, fMP4, FLV, MPEG-TS, etc. After parsing the file format, the player needs to extract the bitstream data from the video file, including the audio and video streams; for multimedia container formats such as MP4, a container demultiplexer (demux) is typically used to extract the audio and video data. After the audio and video code streams are obtained, the player needs to decode them, converting the compressed audio and video data into raw data that can be played or rendered; common video coding formats include H.264, H.265 (HEVC), VP9, etc., and audio coding formats include AAC, MP3, Opus, etc. A corresponding decoder is selected according to the video coding format. The player performs playback control on the decoded data, including start, pause, fast forward, rewind, etc. The specific implementation principle is that the module judges whether the data is transmitted for the first time after receiving it from the data acquisition module; if so, a codec is created, the video carrier element in html5 of the video display module is obtained, and a media source (MediaSource) is initialized. The code stream is then transmitted to the media source for encoding and decoding, queued locally and marked in order as n1, with subsequent data following in sequence ([n1, n2, ..., nn]), and the collected data is transmitted to the media source for processing on a first-in first-out principle. The processed video clip is played by setting the src attribute to a URL of the media source object. Video clips that have been processed and played by the video display module are discarded and not stored. Once the transmission of the stream is manually closed, no further data processing or video playing is performed.
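A minimal sketch of this Media Source Extensions flow is given below: on the first chunk a MediaSource and SourceBuffer are created and the object URL is assigned to the html5 video element's src attribute, and later chunks are appended first-in first-out. The codec string and class names are assumptions for illustration and must match the actual stream.

```typescript
// Hypothetical MSE-based bitstream processor: the first chunk initializes the
// MediaSource and the video element, subsequent chunks are appended FIFO.
class BitstreamProcessor {
  private mediaSource = new MediaSource();
  private sourceBuffer: SourceBuffer | null = null;
  private queue: Uint8Array[] = [];          // [n1, n2, ..., nn], FIFO order
  // The codec string is an assumption; it has to match the real stream.
  private readonly mime = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

  constructor(video: HTMLVideoElement) {
    // The object URL of the MediaSource becomes the <video> src attribute.
    video.src = URL.createObjectURL(this.mediaSource);
    this.mediaSource.addEventListener("sourceopen", () => {
      this.sourceBuffer = this.mediaSource.addSourceBuffer(this.mime);
      this.sourceBuffer.addEventListener("updateend", () => this.flush());
      this.flush();
    });
  }

  push(chunk: Uint8Array): void {
    this.queue.push(chunk); // enqueue in arrival order
    this.flush();
  }

  private flush(): void {
    if (!this.sourceBuffer || this.sourceBuffer.updating) return;
    const next = this.queue.shift(); // first-in, first-out
    if (next) this.sourceBuffer.appendBuffer(next);
  }
}
```

A data acquisition instance such as the one sketched earlier could simply pass each received chunk to push().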
The video display module is the layer that renders and presents video streams. Rendering: when the decoded audio and video data need to be rendered on the screen, video is rendered using GPU acceleration techniques, and audio is rendered through the Web Audio API provided by the browser. For online video streaming, the player has a degree of network adaptability, and the code rate can be adjusted according to the network condition to ensure smooth playback. Error handling: possible errors, such as network connection interruption or decoding failure, are handled and a user-friendly error indication is provided.
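Where frames are decoded in the page itself rather than by the video element, rendering could look roughly like the sketch below, which draws decoded RGBA frames on a canvas and plays decoded PCM through the Web Audio API; the frame and sample formats are assumptions for illustration.

```typescript
// Hypothetical rendering layer: draws decoded RGBA frames on a canvas
// (hardware-accelerated by the browser) and plays decoded PCM via Web Audio.
class Renderer {
  private ctx: CanvasRenderingContext2D;
  private audioCtx = new AudioContext();

  constructor(canvas: HTMLCanvasElement) {
    this.ctx = canvas.getContext("2d")!;
  }

  // Assumes `pixels` is tightly packed RGBA of size width * height * 4.
  drawFrame(pixels: Uint8ClampedArray, width: number, height: number): void {
    const frame = new ImageData(pixels, width, height);
    this.ctx.putImageData(frame, 0, 0);
  }

  // Assumes `samples` is mono Float32 PCM at the given sample rate.
  playAudio(samples: Float32Array, sampleRate: number): void {
    const buffer = this.audioCtx.createBuffer(1, samples.length, sampleRate);
    buffer.copyToChannel(samples, 0);
    const source = this.audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(this.audioCtx.destination);
    source.start();
  }
}
```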
The device control module is the device management and control layer, used for controlling and operating the camera viewing angle in the video. The device control module allows the user to change the camera view in the video through mouse input, including up and down movement, panning and zooming, etc. The device control module needs to monitor the user's input events, such as mouse movements and keyboard keys, to adjust the camera's viewing angle accordingly. The camera transition should be smooth during user interaction to provide a good user experience; this may involve an interpolation algorithm to make the camera motion look more natural. Depending on the needs of the application scenario, it may be desirable to limit the range of view movement of the user to ensure that the user does not see what should not be seen. The device control module supports an event callback mechanism, so that the application can be notified when the state of the camera changes and execute corresponding operations.
In some cases, the motion of the camera may need to be interpolated by an animation effect to produce a smoother transition.
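A minimal sketch of such a device control layer is shown below, translating mouse wheel and drag input into pan/tilt/zoom commands sent to the streaming service; the command message format and all names are hypothetical.

```typescript
// Hypothetical device-control layer: translates mouse wheel and drag input
// into pan/tilt/zoom commands and forwards them to the streaming service.
interface PtzCommand {
  action: "pan" | "tilt" | "zoom";
  value: number;
}

class DeviceControl {
  constructor(private socket: WebSocket, surface: HTMLElement) {
    surface.addEventListener(
      "wheel",
      (e: WheelEvent) => {
        e.preventDefault();
        this.send({ action: "zoom", value: e.deltaY < 0 ? 1 : -1 });
      },
      { passive: false }
    );
    surface.addEventListener("mousemove", (e: MouseEvent) => {
      if (e.buttons !== 1) return; // only while the left button is held
      if (e.movementX !== 0) this.send({ action: "pan", value: e.movementX });
      if (e.movementY !== 0) this.send({ action: "tilt", value: e.movementY });
    });
  }

  private send(cmd: PtzCommand): void {
    if (this.socket.readyState === WebSocket.OPEN) {
      this.socket.send(JSON.stringify(cmd)); // forward to the streaming service
    }
  }
}
```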
The video control module is the control layer of the player for video streaming, covering operations such as playing, pausing and stopping the video. It comprises a play/pause button: buttons for starting and pausing video playback are provided, which is one of the most basic and common control functions.
Mute button: allows the user to mute or un-mute with one click.
Progress bar: displays the current playing progress of the video and allows the user to jump to a different position in the video by dragging.
Full screen button: allows the user to switch the video to full screen mode, providing a larger viewing area.
Time display: shows the current playing time and the total duration of the video.
Speed control button: allows the user to speed up or slow down the playback speed of the video.
Image quality selection button: if the video provides multiple resolution or quality options, allows the user to select an image quality suited to his bandwidth.
Dialogue button: starts a voice call function between the user and the device.
Error and loading status hint: displays corresponding prompts to the user when errors occur or the loading state changes, so as to provide a good user experience.
Capture button: allows the user to save a screenshot of the current frame with one click.
Video recording button: allows the user to record the video and save the video content with one click for later viewing.
In the front-end plug-in-free streaming media player working method, differential pulse code modulation (DPCM) is performed using the difference between the predicted value P(n) and the current pixel X(n),
P(n) = Σ_m W(n, m) * X(n-m)   (m indexes the pixels adjacent to X(n))
in this formula, W(n, m) is a given weight, which varies according to the distance of the pixel from the center pixel; this predicted value P(n) is then used as the basis for DPCM encoding,
in addition, the quantization error generated during decoding can be represented by:
x'(n) = x(n) + ΔQ
where x'(n) is the decoded output, x(n) is the original signal, and ΔQ is the quantization error,
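For illustration, a simple one-dimensional DPCM encoding step under assumed neighbour weights could look like this (the weights are example values, not values specified by the patent):

```typescript
// Illustrative DPCM step on a 1-D signal: predict each sample from its
// neighbours with fixed weights and encode the prediction residual.
function dpcmEncode(x: number[], w: number[] = [0.5, 0.3, 0.2]): number[] {
  const residuals: number[] = [];
  for (let n = 0; n < x.length; n++) {
    // P(n) = sum over m of W(n, m) * X(n - m), using available neighbours only
    let p = 0;
    for (let m = 1; m <= w.length; m++) {
      if (n - m >= 0) p += w[m - 1] * x[n - m];
    }
    residuals.push(x[n] - p); // transmit the prediction error instead of x(n)
  }
  return residuals;
}
```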
motion compensation (Motion Compensation):
motion vector: MV, representing a motion vector between two frames,
current frame pixel value: i (x, y), representing the pixel value at coordinates (x, y) on the current frame image,
motion compensated frame pixel values:
I'(x,y)=I(x+MV_x,y+MV_y)
this formula achieves motion compensation by finding, for each position on the current frame, the pixel value at the corresponding motion-vector-displaced position in the reference frame,
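An illustrative implementation of this motion-compensation step for a grayscale frame with a single motion vector might look as follows; the row-major frame layout is an assumption.

```typescript
// Illustrative motion compensation: build the predicted frame by fetching,
// for every pixel, the reference-frame pixel displaced by the motion vector.
function motionCompensate(
  ref: Uint8Array, width: number, height: number,
  mvX: number, mvY: number
): Uint8Array {
  const out = new Uint8Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Clamp so (x + MV_x, y + MV_y) stays inside the reference frame.
      const sx = Math.min(width - 1, Math.max(0, x + mvX));
      const sy = Math.min(height - 1, Math.max(0, y + mvY));
      out[y * width + x] = ref[sy * width + sx]; // I'(x, y) = I(x+MV_x, y+MV_y)
    }
  }
  return out;
}
```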
transform (Transform):
transform coefficients: T(u, v), representing the coefficients of an image transform, typically a Discrete Cosine Transform (DCT),
image block pixel values: I(x, y), representing the pixel values at coordinates (x, y) on the input image,
transformed coefficients:
C(u,v) = 2 * Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} I(x,y) * cos((π/N)*(x+1/2)*u) * cos((π/N)*(y+1/2)*v)
C(u, v) is the transformed frequency domain coefficient, representing the amplitude at position (u, v) in the frequency domain,
I(x, y) is the pixel value of the input image in the spatial domain, representing the intensity at location (x, y),
u and v represent the frequencies in the horizontal and vertical directions of the frequency domain respectively; this formula describes the transformation of an 8 x 8 block (N = 8). In actual image and video compression, the image is typically divided into blocks, each of which is then discrete cosine transformed,
in image compression and video coding, the coefficients of DCT typically reduce the range of representation of values by quantization, for better entropy coding and compression,
(DCT) video decoding algorithm formula:
DCT[i][j] = 2 * Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} cos((π/N)*(k+1/2)*i) * cos((π/N)*(l+1/2)*j) * input[k][l]
where N is the size of the macroblock, i and j are the position indices of the current macroblock, k and l are the indices of the input data, input[k][l] is the value of the input data, and DCT[i][j] is the output value after the DCT transform; the formula converts the data from the time domain to the frequency domain by applying a discrete cosine transform to the input data, thereby reducing data redundancy and improving the compression ratio,
this formula represents the discrete cosine transform calculation process, converts the image block into transform coefficients of the frequency domain,
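A direct, unoptimised implementation of this transform for one N × N block, matching the reconstructed formula above, could be sketched as:

```typescript
// Illustrative 2-D DCT of one N x N block (no per-coefficient normalisation
// factors, matching the formula given above).
function dct2d(input: number[][]): number[][] {
  const N = input.length;
  const out: number[][] = [];
  for (let i = 0; i < N; i++) {
    const row: number[] = [];
    for (let j = 0; j < N; j++) {
      let sum = 0;
      for (let k = 0; k < N; k++) {
        for (let l = 0; l < N; l++) {
          sum += input[k][l]
            * Math.cos((Math.PI / N) * (k + 0.5) * i)
            * Math.cos((Math.PI / N) * (l + 0.5) * j);
        }
      }
      row.push(2 * sum); // DCT[i][j]
    }
    out.push(row);
  }
  return out;
}
```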
entropy Coding (Entropy Coding):
entropy coding algorithms, such as Huffman coding or context-adaptive binary arithmetic coding (CABAC), are used,
an example formula is Shannon's theoretical entropy, the lower bound for entropy coding:
H(X) = -Σ_{i=1}^{n} p_i * log2(p_i),
furthermore, Huffman coding is the fastest entropy coding; its basic principle is to construct a binary tree based on the statistical frequencies, so that the characters with the highest frequencies are represented by the shortest codes and the characters with the lowest frequencies by the longest codes. The basic operation is the process of building the binary tree,
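For illustration, the Shannon entropy of a symbol sequence, the theoretical lower bound that entropy coders such as Huffman coding or CABAC approach, can be computed as follows:

```typescript
// Illustrative Shannon entropy H(X) = -sum_i p_i * log2(p_i) over symbol
// frequencies, in bits per symbol.
function shannonEntropy(symbols: number[]): number {
  const counts = new Map<number, number>();
  for (const s of symbols) counts.set(s, (counts.get(s) ?? 0) + 1);
  let h = 0;
  for (const count of counts.values()) {
    const p = count / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}
```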
the video rendering interface control adopts a matrix algorithm:
the formula is as follows:
the 2 x 2 matrix linear transformation in two dimensions uses the following matrix transformation formula:
[x′]=[a b][x]
[y′]=[c d][y]
where a, b, c and d are the elements of the matrix, x and y are the input coordinates, and x′ and y′ are the output coordinates,
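A small illustration of applying such a 2 × 2 linear transform to a point:

```typescript
// Illustrative 2 x 2 linear transform of a point, as used for scaling or
// rotating the rendering surface: [x'; y'] = [a b; c d] [x; y].
function transform2x2(
  [a, b, c, d]: [number, number, number, number],
  x: number, y: number
): [number, number] {
  return [a * x + b * y, c * x + d * y];
}

// Example: rotating the point (1, 0) by 90 degrees gives approximately (0, 1).
const rotated = transform2x2([0, -1, 1, 0], 1, 0);
```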
video image rendering algorithm:
color mixing formula:
R=S+D*(1-Sa)
this formula is used to calculate the color mix of two pixels: assuming there are two pixels S and D, with S relatively forward along the z-axis (i.e. above) and D relatively backward along the z-axis (i.e. below), the final color value is the color of the upper pixel S plus the color of the lower pixel D multiplied by (1 - Sa), where Sa is the alpha (transparency) of the upper pixel,
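An illustrative per-channel implementation of this blending formula, assuming color components in the 0..1 range and a premultiplied source color, might be:

```typescript
// Illustrative "over" blending of a source pixel S above a destination pixel
// D, per channel: R = S + D * (1 - Sa), with components in the 0..1 range.
interface Rgba { r: number; g: number; b: number; a: number }

function blendOver(s: Rgba, d: Rgba): Rgba {
  return {
    r: s.r + d.r * (1 - s.a),
    g: s.g + d.g * (1 - s.a),
    b: s.b + d.b * (1 - s.a),
    a: s.a + d.a * (1 - s.a),
  };
}
```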
the illumination formula:
L(v)=Ls(v)+Lt(v)+Lb(v)
texture mapping: UV mapping is a texture mapping algorithm that maps texture coordinates onto a two-dimensional plane, and the formula of UV mapping is as follows:
(u,v)=(u',v')×T
where (u, v) is texture coordinates, (u ', v') is vertex coordinates, T is a transformation matrix,
z-buffering algorithm: the Z-buffer algorithm is a depth test algorithm that can effectively deal with the problem of overlap between objects, and the formula of the Z-buffer algorithm is as follows:
Znew=Zold+Znear*(1-alpha)
where Znew is the new depth value, Zold is the old depth value, Znear is the depth value of the object, and alpha is the transparency of the object.
The invention realizes the plug-in-free streaming media player by connecting and coordinating the video display module, the video control module, the code stream processing module and the device control module; it requires no plug-in installation, improves development efficiency, is better compatible with browsers, and greatly reduces the integration burden on developers and the complexity of extending functions.
The above description is only of the preferred embodiment of the present invention, and is not intended to limit the present invention in any other way, but is intended to cover any modifications or equivalent variations according to the technical spirit of the present invention, which fall within the scope of the present invention as defined by the appended claims.

Claims (6)

1. The front-end plug-in-free streaming media player comprises a streaming media player and is characterized in that: it further comprises a data acquisition and audio/video display system, wherein the data acquisition and audio/video display system comprises a data acquisition module, a video display module and a code stream processing module; the streaming media player is in data connection with the data acquisition module through a streaming media service, the data acquisition module is used for collecting the code stream data transmitted by the streaming media player, and the data acquisition module is connected with the code stream processing module; when code stream data are sent to the data acquisition module, the data acquisition module stores the current frame data of the code stream, monitors changes of the code stream data, and then transmits the code stream data to the code stream processing module; after receiving the code stream data from the data acquisition module, the code stream processing module judges whether the data are transmitted for the first time, and if so, creates a codec, acquires the video carrier element in html5 of the video display module, and initializes a media source; the code stream is then transmitted to the media source for encoding and decoding, queued locally and marked in order as n1, with subsequent data following in sequence ([n1, n2, ..., nn]), and the collected data are transmitted to the media source for processing in a first-in first-out manner; the processed video clip is played by setting the src attribute of the video element to a URL of the media source object;
the code stream processing module is connected with the video display module, the video display module is used for rendering and displaying the video stream, and when the decoded audio and video data need to be rendered on a screen, the video display module renders binary data of the audio and video data;
the device control link system comprises a device control module and a device camera, wherein the device control module controls the device camera through a streaming media player, the device control module is connected with the streaming media player and transmits a control signal to the streaming media player through user mouse input, the streaming media player transmits a control instruction to streaming media service, and the streaming media service controls the device camera to move and zoom a visual angle;
the video control system comprises a video control module, the video control module is connected with the streaming media player, the streaming media player sends a control instruction to the streaming media service after being controlled by the video control module, the streaming media player is connected with the video display module, and the streaming media player displays the audio and video data after they are rendered.
2. The front-end plug-in-free streaming media player of claim 1, wherein: after being processed by the code stream processing module and played by the video display module, video clips are discarded and not stored; once the transmission of the stream is manually closed, no further data processing or video playing is carried out.
3. The front-end plug-in-free streaming media player of claim 1, wherein: when the decoded audio and video data are cached audio and video data, the video data are rendered using GPU acceleration technology, and the audio is rendered through the Web Audio API provided by the browser.
4. The front-end plug-in-free streaming media player of claim 1, wherein: when the decoded audio and video data are online audio and video data, the front-end plug-in-free streaming media player has network adaptability, and the code rate can be adjusted according to the network condition.
5. The front-end plug-in-free streaming media player of claim 1, wherein: the video control module comprises control buttons and a display screen, wherein the control buttons comprise a play/pause button, a mute button, a full screen button, a speed control button, an image quality selection button, a dialogue button, a picture grabbing button and a video recording button, and the display screen is used for displaying a progress bar, a time display, and error and loading state prompts.
6. A method for operating a front-end plugin-free streaming media player according to any one of claims 1-5, wherein:
differential pulse code modulation (DPCM) is performed using the difference between the predicted value P(n) and the current pixel X(n),
P(n) = Σ_m W(n, m) * X(n-m)   (m indexes the pixels adjacent to X(n))
in this formula, W(n, m) is a given weight, which varies according to the distance of the pixel from the center pixel; this predicted value P(n) is then used as the basis for DPCM encoding,
in addition, the quantization error generated during decoding can be represented by:
x'(n) = x(n) + ΔQ
where x'(n) is the decoded output, x(n) is the original signal, and ΔQ is the quantization error,
motion compensation (Motion Compensation):
motion vector: MV, representing a motion vector between two frames,
current frame pixel value: i (x, y), representing the pixel value at coordinates (x, y) on the current frame image,
motion compensated frame pixel values:
I'(x,y)=I(x+MV_x,y+MV_y)
this formula achieves motion compensation by finding, for each position on the current frame, the pixel value at the corresponding motion-vector-displaced position in the reference frame,
transform (Transform):
transform coefficients: T(u, v), representing the coefficients of an image transform, typically a Discrete Cosine Transform (DCT),
image block pixel values: I(x, y), representing the pixel values at coordinates (x, y) on the input image,
transformed coefficients:
C(u,v) = 2 * Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} I(x,y) * cos((π/N)*(x+1/2)*u) * cos((π/N)*(y+1/2)*v)
C(u, v) is the transformed frequency domain coefficient, representing the amplitude at position (u, v) in the frequency domain,
I(x, y) is the pixel value of the input image in the spatial domain, representing the intensity at location (x, y),
u and v denote the frequencies in the horizontal and vertical directions of the frequency domain respectively,
in image compression and video coding, the coefficients of the DCT typically undergo quantization to reduce the range of representation of the values,
(DCT) video decoding algorithm formula:
DCT[i][j] = 2 * Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} cos((π/N)*(k+1/2)*i) * cos((π/N)*(l+1/2)*j) * input[k][l]
where N is the size of the macroblock, i and j are the position indices of the current macroblock, k and l are the indices of the input data, input[k][l] is the value of the input data, and DCT[i][j] is the output value after the DCT transform,
this formula represents the discrete cosine transform calculation process, converts the image block into transform coefficients of the frequency domain,
entropy Coding (Entropy Coding):
entropy coding algorithms, such as Huffman coding or context-adaptive binary arithmetic coding (CABAC), are used,
an example formula is Shannon's theoretical entropy, the lower bound for entropy coding:
H(X) = -Σ_{i=1}^{n} p_i * log2(p_i),
the video rendering interface control adopts a matrix algorithm:
the formula is as follows:
the 2 x 2 matrix linear transformation in two dimensions uses the following matrix transformation formula:
[x′]=[a b][x]
[y′]=[c d][y]
where a, b, c and d are the elements of the matrix, x and y are the input coordinates, and x′ and y′ are the output coordinates,
video image rendering algorithm:
color mixing formula:
R=S+D*(1-Sa)
this formula is used to calculate the color mix of two pixels: assuming there are two pixels S and D, with S relatively forward along the z-axis (i.e. above) and D relatively backward along the z-axis (i.e. below), the final color value is the color of the upper pixel S plus the color of the lower pixel D multiplied by (1 - Sa), where Sa is the alpha (transparency) of the upper pixel,
the illumination formula:
L(v)=Ls(v)+Lt(v)+Lb(v)
texture mapping: UV mapping is a texture mapping algorithm, and the formula of UV mapping is as follows:
(u,v)=(u',v')×T
where (u, v) is texture coordinates, (u ', v') is vertex coordinates, T is a transformation matrix,
z-buffering algorithm: the Z-buffer algorithm is a depth test algorithm, and the formula of the Z-buffer algorithm is as follows:
Znew=Zold+Znear*(1-alpha)
where Znew is the new depth value, Zold is the old depth value, Znear is the depth value of the object, and alpha is the transparency of the object.
CN202311540653.4A 2023-11-20 2023-11-20 Front-end plug-in-free streaming media player and working method thereof Pending CN117641010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311540653.4A CN117641010A (en) 2023-11-20 2023-11-20 Front-end plug-in-free streaming media player and working method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311540653.4A CN117641010A (en) 2023-11-20 2023-11-20 Front-end plug-in-free streaming media player and working method thereof

Publications (1)

Publication Number Publication Date
CN117641010A true CN117641010A (en) 2024-03-01

Family

ID=90026204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311540653.4A Pending CN117641010A (en) 2023-11-20 2023-11-20 Front-end plug-in-free streaming media player and working method thereof

Country Status (1)

Country Link
CN (1) CN117641010A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination