CN112714338B - Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112714338B
CN112714338B
Authority
CN
China
Prior art keywords
pixel point
video
array
frame
initial
Prior art date
Legal status
Active
Application number
CN202011558432.6A
Other languages
Chinese (zh)
Other versions
CN112714338A (en)
Inventor
林泽全 (Lin Zequan)
Current Assignee
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd
Priority to CN202011558432.6A
Publication of CN112714338A
Application granted
Publication of CN112714338B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418 … involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 … involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

The invention discloses a video transmission method, a video playing method, a video transmission device, a video playing device, computer equipment and a storage medium, and belongs to the technical field of data processing. The video transmission method compares the pixel point arrays of two adjacent video frames to obtain a pixel point difference array, reducing the data in the pixel point arrays of adjacent video frames; compresses a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array, reducing the number of video frames with identical pixel points; and merges the initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays to obtain an initial video data structure, into which the switching data of the video data is added. This reduces the transmission volume and improves the transmission speed and effect without changing the definition of the video image. The invention thus improves the video transmission speed while ensuring the image quality definition of the video image.

Description

Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a video transmission method, a video playback method, a video transmission device, a video playback device, a computer device, and a storage medium.
Background
At present, video animation generally adopts the GIF format or the javascript language. A GIF video animation is essentially a fixed sequence of pictures: it is large in volume, loses definition after being compressed for transmission, and cannot present high-definition image quality. A javascript video animation splits the moving picture into individual frames, combines javascript with CSS (Cascading Style Sheets), and plays the frames in succession through a timer, so that the user visually perceives an animation; since every frame must still be transmitted, the overall data volume remains large.
Disclosure of Invention
Aiming at the problems of large transmission volume and slow loading in existing video animation, a video transmission and playing method, a video transmission and playing device, computer equipment and a storage medium are provided to improve the video transmission speed while ensuring the image quality definition of the video image.
To achieve the above object, the present invention provides a video transmission method, including:
acquiring pixel point arrays of all video frames in video data frame by frame;
comparing the difference between the pixel point arrays of two adjacent video frames frame by frame according to the sequencing sequence of the video frames to obtain the pixel point difference array of the next video frame;
compressing a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array;
sequentially merging an initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays according to the ordering of the video frames to generate an initial video data structure;
and adding the switching data of the video data into the initial video data structure to form video transmission data.
Optionally, the step of comparing, frame by frame, differences between pixel point arrays of two adjacent video frames according to the sorting order of the video frames to obtain a pixel point difference array of a subsequent video frame includes:
comparing corresponding pixel point elements in the pixel point arrays of two adjacent video frames according to the ordering of the video frames, and, for each pair of corresponding and identical pixel point elements, marking the element belonging to the next video frame with a first identifier;
checking whether consecutive pixel point elements marked with the first identifier exist in the pixel point array of the next video frame;
if so, merging the consecutively marked pixel point elements in the pixel point array of the next video frame to obtain the pixel point difference array of the next video frame;
if not, taking the currently marked pixel point array of the next video frame as the pixel point difference array of the next video frame.
Optionally, compressing the plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array includes:
upon determining that a plurality of adjacent pixel point difference arrays are the same, marking the pixel point difference array of the last such video frame with a second identifier, taking the pixel point difference array carrying the second identifier as the aggregated pixel point difference array, and recording the number of adjacent video frames sharing the same pixel point difference array.
Optionally, the switching data includes the playing time and action data of each video frame;
adding the switching data of the video data to the initial video data structure to form video transmission data, including:
generating play time arrays corresponding to the arrays in the initial video data structure according to the play time of each video frame in the video data;
generating action arrays corresponding to the arrays in the initial video data structure according to the action data of each video frame in the video data;
and adding the playing time array and the action array into the initial video data structure to form video transmission data.
In order to achieve the above object, the present invention further provides a video playing method, including:
receiving video transmission data, wherein the video transmission data consists of an initial video data structure, a playing time array and an action array;
analyzing the initial video data structure in the video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, and generating a frame image according to the pixel point arrays;
and playing the frame image according to the sequencing sequence of each array in the initial video data structure, the playing time array and the action array.
Optionally, the initial video data structure includes an initial frame pixel array, a pixel point difference array carrying a first identifier, and/or an aggregated pixel point difference array carrying a second identifier;
the analyzing the initial video data structure in the video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, and generating a frame image according to the pixel point arrays, including:
when the initial video data structure comprises a pixel point difference array, restoring the pixel point difference array according to the initial frame pixel array and the first identifier to obtain the corresponding pixel point array;
when the initial video data structure comprises an aggregated pixel point difference array, restoring the aggregated pixel point difference array according to the initial frame pixel array and the second identifier to obtain the corresponding pixel point array;
and converting the obtained pixel point array into a frame image.
To achieve the above object, the present invention also provides a video transmission apparatus, comprising:
the acquisition unit is used for acquiring pixel point arrays of all video frames in the video data frame by frame;
the comparison unit is used for comparing the difference between the pixel point arrays of two adjacent video frames frame by frame according to the sequencing sequence of the video frames to obtain the pixel point difference array of the next video frame;
the compression unit is used for compressing a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array;
a merging unit, configured to sequentially merge the initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays according to the ordering of the video frames, so as to generate an initial video data structure;
and the generating unit is used for adding the switching data of the video data into the initial video data structure to form video transmission data.
In order to achieve the above object, the present invention further provides a video playing device, including:
the receiving unit is used for receiving video transmission data, and the video transmission data consists of an initial video data structure, a playing time array and an action array;
the analysis unit is used for analyzing the initial video data structure in the video transmission data frame by frame so as to obtain the pixel point arrays of each array in the initial video data structure and generate a frame image according to the pixel point arrays;
and the playing unit is used for playing the frame image according to the sorting sequence of each array in the initial video data structure, the playing time array and the action array.
To achieve the above object, the present invention also provides a computer device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the method.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above method.
The video transmission and playing methods, the video transmission and playing devices, the computer equipment and the storage medium described above ensure the image quality definition of the video image while improving the video transmission speed. The video transmission method compares the pixel point arrays of two adjacent video frames to obtain a pixel point difference array, reducing the data in the pixel point arrays of adjacent video frames; compresses a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array, reducing the number of video frames with identical pixel points; and merges the initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays to obtain an initial video data structure, into which the switching data of the video data is added, thereby reducing the transmission volume and improving the transmission speed and effect without changing the definition of the video image. The video playing method parses the initial video data structure in the received video transmission data frame by frame to obtain the pixel point array of each array in the initial video data structure, generates frame images consistent with the definition of the original video images from those pixel point arrays, and plays the frame images according to the ordering of the arrays in the initial video data structure, the playing time array and the action array, thereby playing high-quality frame-image video while improving the image loading speed and the user experience.
Drawings
Fig. 1 is a flowchart of an embodiment of a video transmission method according to the present invention;
FIG. 2 is a flow chart of one embodiment of adding switching data of video data to an initial video data structure to obtain video transmission data according to the present invention;
FIG. 3 is a flowchart illustrating a video playing method according to an embodiment of the present invention;
FIG. 4 is a block diagram of an embodiment of a video transmission device according to the present invention;
FIG. 5 is a block diagram of an embodiment of a video playback device according to the present invention;
FIG. 6 is a diagram of the hardware architecture of one embodiment of the computer apparatus of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The video transmission and playing methods, the video transmission and playing devices, the computer equipment and the storage medium described herein ensure the image quality definition of the video image while improving the video transmission speed, and are suitable for business fields such as finance, medical treatment and insurance. The video transmission method compares the pixel point arrays of two adjacent video frames to obtain a pixel point difference array, reducing the data in the pixel point arrays of adjacent video frames; compresses a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array, reducing the number of video frames with identical pixel points; and merges the initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays to obtain an initial video data structure, into which the switching data of the video data is added, thereby reducing the transmission volume and improving the transmission speed and effect without changing the definition of the video image. The video playing method parses the initial video data structure in the received video transmission data frame by frame to obtain the pixel point array of each array in the initial video data structure, generates frame images consistent with the definition of the original video images from those pixel point arrays, and plays the frame images according to the ordering of the arrays in the initial video data structure, the playing time array and the action array, thereby playing high-quality frame-image video while improving the image loading speed and the user experience.
Example one
Referring to fig. 1, a video transmission method of the present embodiment includes:
A1. Acquiring the pixel point array of each video frame in the video data, frame by frame.
Specifically, the pixel data of a video frame can be acquired through the getImageData method of the HTML5 (HyperText Markup Language 5) canvas element, yielding the following data structure:
ImageData{width=imgWidth, height=imgHeight, data=Uint8ClampedArray}.
wherein width represents the pixel range width of the video frame image; height represents the pixel range height of the video frame image; and data represents the pixel point array: if the video frame contains 10 pixel points, the pixel point array contains 10 array elements, and each array element holds an rgba value.
For example: [{(r_i^1, g_i^1, b_i^1, a_i^1), (r_i^2, g_i^2, b_i^2, a_i^2), …, (r_i^y, g_i^y, b_i^y, a_i^y), …, (r_i^n, g_i^n, b_i^n, a_i^n)}], where the subscript i denotes the ordering of the video frames, i = 0, 1, 2, …, n (n a positive integer), and the superscript y denotes the ordering of the pixel point within the frame, y ≤ n.
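The grouping of canvas pixel data into per-pixel rgba elements described above can be sketched as follows (a minimal sketch: the function name toPixelArray and the stand-in data are illustrative, not from the patent; in a browser the ImageData object would come from ctx.getImageData):

```javascript
// Group a flat RGBA byte array, as returned by canvas getImageData().data,
// into per-pixel [r, g, b, a] tuples (the "pixel point array" of the text).
function toPixelArray(imageData) {
  const pixels = [];
  for (let i = 0; i < imageData.data.length; i += 4) {
    pixels.push([
      imageData.data[i],     // r
      imageData.data[i + 1], // g
      imageData.data[i + 2], // b
      imageData.data[i + 3], // a
    ]);
  }
  return pixels;
}

// Hand-built stand-in for getImageData(0, 0, 2, 1) with two pixels:
const frame = {
  width: 2,
  height: 1,
  data: new Uint8ClampedArray([255, 0, 0, 255, 0, 0, 255, 255]),
};
console.log(toPixelArray(frame)); // [[255, 0, 0, 255], [0, 0, 255, 255]]
```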
It is emphasized that, to further ensure the privacy and security of the video data, the video data may also be stored in a node of a blockchain.
A2. Comparing the difference between the pixel point arrays of two adjacent video frames, frame by frame according to the ordering of the video frames, to obtain the pixel point difference array of the next video frame.
In this embodiment, the pixel point difference array is obtained by comparing the pixel point arrays of two adjacent video frames, so as to reduce data in the pixel point arrays of the adjacent video frames.
Further, step A2 may comprise the steps of:
A21. Comparing corresponding pixel point elements in the pixel point arrays of two adjacent video frames according to the ordering of the video frames, and, for each pair of corresponding and identical pixel point elements, marking the element belonging to the next video frame with a first identifier.
A22. Checking whether consecutive pixel point elements marked with the first identifier exist in the pixel point array of the next video frame; if so, executing step A23; if not, executing step A24.
A23. Merging the consecutively marked pixel point elements in the pixel point array of the next video frame to obtain the pixel point difference array of the next video frame.
A24. Taking the currently marked pixel point array of the next video frame as the pixel point difference array of the next video frame.
In this embodiment, the pixel point difference array of the next video frame may consist of a plurality of individual pixel point elements carrying the first identifier together with the unchanged original (differing) pixel point elements of that frame's pixel point array; or it may consist of merged pixel point elements carrying the first identifier together with those original pixel point elements.
By way of example and not limitation, the pixel array of the initial frame (i.e., the first video frame, i = 0) is {(r_0^1, g_0^1, b_0^1, a_0^1), (r_0^2, g_0^2, b_0^2, a_0^2), …, (r_0^y, g_0^y, b_0^y, a_0^y), …, (r_0^n, g_0^n, b_0^n, a_0^n)}, the pixel array of the second frame (i = 1) is {(r_1^1, g_1^1, b_1^1, a_1^1), (r_1^2, g_1^2, b_1^2, a_1^2), …, (r_1^y, g_1^y, b_1^y, a_1^y), …, (r_1^n, g_1^n, b_1^n, a_1^n)}, and the first identifier is (0):
when the rgba value of the first pixel point of the second frame is the same as that of the first pixel point of the initial frame, the first pixel point of the second frame may be recorded as (r_1^1, g_1^1, b_1^1, a_1^1)(0);
when the rgba values of the first to y-th pixel points of the initial frame and the second frame are all the same, the first to y-th pixel points of the second frame may be merged and recorded as ((r_1^1, g_1^1, b_1^1, a_1^1)-(r_1^y, g_1^y, b_1^y, a_1^y))(0);
when the rgba value of the n-th pixel point of the second frame differs from that of the n-th pixel point of the initial frame, the n-th pixel point of the second frame is recorded as (r_1^n, g_1^n, b_1^n, a_1^n).
According to the above rules, the data structure of the initial frame and the second frame is:
[{(r_0^1, g_0^1, b_0^1, a_0^1), (r_0^2, g_0^2, b_0^2, a_0^2), …, (r_0^y, g_0^y, b_0^y, a_0^y), …, (r_0^n, g_0^n, b_0^n, a_0^n)}, {((r_1^1, g_1^1, b_1^1, a_1^1)-(r_1^y, g_1^y, b_1^y, a_1^y))(0), …, (r_1^n, g_1^n, b_1^n, a_1^n)}]
and so on, giving the pixel point difference array of each video frame.
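A minimal sketch of the comparison and merging in steps A21–A24 (the concrete diff encoding — objects with start/end indices for runs marked by the first identifier — and the function name diffFrame are our assumptions; the patent does not prescribe a representation):

```javascript
// Steps A21–A24: pixels of the later frame that equal the earlier frame's
// pixels are marked with the first identifier (0); consecutive marked pixels
// are merged into a single range entry. Changed pixels are kept verbatim.
function diffFrame(prev, next) {
  // A21: mark each pixel of the later frame that equals the earlier one's.
  const marked = next.map((px, i) => ({
    px,
    same: JSON.stringify(px) === JSON.stringify(prev[i]),
  }));
  // A22–A23: merge runs of marked pixels into one range element.
  const diff = [];
  let run = null;
  for (let i = 0; i < marked.length; i++) {
    if (marked[i].same) {
      if (run) run.end = i;
      else run = { start: i, end: i, id: 0 }; // id 0 = first identifier
    } else {
      if (run) { diff.push(run); run = null; }
      diff.push(marked[i].px); // changed pixel kept verbatim
    }
  }
  if (run) diff.push(run);
  return diff;
}

const f0 = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]];
const f1 = [[1, 1, 1, 1], [2, 2, 2, 2], [9, 9, 9, 9]]; // only last pixel changed
console.log(diffFrame(f0, f1)); // [{ start: 0, end: 1, id: 0 }, [9, 9, 9, 9]]
```

The unchanged first two pixels collapse into a single range entry, which is the volume reduction the method relies on.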
A3. Compressing a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array.
In this embodiment, compressing adjacent and identical pixel point difference arrays into one aggregated pixel point difference array reduces the number of video frames with identical pixel points that must be transmitted.
Further, step A3 may comprise: upon determining that a plurality of adjacent pixel point difference arrays are the same, marking the pixel point difference array of the last such video frame with a second identifier, taking the pixel point difference array carrying the second identifier as the aggregated pixel point difference array, and recording the number of adjacent video frames sharing the same pixel point difference array.
By way of example and not limitation, the video data includes 10 frames of images; the pixel array of the initial frame (i.e., the first video frame, i = 0) is {(r_0^1, g_0^1, b_0^1, a_0^1), (r_0^2, g_0^2, b_0^2, a_0^2), …, (r_0^y, g_0^y, b_0^y, a_0^y), …, (r_0^n, g_0^n, b_0^n, a_0^n)}; the pixel point difference array of the second frame (i = 1) is {((r_1^1, g_1^1, b_1^1, a_1^1)-(r_1^y, g_1^y, b_1^y, a_1^y))(0), …, (r_1^n, g_1^n, b_1^n, a_1^n)}; the pixel array of the tenth frame (i = 9) is {(r_9^1, g_9^1, b_9^1, a_9^1), (r_9^2, g_9^2, b_9^2, a_9^2), …, (r_9^y, g_9^y, b_9^y, a_9^y), …, (r_9^n, g_9^n, b_9^n, a_9^n)}; and the second identifier is {0}:
when the pixel point difference arrays of the second to fourth frames are the same, the aggregated pixel point difference array of the second to fourth frames may be recorded as {{rgba1}-{((r_3^1, g_3^1, b_3^1, a_3^1)-(r_3^y, g_3^y, b_3^y, a_3^y))(0), …, (r_3^n, g_3^n, b_3^n, a_3^n)}{0}}; wherein {rgba1} denotes the pixel point difference array of the second frame and, in general, {rgbai} denotes the pixel point difference array of the (i+1)-th frame.
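The aggregation in step A3 can be sketched as follows (again a hypothetical encoding of ours: the second identifier is written as the string '{0}' and the run length is stored in a count field):

```javascript
// Step A3: collapse runs of adjacent identical diff arrays. Runs longer
// than one frame become aggregated entries carrying the second identifier
// and the number of adjacent frames they replace.
function aggregate(diffArrays) {
  const runs = [];
  for (const d of diffArrays) {
    const last = runs[runs.length - 1];
    if (last && JSON.stringify(last.diff) === JSON.stringify(d)) {
      last.count += 1; // one more adjacent frame with the same diff
    } else {
      runs.push({ diff: d, count: 1 });
    }
  }
  return runs.map((r) =>
    r.count > 1 ? { diff: r.diff, id: "{0}", count: r.count } : r.diff
  );
}

// Frames 2–4 share one diff array, frame 5 differs:
const diffs = [["a"], ["a"], ["a"], ["b"]];
console.log(aggregate(diffs)); // [{ diff: ['a'], id: '{0}', count: 3 }, ['b']]
```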
A4. Sequentially merging the initial frame pixel array of the video data with the pixel point difference arrays and/or aggregated pixel point difference arrays according to the ordering of the video frames to generate an initial video data structure.
In this embodiment, when no two adjacent video frames in the video data share the same pixel point difference array, the initial video data structure consists of the initial frame pixel array and pixel point difference arrays; when at least two adjacent video frames share the same pixel point difference array, it consists of the initial frame pixel array, pixel point difference arrays and aggregated pixel point difference arrays; and when all adjacent video frames share the same pixel point difference array, it consists of the initial frame pixel array and an aggregated pixel point difference array.
For example: the video data comprises 10 video frames, the pixel point difference arrays of the second to fourth frames are the same, and the pixel point difference arrays of the fifth to tenth frames are mutually different; the corresponding initial video data structure is:
[{(r_0^1, g_0^1, b_0^1, a_0^1), (r_0^2, g_0^2, b_0^2, a_0^2), …, (r_0^y, g_0^y, b_0^y, a_0^y), …, (r_0^n, g_0^n, b_0^n, a_0^n)}, {{rgba1}-{((r_3^1, g_3^1, b_3^1, a_3^1)-(r_3^y, g_3^y, b_3^y, a_3^y))(0), …, (r_3^n, g_3^n, b_3^n, a_3^n)}{0}}, {((r_4^1, g_4^1, b_4^1, a_4^1)-(r_4^y, g_4^y, b_4^y, a_4^y))(0), …, (r_4^n, g_4^n, b_4^n, a_4^n)}, …, {(r_9^1, g_9^1, b_9^1, a_9^1), (r_9^2, g_9^2, b_9^2, a_9^2), …, (r_9^y, g_9^y, b_9^y, a_9^y), …, (r_9^n, g_9^n, b_9^n, a_9^n)}]
wherein {(r_0^1, g_0^1, b_0^1, a_0^1), …, (r_0^n, g_0^n, b_0^n, a_0^n)} is the initial frame pixel array; {{rgba1}-{((r_3^1, g_3^1, b_3^1, a_3^1)-(r_3^y, g_3^y, b_3^y, a_3^y))(0), …, (r_3^n, g_3^n, b_3^n, a_3^n)}{0}} is the aggregated pixel point difference array of the identical second to fourth frames; and {(r_9^1, g_9^1, b_9^1, a_9^1), …, (r_9^n, g_9^n, b_9^n, a_9^n)} is the pixel point difference array of the tenth frame.
A5. Adding the switching data of the video data into the initial video data structure to form the video transmission data.
The switching data comprises the playing time and action data of each video frame.
Further, the step A5 shown in fig. 2 may include the following steps:
A51. Generating the playing time array corresponding to the arrays in the initial video data structure according to the playing time of each video frame in the video data.
By way of example, 20 frames of images are played in 2 s: P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15 | P16 | P17 | P18 | P19 | P20. Adjacent video frame images may be identical; for example, frames P1 to P3 are identical, frames P4 to P8 are identical, frames P9 to P13 are identical, and frames P14 to P20 are identical. The existing video transmission mode is frame-by-frame transmission, so the overall transmitted video data is large. With the approach of this embodiment, only the 4 frames P3, P8, P13 and P20 need be output, together with the playing time array {time: (0.3, 0.5, 0.5, 0.7)} indicating how long each frame is displayed, achieving the effect of playing 2 seconds of video image while transmitting only 4 frame images.
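A sketch of this run-collapsing idea (the helper name buildPlayTimes and the fixed per-frame interval are assumptions; the patent only specifies the resulting playing time array):

```javascript
// Collapse runs of identical frames into one kept frame plus a display
// duration, assuming a fixed per-frame interval (here 0.1 s per frame).
function buildPlayTimes(frames, frameInterval) {
  const kept = [];
  const times = [];
  let run = 1;
  for (let i = 1; i <= frames.length; i++) {
    if (i < frames.length && frames[i] === frames[i - 1]) {
      run += 1; // extend the current run of identical frames
      continue;
    }
    kept.push(frames[i - 1]); // last frame of the run is transmitted
    times.push(Number((run * frameInterval).toFixed(1)));
    run = 1;
  }
  return { kept, times };
}

// 20 frames over 2 s: P1–P3, P4–P8, P9–P13, P14–P20 identical.
const frames = [
  ...Array(3).fill("A"), ...Array(5).fill("B"),
  ...Array(5).fill("C"), ...Array(7).fill("D"),
];
console.log(buildPlayTimes(frames, 0.1));
// → kept: ['A','B','C','D'], times: [0.3, 0.5, 0.5, 0.7]
```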
A52. Generating the action array corresponding to the arrays in the initial video data structure according to the action data of each video frame in the video data.
By way of example and not limitation, the action array may be {animate: (default, fadeIn, rotate, …, slide)}, where default denotes a directly occurring switch; fadeIn denotes a fade-in switching action; rotate denotes a rotating switching action; and slide denotes a sliding switching action. Each parameter in the action array corresponds to one frame image.
A53. Adding the playing time array and the action array into the initial video data structure to form the video transmission data.
By way of example and not limitation, adding the action data animate and the playing time of each frame switch to the initial video data structure yields:
{data: [{(r_0^1, g_0^1, b_0^1, a_0^1), (r_0^2, g_0^2, b_0^2, a_0^2), …, (r_0^y, g_0^y, b_0^y, a_0^y), …, (r_0^n, g_0^n, b_0^n, a_0^n)}, {{rgba1}-{((r_3^1, g_3^1, b_3^1, a_3^1)-(r_3^y, g_3^y, b_3^y, a_3^y))(0), …, (r_3^n, g_3^n, b_3^n, a_3^n)}{0}}, {((r_4^1, g_4^1, b_4^1, a_4^1)-(r_4^y, g_4^y, b_4^y, a_4^y))(0), …, (r_4^n, g_4^n, b_4^n, a_4^n)}, …, {(r_9^1, g_9^1, b_9^1, a_9^1), …, (r_9^n, g_9^n, b_9^n, a_9^n)}], action: {time: (0.3, 0.5, 0.5, …, 0.1), animate: (default, fadeIn, default, …, default)}}
wherein the data field holds the initial video data structure; action: {time: …, animate: …} is the switching data structure; time: (0.3, 0.5, 0.5, …, 0.1) is the playing time array; and animate: (default, fadeIn, default, …, default) is the action array.
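Assembling the final transmission structure from its parts might look like the following (the field names data, action, time and animate follow the structure shown in the example; the helper function itself is hypothetical):

```javascript
// Step A53: wrap the initial video data structure together with the
// play-time array and the action array into one transmission object.
function buildTransmissionData(initialStructure, times, animations) {
  return {
    data: initialStructure,                    // initial video data structure
    action: { time: times, animate: animations }, // switching data structure
  };
}

const packet = buildTransmissionData(
  [["initial frame pixels"], ["diff 1"]], // stand-in arrays
  [0.3, 0.5],
  ["default", "fadeIn"]
);
console.log(packet.action.animate[1]); // 'fadeIn'
```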
The video transmission method of the embodiment is mainly used for transmitting the webpage animation video data.
In this embodiment, the video transmission method compares the pixel point arrays of two adjacent video frames to obtain a pixel point difference array, so as to reduce data in the pixel point arrays of the adjacent video frames; compressing a plurality of adjacent and same pixel point difference arrays into a polymerization pixel point difference array so as to reduce video frames with the same pixel points; the initial frame pixel array of the video data and the pixel point difference array and/or the aggregation pixel point difference array are combined to obtain an initial video data structure, and switching data of the video data are added into the initial video data structure, so that the purposes of reducing the transmission volume and improving the transmission speed and the transmission effect on the premise of not changing the definition of a video image are achieved.
The video pictures can be greatly compressed by the video transmission method, and high-definition and high-quality video image presentation is realized. Meanwhile, the transmission volume of the video data is reduced, the webpage animation pictures can be quickly responded and loaded under the condition of poor network bandwidth, and the user experience effect is improved.
Example two
Referring to fig. 3, a video playing method of the present embodiment includes the following steps:
B1. Receiving video transmission data, wherein the video transmission data consists of an initial video data structure, a playing time array and an action array.
For example, receiving video transmission data is:
{data:[{(r01,g01,b01,a01),(r02,g02,b02,a02),…,(r0y,g0y,b0y,a0y),…,(r0n,g0n,b0n,a0n)},{{rgba1}-{((r31,g31,b31,a31)-(r3y,g3y,b3y,a3y))(0),…,(r3n,g3n,b3n,a3n)}{0}},{((r41,g41,b41,a41)-(r4y,g4y,b4y,a4y))(0),…,(r4n,g4n,b4n,a4n)},…,{(r91,g91,b91,a91),(r92,g92,b92,a92),…,(r9y,g9y,b9y,a9y),…,(r9n,g9n,b9n,a9n)}],action:{time:(0.3,0.5,0.5,…,0,1),animate:(default,fadeIn,default,…,default)}}.
wherein data:[{(r01,g01,b01,a01),(r02,g02,b02,a02),…,(r0y,g0y,b0y,a0y),…,(r0n,g0n,b0n,a0n)},{{rgba1}-{((r31,g31,b31,a31)-(r3y,g3y,b3y,a3y))(0),…,(r3n,g3n,b3n,a3n)}{0}},{((r41,g41,b41,a41)-(r4y,g4y,b4y,a4y))(0),…,(r4n,g4n,b4n,a4n)},…,{(r91,g91,b91,a91),(r92,g92,b92,a92),…,(r9y,g9y,b9y,a9y),…,(r9n,g9n,b9n,a9n)}] is the initial video data structure; time:(0.3,0.5,0.5,…,0,1) is the playing time array; and animate:(default,fadeIn,default,…,default) is the action array.
B2. Analyzing the initial video data structure in the video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, and generating a frame image according to the pixel point arrays.
The initial video data structure comprises an initial frame pixel array, a pixel point difference array carrying a first identifier and/or a polymerization pixel point difference array carrying a second identifier.
In this embodiment, the frame pixel array corresponding to the pixel point difference array may be restored based on the initial frame pixel array and the first identifier; the frame pixel array corresponding to the aggregated pixel point difference array may be restored based on the initial frame pixel array and the second identifier.
For example, when the aggregated pixel point difference array is {{rgba1}-{((r31,g31,b31,a31)-(r3y,g3y,b3y,a3y))(0),…,(r3n,g3n,b3n,a3n)}{0}}, the restored fourth frame pixel array is: {(r01,g01,b01,a01),(r02,g02,b02,a02),…,(r0y,g0y,b0y,a0y),…,(r0n,g0n,b0n,a0n)}.
Further, step B2 may comprise:
when the initial video data structure comprises a pixel point difference array, restoring the pixel point difference array according to the initial frame pixel array and the first identification to obtain the corresponding pixel point array, and converting the obtained pixel point array into a frame image.
For example: the initial frame pixel array is { (r) 0 1,g 0 1,b 0 1,a 0 1),(r 0 2,g 0 2,b 0 2,a 0 2),…,(r 0 y,g 0 y,b 0 y,a 0 y),…,(r 0 n,g 0 n,b 0 n,a 0 n) and the pixel point difference array is { ((r) 1 1,g 1 1,b 1 1,a 1 1)-(r 1 y,g 1 y,b 1 y,a 1 y))(0),…,(r 1 n,g 1 n,b 1 n,a 1 n), the restored second frame pixel array is: { (r) 0 1,g 0 1,b 0 1,a 0 1),(r 0 2,g 0 2,b 0 2,a 0 2),…,(r 0 y,g 0 y,b 0 y,a 0 y),…,(r 1 n,g 1 n,b 1 n,a 1 n)}。
Further, step B2 may comprise: when the initial video data structure comprises a polymerized pixel point difference array, restoring the polymerized pixel point difference array according to the initial frame pixel array and the second identification to obtain a corresponding pixel point array, and converting the obtained pixel point array into a frame image.
For example: the initial frame pixel array is { (r) 0 1,g 0 1,b 0 1,a 0 1),(r 0 2,g 0 2,b 0 2,a 0 2),…,(r 0 y,g 0 y,b 0 y,a 0 y),…,(r 0 n,g 0 n,b 0 n,a 0 n), and the aggregation pixel point difference array is { { rgba1} - { ((r) 3 1,g 3 1,b 3 1,a 3 1)-(r 3 y,g 3 y,b 3 y,a 3 y))(0),…,(r 3 n,g 3 n,b 3 n,a 3 n) } {0}, the restored fourth frame pixel array is: { (r) 0 1,g 0 1,b 0 1,a 0 1),(r 0 2,g 0 2,b 0 2,a 0 2),…,(r 0 y,g 0 y,b 0 y,a 0 y),…,(r 0 n,g 0 n,b 0 n,a 0 n)}。
B3. And playing the frame image according to the sequencing sequence of each array in the initial video data structure, the playing time array and the action array.
Further, the elements in the play time array are associated with corresponding arrays in the initial video data structure, and the action arrays are associated with corresponding arrays in the initial video data structure. The step B3 comprises the following steps: and playing the frame images corresponding to the corresponding array according to the playing time in the playing time array, and controlling the display mode of the frame images corresponding to the corresponding array according to the action data in the action array.
For example, suppose 20 frames of images are played in 2 s, so that the video frames played within the 2 s are: P1|P2|P3|P4|P5|P6|P7|P8|P9|P10|P11|P12|P13|P14|P15|P16|P17|P18|P19|P20. Adjacent video frame images may be identical, for example: frames P1 to P3 are identical, frames P4 to P8 are identical, frames P9 to P13 are identical, and frames P14 to P20 are identical. The existing video transmission mode transmits frame by frame, so the overall transmitted video data is large. With the mode of this embodiment, only the 4 frame images P3, P8, P13 and P20 need to be output, together with a playing time array such as {time:(0.3, 0.5, 0.5, 0.7)} in which each entry represents how long the corresponding frame is played, thereby achieving the effect of playing 2 seconds of video images while transmitting only 4 frame images.
In practical application, the playing time of each frame of image can be obtained by analyzing the time field, and a setInterval timer can be used to control playing of the corresponding frame image according to the obtained playing time. The action data animate can be implemented by executing, with the animate() method, the video animation of the set CSS attributes, so as to control the display switching manner of the frame image corresponding to the corresponding array.
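The timing logic described above can be sketched by converting the play time array into cumulative delays. Here setTimeout stands in for the setInterval timer, and showFrame is a hypothetical rendering callback, not part of the patent.

```javascript
// Turn a play time array (seconds per frame) into cumulative start
// offsets in milliseconds: frame i should appear after starts[i] ms.
function cumulativeStartTimes(timeArray) {
  const starts = [];
  let total = 0;
  for (const t of timeArray) {
    starts.push(total);
    total += t * 1000;
  }
  return starts;
}

// Browser-only sketch: schedule each frame and pass along its
// switching action (e.g. "fadeIn") so the renderer can apply it,
// for instance as a CSS animation class.
function playFrames(frames, timeArray, animateArray, showFrame) {
  cumulativeStartTimes(timeArray).forEach((start, i) => {
    setTimeout(() => showFrame(frames[i], animateArray[i]), start);
  });
}
```

With the example arrays time:(0.3, 0.5, 0.5, 0.7), the four frames are shown at 0 ms, 300 ms, 800 ms and 1300 ms, covering the full 2 s of playback.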
In this embodiment, the video playing method analyzes the initial video data structure in the received video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, so as to generate a frame image consistent with the definition of the original video image according to the pixel point arrays, and plays the frame image according to the sorting order of each array in the initial video data structure, the play time array and the action array, thereby achieving the purpose of playing a frame image video with high quality, and improving the loading speed of the image and the user experience.
EXAMPLE III
Referring to fig. 4, a video transmission apparatus 1 of the present embodiment includes: an acquisition unit 11, a comparison unit 12, a compression unit 13, a merging unit 14 and a generation unit 15.
The obtaining unit 11 is configured to obtain pixel arrays of each video frame in the video data frame by frame.
Specifically, the pixel data of the video frame can be obtained by the getImageData method of HTML5 (a language description way of constructing Web contents) canvas element tags.
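A minimal sketch of this extraction, assuming a browser environment with canvas support; the helper names toPixelTuples and framePixelArray are illustrative, not from the patent.

```javascript
// Convert the raw RGBA byte stream returned by getImageData().data
// into an array of (r, g, b, a) pixel tuples, one per pixel.
function toPixelTuples(data) {
  const pixels = [];
  for (let i = 0; i < data.length; i += 4) {
    pixels.push([data[i], data[i + 1], data[i + 2], data[i + 3]]);
  }
  return pixels;
}

// Browser-only part: draw the current video frame onto an offscreen
// canvas and read its pixel data back with getImageData.
function framePixelArray(video, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0, width, height);
  return toPixelTuples(ctx.getImageData(0, 0, width, height).data);
}
```

getImageData returns four bytes (red, green, blue, alpha) per pixel in row-major order, which maps directly onto the pixel point arrays used throughout this description.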
It is emphasized that, to further ensure the privacy and security of the video data, the video data may also be stored in a node of a blockchain.
And the comparison unit 12 is configured to compare differences between pixel point arrays of two adjacent video frames frame by frame according to the sorting order of the video frames to obtain a pixel point difference array of a subsequent video frame.
In this embodiment, the pixel point difference array is obtained by comparing the pixel point arrays of two adjacent video frames, so as to reduce data in the pixel point arrays of the adjacent video frames.
Further, the comparing unit 12 is configured to compare two corresponding pixel elements in the pixel arrays of two adjacent video frames according to the sorting order of the video frames, and identify the pixel element in a subsequent video frame of the two corresponding and same pixel elements in the pixel arrays of the two video frames by using a first identifier.
The comparing unit 12 is further configured to identify whether the pixel element continuously identified with the first identifier exists in the pixel array of the subsequent video frame; if yes, the comparing unit 12 merges the pixel point elements continuously identified with the first identifier in the pixel point array of the next video frame to obtain a pixel point difference array of the next video frame; if not, the comparing unit 12 uses the pixel point array of the current identified next video frame as the pixel point difference array of the next video frame.
In this embodiment, the pixel difference array of the subsequent video frame may be composed of a plurality of pixel elements carrying the first identifier and the original pixel elements in the pixel array of the subsequent video frame; the pixel point difference array of the next video frame can also be composed of the merged pixel point element carrying the first identifier and the original pixel point element in the pixel point array of the next video frame.
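The comparison and run-merging performed by the comparing unit 12 can be sketched as follows. The marker object {same: n} standing in for the first identifier is an assumed encoding, since the patent does not specify one.

```javascript
// Compare two frames' pixel tuple arrays and produce a pixel point
// difference array: runs of identical pixels are merged into a
// single marker {same: count} (standing in for the first
// identifier), while changed pixels are kept verbatim from the
// later frame.
function pixelDiffArray(prev, next) {
  const diff = [];
  let run = 0; // length of the current run of identical pixels
  for (let i = 0; i < next.length; i++) {
    const same = prev[i] && prev[i].every((v, k) => v === next[i][k]);
    if (same) {
      run++;
    } else {
      if (run > 0) { diff.push({ same: run }); run = 0; }
      diff.push(next[i]); // original pixel element of the later frame
    }
  }
  if (run > 0) diff.push({ same: run });
  return diff;
}
```

When most pixels of adjacent frames agree, the difference array collapses long runs into single markers, which is where the volume reduction comes from.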
The compressing unit 13 is configured to compress a plurality of adjacent and identical pixel point difference arrays into an aggregated pixel point difference array.
In this embodiment, the video frames with the same pixels are reduced by compressing a plurality of adjacent and identical pixel point difference arrays into one aggregated pixel point difference array.
Further, when a plurality of adjacent pixel point difference arrays are the same, the compression unit 13 identifies the pixel point difference array of the last video frame by using a second identifier, takes the pixel point difference array carrying the second identifier as a aggregated pixel point difference array, and records the number of the adjacent video frames with the same pixel point difference array.
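The aggregation performed by the compression unit can be sketched as follows, assuming that difference arrays are compared by deep equality and using a count object {repeat: n} in place of the second identifier.

```javascript
// Collapse runs of adjacent, identical pixel point difference
// arrays into a single aggregated entry {repeat: n, diff: d}:
// {repeat: n} plays the role of the "second identifier" and records
// how many adjacent frames shared the same difference array.
function aggregateDiffArrays(diffArrays) {
  const out = [];
  for (const diff of diffArrays) {
    const last = out[out.length - 1];
    if (last && JSON.stringify(last.diff) === JSON.stringify(diff)) {
      last.repeat++; // same diff as the previous frame: just count it
    } else {
      out.push({ repeat: 1, diff });
    }
  }
  return out;
}
```

JSON.stringify is used here as a simple deep-equality check; a production implementation would likely compare the arrays element by element instead.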
A merging unit 14, configured to sequentially merge the initial frame pixel array of the video data with the pixel difference array and/or the aggregate pixel difference array according to the sorting order of the video frames, so as to generate an initial video data structure.
In this embodiment, when the pixel point difference arrays of adjacent video frames in the video data are not the same, the initial video data structure is composed of an initial frame pixel array and a pixel point difference array; when the pixel point difference arrays of at least two adjacent video frames in the video data are the same, the initial video data structure is composed of an initial frame pixel array, a pixel point difference array and a polymerization pixel point difference array; when the pixel point difference arrays of all adjacent video frames in the video data are the same, the initial video data structure is composed of an initial frame pixel array and a polymerization pixel point difference array.
A generating unit 15, configured to add switching data of the video data to the initial video data structure to form video transmission data.
Wherein the handover data comprises: the playing time and motion data of each video frame.
Further, the generating unit 15 is configured to generate a play time array corresponding to each array in the initial video data structure according to the play time of each video frame in the video data; generate an action array corresponding to each array in the initial video data structure according to the action data of each video frame in the video data; and add the play time array and the action array to the initial video data structure to form video transmission data.
In this embodiment, the video transmission device 1 compares the pixel point arrays of two adjacent video frames by the comparison unit 12 to obtain a pixel point difference array, so as to reduce data in the pixel point arrays of the adjacent video frames; compressing a plurality of adjacent and same pixel point difference arrays into a polymeric pixel point difference array through a compression unit 13 so as to reduce video frames with the same pixel points; the merging unit 14 merges the initial frame pixel array of the video data with the pixel point difference array and/or the aggregate pixel point difference array to obtain an initial video data structure, and the generating unit 15 is adopted to add the switching data of the video data into the initial video data structure, so that the purposes of reducing the transmission volume and improving the transmission speed and effect on the premise of not changing the definition of the video image are achieved.
Example four
Referring to fig. 5, a video playing apparatus 2 of the present embodiment includes: a receiving unit 21, a parsing unit 22 and a playing unit 23.
The receiving unit 21 is configured to receive video transmission data, where the video transmission data includes an initial video data structure, a play time array, and an action array.
An analyzing unit 22, configured to analyze the initial video data structure in the video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, and generate a frame image according to the pixel point arrays.
The initial video data structure comprises an initial frame pixel array, a pixel point difference array carrying a first identification and/or an aggregation pixel point difference array carrying a second identification.
In this embodiment, the frame pixel array corresponding to the pixel point difference array may be restored based on the initial frame pixel array and the first identifier; the frame pixel array corresponding to the aggregated pixel point difference array may be restored based on the initial frame pixel array and the second identifier.
And the playing unit 23 is configured to play the frame image according to the sorting order of the arrays in the initial video data structure, the playing time array, and the action array.
Further, the elements in the play time array are associated with corresponding arrays in the initial video data structure, and the action arrays are associated with corresponding arrays in the initial video data structure. The playing unit 23 can play the frame image corresponding to the corresponding array according to the playing time in the playing time array, and control the display mode of the frame image corresponding to the corresponding array according to the action data in the action array.
In this embodiment, the video playing apparatus 2 parses the initial video data structure in the video transmission data sent by the video transmission apparatus 1 frame by frame through the receiving unit 21 to obtain the pixel point arrays of each array in the initial video data structure, so that the parsing unit 22 generates the frame images with the definition consistent with that of the original video images according to the pixel point arrays, and plays the frame images according to the sorting order of each array in the initial video data structure, the playing time array and the action array, thereby achieving the purpose of playing the frame image video with high quality, and simultaneously improving the loading speed of the images and the user experience.
EXAMPLE five
In order to achieve the above object, the present invention further provides a computer device 3. The computer device 3 may comprise a plurality of computer devices 3, and the components of the video transmission apparatus 1 of the third embodiment may be distributed over different computer devices 3. The computer device 3 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server or a cabinet server (including an independent server or a server cluster formed by a plurality of servers) that executes a program, or the like. The computer device 3 of the present embodiment includes at least, but is not limited to: a memory 31, a processor 33, a network interface 32, and the video transmission apparatus 1, which can be communicatively connected to each other through a system bus (refer to fig. 6). It is noted that fig. 6 only shows the computer device 3 with components, but it is understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead.
In this embodiment, the memory 31 includes at least one type of computer-readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 31 may be an internal storage unit of the computer device 3, such as a hard disk or a memory of the computer device 3. In other embodiments, the memory 31 may also be an external storage device of the computer device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device 3. Of course, the memory 31 may also comprise both an internal storage unit of the computer device 3 and an external storage device thereof. In this embodiment, the memory 31 is generally used for storing an operating system and various application software installed in the computer device 3, such as a program code of the video transmission method of the first embodiment. Further, the memory 31 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 33 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 33 is typically arranged to control the overall operation of the computer device 3, such as to perform control and processing related to data interaction or communication with said computer device 3. In this embodiment, the processor 33 is configured to operate the program codes stored in the memory 31 or process data, for example, operate the video transmission apparatus 1.
The network interface 32 may comprise a wireless network interface or a wired network interface, and the network interface 32 is typically used to establish a communication connection between the computer device 3 and other computer devices 3. For example, the network interface 32 is used to connect the computer device 3 to an external terminal through a network, and to establish a data transmission channel and a communication connection between the computer device 3 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, the Global System for Mobile communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, and the like.
In this embodiment, the video transmission apparatus 1 stored in the memory 31 may be further divided into one or more program modules, and the one or more program modules are stored in the memory 31 and executed by one or more processors (in this embodiment, the processor 33) to complete the present invention.
EXAMPLE six
To achieve the above objects, the present invention also provides a computer-readable storage medium including a plurality of storage media such as a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor 33 implements corresponding functions. The computer-readable storage medium of the present embodiment is used for storing the video transmission apparatus 1, and when being executed by the processor 33, the computer-readable storage medium implements the video transmission method of the first embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a string of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, which is used for verifying the validity (anti-counterfeiting) of the information and generating a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes performed by the present invention or directly or indirectly applied to other related technical fields are also included in the scope of the present invention.

Claims (8)

1. A video transmission method, comprising:
acquiring pixel point arrays of all video frames in video data frame by frame;
comparing the difference between the pixel point arrays of two adjacent video frames frame by frame according to the sequencing sequence of the video frames to obtain the pixel point difference array of the next video frame;
compressing a plurality of adjacent and same pixel point difference arrays into a polymerization pixel point difference array;
sequentially merging an initial frame pixel array of the video data with a pixel point difference array and/or a polymerization pixel point difference array according to the sequencing sequence of the video frames to generate an initial video data structure;
adding switching data of the video data to the initial video data structure to form video transmission data, wherein the switching data comprises: the playing time and the motion data of each video frame;
adding the switching data of the video data to the initial video data structure to form video transmission data, comprising: generating play time arrays corresponding to the arrays in the initial video data structure according to the play time of each video frame in the video data; generating action arrays corresponding to the arrays in the initial video data structure according to the action data of each video frame in the video data; and adding the playing time array and the action array into the initial video data structure to form video transmission data.
2. The video transmission method according to claim 1, wherein said comparing the difference between the pixel arrays of two adjacent video frames frame by frame according to the sorting order of the video frames to obtain the pixel difference array of the next video frame comprises:
comparing two corresponding pixel point elements in the pixel point arrays of two adjacent video frames according to the sequencing sequence of the video frames, and identifying the pixel point element in the next video frame in the two corresponding and same pixel point elements in the pixel point arrays of the two video frames by adopting a first identifier;
identifying whether the pixel point elements continuously marked with the first mark exist in the pixel point array of the next video frame;
if yes, combining the pixel point elements continuously marked with the first mark in the pixel point array of the next video frame to obtain a pixel point difference array of the next video frame;
and if not, taking the pixel point array of the current identified next video frame as the pixel point difference array of the next video frame.
3. The method of claim 1, wherein compressing the same plurality of pixel difference arrays into an aggregate pixel difference array comprises:
determining that a plurality of adjacent pixel point difference arrays are the same, adopting a second identification to identify the pixel point difference array of the last video frame, taking the pixel point difference array carrying the second identification as a polymerization pixel point difference array, and recording the number of the adjacent video frames with the same pixel point difference array.
4. A video playback method, comprising:
acquiring pixel point arrays of all video frames in video data frame by frame;
comparing the difference between the pixel point arrays of two adjacent video frames frame by frame according to the sequencing sequence of the video frames to obtain the pixel point difference array of the next video frame;
compressing a plurality of adjacent and same pixel point difference arrays into a polymerization pixel point difference array;
sequentially combining an initial frame pixel array of the video data with a pixel point difference array and/or a polymerization pixel point difference array according to the sequencing sequence of the video frames to generate an initial video data structure;
receiving video transmission data, wherein the video transmission data consists of an initial video data structure, a playing time array and an action array, and the initial video data structure comprises an initial frame pixel array, a pixel point difference array carrying a first identifier and/or an aggregation pixel point difference array carrying a second identifier;
analyzing the initial video data structure in the video transmission data frame by frame to obtain the pixel point arrays of each array in the initial video data structure, and generating a frame image according to the pixel point arrays, wherein the method comprises the following steps: when the initial video data structure comprises a pixel point difference array, restoring the pixel point difference array according to the initial frame pixel array and the first identifier to obtain the corresponding pixel point array; when the initial video data structure comprises a polymeric pixel point difference array, restoring the polymeric pixel point difference array according to the initial frame pixel array and the second identifier to obtain a corresponding pixel point array; converting the obtained pixel point array into a frame image;
and playing the frame image according to the sequencing sequence of each array in the initial video data structure, the playing time array and the action array.
5. A video transmission apparatus, comprising:
the acquisition unit is used for acquiring pixel point arrays of all video frames in the video data frame by frame;
the comparison unit is used for comparing the difference between the pixel point arrays of two adjacent video frames frame by frame according to the sequencing sequence of the video frames to obtain the pixel point difference array of the next video frame;
the compression unit is used for compressing the adjacent and same pixel point difference arrays into a polymerization pixel point difference array;
a merging unit, configured to sequentially merge an initial frame pixel array of the video data with a pixel point difference array and/or an aggregate pixel point difference array according to the sorting order of the video frames, so as to generate an initial video data structure;
a generating unit, configured to add switching data of the video data to the initial video data structure to form video transmission data, where the switching data includes: the playing time and the motion data of each video frame;
the generating unit is used for generating playing time arrays corresponding to the arrays in the initial video data structure according to the playing time of each video frame in the video data; generating action arrays corresponding to the arrays in the initial video data structure according to the action data of each video frame in the video data; and adding the playing time array and the action array into the initial video data structure to form video transmission data.
6. A video playing apparatus, comprising:
an acquisition unit, configured to acquire the pixel point array of each video frame in the video data frame by frame;
a comparison unit, configured to compare, frame by frame in the sorting order of the video frames, the difference between the pixel point arrays of two adjacent video frames to obtain the pixel point difference array of the latter video frame;
a compression unit, configured to compress adjacent and identical pixel point difference arrays into an aggregate pixel point difference array;
a merging unit, configured to sequentially merge an initial frame pixel array of the video data with the pixel point difference arrays and/or aggregate pixel point difference arrays in the sorting order of the video frames to generate an initial video data structure;
a receiving unit, configured to receive video transmission data, the video transmission data consisting of the initial video data structure, a playing time array and an action array;
an analysis unit, configured to analyze the initial video data structure in the video transmission data frame by frame to obtain the pixel point array of each array in the initial video data structure and to generate frame images according to the pixel point arrays, wherein the initial video data structure comprises the initial frame pixel array, pixel point difference arrays carrying a first identifier and/or aggregate pixel point difference arrays carrying a second identifier;
a playing unit, configured to play the frame images according to the sorting order of the arrays in the initial video data structure, the playing time array and the action array, including: when the initial video data structure comprises a pixel point difference array, restoring the pixel point difference array according to the initial frame pixel array and the first identifier to obtain the corresponding pixel point array; when the initial video data structure comprises an aggregate pixel point difference array, restoring the aggregate pixel point difference array according to the initial frame pixel array and the second identifier to obtain the corresponding pixel point arrays; and converting the obtained pixel point arrays into frame images.
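The playback-side restoration described by this claim can be sketched as below: a single difference entry (first identifier) is applied to the running frame once, while an aggregate entry (second identifier) is applied once per aggregated repetition. The `SAME` marker and the `"initial"`/`"diff"`/`"agg"` entry tags are illustrative assumptions, not the patent's actual on-wire encoding.

```python
# Illustrative playback-side restoration of per-frame pixel point arrays
# from an initial video data structure. All names are assumptions.

SAME = None  # marker for "pixel unchanged from the previous frame"

def apply_diff(prev, diff):
    """Restore a pixel point array from the previous frame and a difference array."""
    return [p if d is SAME else d for p, d in zip(prev, diff)]

def restore_frames(structure):
    """Expand an initial video data structure back into per-frame pixel arrays."""
    frames, current = [], None
    for entry in structure:
        if entry[0] == "initial":                     # initial frame pixel array
            current = list(entry[1])
            frames.append(list(current))
        elif entry[0] == "diff":                      # single difference (first identifier)
            current = apply_diff(current, entry[1])
            frames.append(list(current))
        else:                                         # aggregate difference (second identifier)
            _tag, count, diff = entry
            for _ in range(count):
                current = apply_diff(current, diff)
                frames.append(list(current))
    return frames
```

Each restored pixel point array can then be converted into a frame image and played at the instant given by the corresponding entry of the playing time array.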
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 3 when executing the computer program.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN202011558432.6A 2020-12-25 2020-12-25 Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium Active CN112714338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011558432.6A CN112714338B (en) 2020-12-25 2020-12-25 Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112714338A CN112714338A (en) 2021-04-27
CN112714338B true CN112714338B (en) 2022-11-25

Family

ID=75545811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011558432.6A Active CN112714338B (en) 2020-12-25 2020-12-25 Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112714338B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222131B (en) * 2021-12-16 2023-10-27 北京数码视讯技术有限公司 Low-delay video coding method and device
CN114051145B (en) * 2022-01-11 2022-04-22 苏州浪潮智能科技有限公司 Video compression processing method, device and medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106686451A (en) * 2016-12-28 2017-05-17 努比亚技术有限公司 Terminal and video playing control method
CN109218748A (en) * 2017-06-30 2019-01-15 京东方科技集团股份有限公司 Video transmission method, device and computer readable storage medium
CN109859236A (en) * 2019-01-02 2019-06-07 广州大学 Moving object detection method, system, computing device and storage medium
CN110113617A (en) * 2019-06-13 2019-08-09 腾讯科技(深圳)有限公司 Image compression and decompression method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN105359533A (en) * 2013-08-12 2016-02-24 英特尔公司 Techniques for low power image compression and display


Also Published As

Publication number Publication date
CN112714338A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN112714338B (en) Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium
US20170180746A1 (en) Video transcoding method and electronic apparatus
CN113051236B (en) Method and device for auditing video and computer-readable storage medium
CN111263243B (en) Video coding method and device, computer readable medium and electronic equipment
CN108650460B (en) Server, panoramic video storage and transmission method and computer storage medium
CN110876078B (en) Animation picture processing method and device, storage medium and processor
CN109685714B (en) Picture compression method, device, equipment and computer readable storage medium
CN111741247B (en) Video playback method and device and computer equipment
CN116567170A (en) Video super-resolution parallel scheduling method and device for cloud platform and cloud platform
CN115115968A (en) Video quality evaluation method and device and computer readable storage medium
US10764578B2 (en) Bit rate optimization system and method
CN111447444A (en) Image processing method and device
CN114125493B (en) Distributed storage method, device and equipment for streaming media
CN114095763B (en) Video list pushing method and play quality assessment method
CN114363697B (en) Video file generation and playing method and device
CN117176979B (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
CN115866298B (en) Video processing method, system, electronic equipment and storage medium
US11394977B2 (en) Safe and cross device consistent video compression
CN116320536B (en) Video processing method, device, computer equipment and computer readable storage medium
US20240137619A1 (en) Bullet-screen comment display
CN117176979A (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
CN114220048A (en) Method and apparatus for video content processing, electronic device, and storage medium
CN115599252A (en) Interaction method and device
CN116489477A (en) Holographic video generation method, device, equipment and storage medium
CN115410122A (en) Video abstract generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant