CN109089153B - PS data stream decoding method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109089153B
Authority
CN
China
Prior art keywords
packet
data
frame
video data
header information
Prior art date
Legal status
Active
Application number
CN201811010989.9A
Other languages
Chinese (zh)
Other versions
CN109089153A (en)
Inventor
陈林
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201811010989.9A
Publication of CN109089153A
Application granted
Publication of CN109089153B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N21/4385Multiplex stream processing, e.g. multiplex stream decrypting

Abstract

The invention discloses a PS data stream decoding method, apparatus, computer device, and storage medium. The method comprises the following steps: parsing the PS data stream to obtain a packet set containing H264 video data packets; taking H264 video data packets from the packet set as test packets, filtering and analyzing the test packets with a preset set of filtering conditions, and determining a mapping relationship between the frame type of an H264 video data packet and its packet header information according to the filtering and analysis result; extracting data from each H264 video data packet according to its header information and the mapping relationship to obtain frame data; and reassembling the frame data according to a preset H264 coding requirement to obtain an H264 data stream. The technical scheme of the invention decodes the PS data stream with a simple and efficient decoding method, solving the problem of low operating efficiency when decoding with an SDK library.

Description

PS data stream decoding method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of information processing, and in particular, to a PS data stream decoding method, apparatus, computer device, and storage medium.
Background
At present, most camera manufacturers ship a matching SDK (Software Development Kit) library with their camera equipment, providing a complete solution for data transmission and decoding.
When the camera outputs data, the data is first packaged by the manufacturer's private SDK library and transmitted in the form of a PS data stream, which is then decoded by a decoding library defined by the manufacturer. Here, PS is an abbreviation of Program Stream; MPEG2-PS is a container format for multiplexing digital audio, video, and other data. A PS data stream is a data stream encapsulated in the MPEG2-PS container for transmitting video/audio data.
Generally, the SDK library provided by a camera manufacturer is large, consumes considerable resources at runtime, and is fully packaged, so it cannot be used flexibly. Customized applications that seek high efficiency urgently need a more efficient decoding method.
Disclosure of Invention
The embodiment of the invention provides a method and a device for decoding a PS data stream, computer equipment and a storage medium, wherein the PS data stream is decoded by a simple and efficient decoding method so as to solve the problem of low operation efficiency of decoding by using an SDK library.
A PS data stream decoding method, comprising:
analyzing the PS data stream to obtain a packet set containing a plurality of H264 video data packets;
selecting a preset number of H264 video data packets from the packet set as test packets, performing filtering analysis on the test packets by using a preset filtering condition set, and determining a mapping relation between a frame type of the H264 video data packets and packet header information of the H264 video data packets according to a filtering analysis result, wherein the frame type comprises an I frame and a P frame;
for each H264 video data packet in the packet set, performing data extraction on each H264 video data packet according to the packet header information of each H264 video data packet and the mapping relation to obtain frame data, wherein the frame data is I frame data or P frame data;
and according to the preset H264 coding requirement, recombining the frame data to obtain an H264 data stream.
A PS data stream decoding apparatus, comprising:
the data stream analyzing module is used for analyzing the PS data stream to obtain a packet set containing a plurality of H264 video data packets;
a filtering analysis module, configured to select a preset number of H264 video data packets from the packet set as test packets, perform filtering analysis on the test packets by using a preset filtering condition set, and determine, according to a result of the filtering analysis, a mapping relationship between a frame type of the H264 video data packet and packet header information of the H264 video data packet, where the frame type includes an I frame and a P frame;
an extraction module, configured to perform data extraction on each H264 video data packet in the packet set according to packet header information of each H264 video data packet and the mapping relationship, to obtain frame data, where the frame data is I frame data or P frame data;
and the recombination module is used for recombining the frame data according to a preset H264 coding requirement to obtain an H264 data stream.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above-mentioned PS data stream decoding method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned PS data stream decoding method.
According to the PS data stream decoding method, apparatus, computer device, and storage medium, the PS data stream is first parsed to obtain H264 video data packets, completing the first layer of data unpacking; the header information of the H264 video data packets is statistically judged to obtain the header information of I frame data and of P frame data, and H264 frame data is extracted according to this header information, completing the second layer of data unpacking; finally, the I frame data and P frame data are arranged and combined to obtain the H264 data. This decoding method can replace the manufacturer's private SDK library for decoding, occupies fewer resources at runtime, and runs more efficiently, thus providing a simple, convenient, and fast general method for decoding a PS data stream into H264 data that does not depend on the manufacturer's private SDK library.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a diagram illustrating an application environment of a method for decoding a PS data stream according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for decoding a PS data stream according to an embodiment of the present invention;
fig. 3 is a flowchart of step S2 in the PS data stream decoding method according to an embodiment of the invention;
fig. 4 is a flowchart of step S1 in the PS data stream decoding method according to an embodiment of the invention;
fig. 5 is a flowchart of step S3 in the PS data stream decoding method according to an embodiment of the invention;
FIG. 6 is a diagram of a PS data stream decoding device according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The PS data stream decoding method provided in the present application can be applied to the application environment shown in Fig. 1. The client comprises a camera; the H264 data stream output by the camera is first encapsulated into a PS data stream by the client using the camera manufacturer's private SDK library, then transmitted to the server through a network in PS data stream form, and finally decoded by the server to recover the H264 data stream. The network may be wired or wireless. H264 is a new-generation coding standard known for high compression, high quality, and support for streaming media over a variety of networks. The PS data stream decoding method provided by the embodiment of the invention is applied to a server, which can be computer equipment including but not limited to a PC, a tablet computer, a smartphone, and the like.
In an embodiment, as shown in fig. 2, a PS data stream decoding method is provided, and a specific implementation flow thereof includes the following steps:
s1: and analyzing the PS data stream to obtain a packet set containing a plurality of H264 video data packets.
To facilitate efficient transport of audio/video stream data over a network, the client typically encapsulates the H264 stream output by the camera with MPEG2-PS to generate a PS stream. According to the PS encapsulation standard, a complete PS data stream encapsulating H264 has the structure:

PS header + PS system header + PS system Map + PES header + H264 data
The PS header is the packet header of the PS data stream; it consists of 14 bytes and mainly comprises a start field mark, a system clock field, a packet stuffing length field, and the like.
The PS system header is the system header of the PS data stream; it consists of 18 bytes and mainly comprises a start field mark, a rate bound field, an audio bound field, a stream identification field, and the like.
The PS system Map is the mapping packet header of the PS data stream; it consists of 30 bytes and mainly comprises a start field mark, a mapping identification field, a mapping length field, an elementary stream information length field, and the like.
The PES header is the extended packet header of the PS data stream; it consists of 14 bytes and mainly comprises a stream identification field, a packet length field, a copyright field, a timestamp field, and the like.
H264 data is the raw H264 data stream encapsulated by PS.
The server can locate the H264 data according to the byte length of each packet header defined in the PS encapsulation standard, and extract it.
Specifically, the PS header packet contains a start field pack_start_code whose value is 0x000001BA. The server reads the PS data stream and compares the bytes at the current position with this start-code value; if they match, it determines that a PS header packet has been found in the PS data stream, skips 14 bytes (the packet length of the PS header) from the start byte of the packet header, and detects the next packet header of the PS data stream;
for the PS system header packet, the server checks whether the packet length is 18 bytes and the start field system_header_start_code is 0x000001BB; if both match, it skips 18 bytes from the packet header and continues to detect the next packet header of the PS data stream;
for the PS system Map packet, the server detects its start field (packet_start_code_prefix) and identification field (map_stream_id); if the start field has the value 0x000001 and the identification field has the value 0xBC, the packet is determined to be a PS system Map packet, 30 bytes are skipped, and detection of the next packet header of the PS data stream continues. The server can also check the stream type field (stream_type) of the PS system Map packet to determine what type of data stream is encapsulated in the PS data stream: for an MPEG-4 video stream the value of this field is 0x10, for an H264 data stream it is 0x1B, and so on.
For the PES header and the H264 data packet: the H264 data immediately follows the PES header packet. The server detects the start-code prefix field of the PES header packet, which occupies 3 bytes and has the value 0x000001; counting 14 bytes from the start position of the PES header packet then locates the start position of the H264 data.
Proceeding in this way, the server detects the header field information of each packet in the PS data stream one by one to obtain a packet set containing a plurality of H264 video data packets, i.e. the "PES header + H264 data" packets in the PS data stream.
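The scan described above can be sketched as follows. This is a simplified illustration rather than the patent's implementation: it assumes the fixed header lengths stated above (14/18/30 bytes) instead of parsing the real variable-length fields, and the function name `split_pes_packets` and the video stream id 0xE0 are illustrative assumptions.

```python
import struct

# Start codes from the MPEG2-PS encapsulation standard (4 bytes each).
PS_PACK_START   = b"\x00\x00\x01\xBA"  # PS header
PS_SYSTEM_START = b"\x00\x00\x01\xBB"  # PS system header
PS_MAP_START    = b"\x00\x00\x01\xBC"  # PS system Map
PES_VIDEO_START = b"\x00\x00\x01\xE0"  # PES header (assumed video stream id 0xE0)

def split_pes_packets(ps_bytes):
    """Scan a PS byte string and return the "PES header + H264 data" chunks.

    Simplified sketch: uses the fixed 14/18/30-byte header lengths
    described in the text instead of the real length fields.
    """
    packets, i, n = [], 0, len(ps_bytes)
    while i + 4 <= n:
        code = ps_bytes[i:i + 4]
        if code == PS_PACK_START:
            i += 14                      # skip the 14-byte PS header
        elif code == PS_SYSTEM_START:
            i += 18                      # skip the 18-byte system header
        elif code == PS_MAP_START:
            i += 30                      # skip the 30-byte map header
        elif code == PES_VIDEO_START:
            # The PES packet length sits in bytes 4..5 (big-endian)
            # and counts the bytes that follow it.
            length = struct.unpack(">H", ps_bytes[i + 4:i + 6])[0]
            packets.append(ps_bytes[i:i + 6 + length])
            i += 6 + length
        else:
            i += 1                       # resynchronise byte by byte
    return packets
```

Resynchronising byte by byte on an unknown code keeps the scan robust against stuffing bytes between packs.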
S2: selecting a preset number of H264 video data packets from the packet set as test packets, using a preset filtering condition set to filter and analyze the test packets, and determining a mapping relation between a frame type of the H264 video data packets and packet header information of the H264 video data packets according to a filtering and analyzing result, wherein the frame type comprises I frames and P frames.
In the H264 compression standard, data frames are classified into three main types: I frames, P frames, and B frames. A completely coded frame is called an I frame (intra-coded frame): an independent frame carrying all of its own information, which can be decoded without reference to other images; the first frame in a video sequence is always an I frame. A frame generated by referring to a previous I frame and encoding only the differences is called a P frame (inter predictive coded frame). A B frame is a bidirectionally predictive coded frame: it records the differences between the current frame and both the previous and following frames. The I frame is the key frame; P frames cannot be decoded without it. A group of consecutive pictures requires at least one I frame combined with several P frames or B frames. P frames and B frames generally occupy fewer data bits than I frames.
As a widely adopted coding standard, H264 usually offers several common coding profiles:
baseline profile: basic image quality, which is mainly used for real-time communication such as video phones, video conferences, wireless communication and the like;
main profile: mainstream picture quality mainly used for digital broadcast television and digital video storage;
extended profile: the advanced image quality is mainly used for improving the error code performance and switching code streams;
high profile: high quality image quality, mainly for high compression efficiency and quality.
The H264 data stream output by the camera adopts the Baseline profile. Because the preview picture in a video surveillance system is real-time and demands high fluency, the Baseline profile encodes with only I frames and P frames, so the output H264 video data stream adapts better to the network and reduces decoding cost.
Since the H264 packets parsed in step S1 contain only two frame types of data, I frames and P frames, the server needs to determine, through filtering, the header information corresponding to I frame and P frame data.
The server selects a preset number of H264 video data packets from the packet set as test packets, and performs filtering analysis on the test packets by using a preset filtering condition set, so as to find out header information corresponding to each of the I frame and the P frame, namely a mapping relation between the frame type and the header information.
The preset filtering condition set refers to the set of judgment conditions the server uses to filter and analyze the header information of the test packets and thereby determine the header information corresponding to each frame type of the H264 video data packets. The preset set of filtering conditions may include, but is not limited to: the stream identification field of the PES header packet, the packet length of the H264 data packet, and so on.
The filtering analysis may specifically select filtering conditions from the preset set and compare them with the test packets one by one. For example, if the selected filtering condition is the packet length of the H264 data packet, with a specific value of 30 bytes, then test packets whose H264 packet length field is not equal to 30 bytes are filtered out and not processed; if the selected filtering condition is the stream identification field of the PES header packet, with a specific value of 0xC0, then test packets whose PES stream identification field is not equal to 0xC0 are filtered out and not processed.
Specifically, the size of the frame data may be used as the criterion. An I frame encodes a complete image, while a P frame refers to a previous I frame and encodes only the differences; an I frame therefore contains more pixel information than a P frame, which is reflected in the size of the frame data: an I frame occupies more bytes than a P frame. The packet length field of the PES header packet gives the size of the H264 frame data that follows it, so the server screens the value of the packet length field of the PES header packets in the test packets.
For example, the server takes 10 H264 video data packets as test packets. If 2 of them have a packet length field value of 116 bytes and 8 have a value of 20 bytes, the server determines that the test packets with a 116-byte packet length correspond to I frame data and those with a 20-byte packet length correspond to P frame data. The server then detects the header information of the H264 data packet in the I frame data and in the P frame data respectively, thereby determining the header information of I frames and of P frames. If the header field of the H264 data packet in the I frame data is 0x0001BA, the header information of the I frame is 0x0001BA; if the header field of the H264 data packet in the P frame data is 0x0001AA, the header information of the P frame is 0x0001AA.
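The statistical judgment above, taking the larger of the two observed packet-length values as the I frame, can be sketched as a small helper; the function name and interface are illustrative, not from the patent:

```python
def infer_frame_types(test_packet_lengths):
    """Map PES packet-length values to frame types ('I' or 'P').

    Heuristic from the text: I frames carry far more data than P frames,
    so the larger of the two observed length values is taken as the I frame.
    """
    lengths = set(test_packet_lengths)
    if len(lengths) != 2:
        raise ValueError("expected exactly two distinct packet lengths")
    i_len, p_len = max(lengths), min(lengths)
    return {i_len: "I", p_len: "P"}
```

With the 10-packet example above, the 116-byte packets map to I frames and the 20-byte packets to P frames.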
S3: and for each H264 video data packet in the packet set, performing data extraction on each H264 video data packet according to the packet header information of each H264 video data packet and the mapping relation to obtain frame data, wherein the frame data is I frame data or P frame data.
Step S2 obtains the header information corresponding to I frame data and P frame data in the H264 video data packets; the server then extracts data from each H264 video data packet according to its header information and the mapping relationship, obtaining the I frame data or P frame data in the packet.
Specifically, the server takes data whose PES header packet length is 116 bytes and whose H264 data header field is 0x0001BA as I frame data: it removes the PES header packet according to its byte length, extracts the subsequent H264 data into a temporary array, and marks it as I frame data. Similarly, the server takes data whose PES header packet length is 20 bytes and whose H264 data header field is 0x0001AA as P frame data: it removes the PES header packet according to its byte length, extracts the subsequent H264 data into the temporary array, and marks it as P frame data.
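This extraction step can be sketched as follows, under the running example's assumptions (a 14-byte PES header and the example I-frame header field 0x0001BA); the helper name `extract_frame` is hypothetical:

```python
PES_HEADER_LEN = 14  # byte length of the PES header stated in the text

def extract_frame(pes_packet, i_header=b"\x00\x01\xBA"):
    """Strip the PES header and tag the remaining H264 data as 'I' or 'P'.

    Sketch under the running example's assumptions: a fixed 14-byte PES
    header and the example I-frame header field 0x0001BA.
    """
    payload = pes_packet[PES_HEADER_LEN:]       # drop the PES header bytes
    frame_type = "I" if payload.startswith(i_header) else "P"
    return frame_type, payload
```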
S4: and according to the preset H264 coding requirement, recombining the frame data to obtain an H264 data stream.
The H264 coding requirement determines the combination of I frame data and P frame data according to the frame rate. Frame rate is a measure of the number of frames displayed per second, in units of frames per second (FPS) or hertz (Hz). Owing to the physiology of the human eye, video is perceived as continuous when its frame rate is about 16 frames per second or higher.
The H264 data stream output by the camera is composed of frame data, divided into I frame data and P frame data. Taking a movie played at 25 frames per second as an example, each group of 25 frames contains one key frame, i.e. an I frame, with the rest being P frames. Increasing the number of key frames improves picture quality but also increases bandwidth and network load.
In order to ensure the picture quality of the video output by the camera, the server recombines the frames so that each group of 25 frames has one key frame (an I frame) and 24 P frames, obtaining an H264 data stream. The reassembled H264 data stream may be represented as the following combination of frame data:
IPPPPPPPPP...
i.e. the first frame is an I-frame followed by 24P-frames.
Specifically, the server may create a large array, first fill in the I frame data, and then sequentially fill in the 24 frames of P frame data, thereby completing the reassembly of one group of the H264 data stream.
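The reassembly of one such group of pictures can be sketched as follows; frame payloads are modeled as byte strings, and the function name is illustrative:

```python
def reassemble_gop(i_frame, p_frames, frame_rate=25):
    """Concatenate one I frame with frame_rate - 1 P frames into one group.

    i_frame and each entry of p_frames are bytes objects holding raw
    H264 frame data, per the array-filling description in the text.
    """
    if len(p_frames) < frame_rate - 1:
        raise ValueError("not enough P frames for one group of pictures")
    group = bytearray(i_frame)          # first frame is the I frame
    for p in p_frames[:frame_rate - 1]:
        group += p                      # followed by 24 P frames
    return bytes(group)
```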
In this embodiment, the PS data stream is first parsed to obtain H264 video data packets, completing the first layer of data unpacking; the header information of the H264 video data packets is statistically judged to obtain the header information of I frame data and of P frame data, and H264 frame data is extracted according to this header information, completing the second layer of data unpacking; finally, the I frame data and P frame data are arranged and combined to obtain the H264 data. The method can replace the manufacturer's private SDK library for decoding, occupies fewer resources at runtime, and runs more efficiently, thus providing a simple, convenient, and fast general method for decoding the PS data stream into H264 data that does not depend on the manufacturer's private SDK library.
Further, as shown in fig. 3, in an embodiment, the step S2, namely selecting a preset number of H264 video data packets from the packet set as test packets, performing filter analysis on the test packets by using a preset set of filter conditions, and determining a mapping relationship between a frame type of the H264 video data packet and packet header information of the H264 video data packet according to a result of the filter analysis specifically includes the following steps:
s21: and selecting one filtering condition from a preset filtering condition set as the current filtering condition.
The preset filtering condition set may include the header field start code, the header length, and the like. The header field start codes range from 0x0000BA to 0x000FBA; the header lengths are 20 bytes, 116 bytes, and 336 bytes.
Specifically, the header field start codes and the header lengths in the filtering condition set may each be stored as a one-dimensional array; for example, the values in the header-field start-code array run from 0x0000BA to 0x000FBA, and the values in the header length array are 20, 116, and 336.
S22: and if the packet header information of the test packet meets the current filtering condition, extracting data of the test packet according to the packet header information of the test packet to obtain packet data.
And comparing the packet header information of the test packet with the current filtering condition, and if the packet header information accords with the current filtering condition, extracting data of the test packet according to the packet header information of the test packet to obtain packet data.
Specifically, a value, for example 0x0001BA, is taken from the header-field start-code array as the header field of I frame data. The header field start code of each test packet is compared with 0x0001BA: if they are equal, the test packet is determined to be I frame data, extracted, and marked as such, while the other test packets are extracted and marked as P frame data; if they are not equal, the start code of the next test packet's header field is compared with 0x0001BA, until all test packets have been traversed.
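The traversal in this step can be sketched as follows; the function name is hypothetical, and packets are modeled as byte strings whose leading bytes are the header field start code:

```python
def classify_by_start_code(test_packets, i_start_code):
    """Split test packets into I and P lists by their leading start-code bytes.

    Packets whose header field matches the candidate I-frame start code are
    marked as I frame data; all others are marked as P frame data.
    """
    i_frames, p_frames = [], []
    for pkt in test_packets:
        if pkt.startswith(i_start_code):
            i_frames.append(pkt)
        else:
            p_frames.append(pkt)
    return i_frames, p_frames
```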
S23: the packet data is H264 decoded to acquire image data.
The image data may be RGB image data, or YUV image data.
RGB image data is image data encoded in RGB format. RGB is an industry color standard: various colors are obtained by varying and superimposing the three color channels red (R), green (G), and blue (B). With RGB encoding, each color is represented by three variables: the intensities of red, green, and blue. Common RGB formats include RGB24, RGB32, and ARGB32. RGB24 represents a pixel with 24 bits, each RGB component taking 8 bits with values in the range 0-255; RGB32 represents a pixel with 32 bits, each RGB component taking 8 bits, with the remaining 8 bits used as an alpha channel or left unused; ARGB32 is RGB24 with an Alpha (transparency) channel.
YUV image data is image data encoded in YUV format. YUV is a color encoding method often used in video processing components. Y, U, and V form a color vector space, where Y represents luminance (Luma), i.e. the gray-scale value, and U and V represent chrominance (Chroma), which describes the color and saturation of the image and specifies the color of a pixel. YUV is particularly suitable for digital image processing and for the transmission and storage of data. The main YUV sampling formats are YCbCr 4:2:0, YCbCr 4:2:2, YCbCr 4:1:1, and YCbCr 4:4:4. YCbCr 4:1:1 is commonly used: each pixel keeps an 8-bit luminance value (the Y value) and each 2x2 block of pixels shares one Cr and one Cb value, with little change visible to the naked eye.
Both RGB and YUV are color spaces for representing colors, and RGB image data and YUV image data can be converted into each other; the most significant advantage of YUV image data is that it requires far less bandwidth than RGB video signal transmission.
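The YUV-to-RGB conversion mentioned above can be illustrated with the standard BT.601 full-range formulas for a single pixel; this is a generic textbook conversion, not one specified by the patent:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YCbCr (BT.601) pixel to an RGB triple.

    Standard formulas: R = Y + 1.402 (V - 128),
    G = Y - 0.344136 (U - 128) - 0.714136 (V - 128),
    B = Y + 1.772 (U - 128), each clamped to 0..255.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)

    def clamp(x):
        return max(0, min(255, int(round(x))))

    return clamp(r), clamp(g), clamp(b)
```

Neutral chroma (U = V = 128) leaves the luminance unchanged, which is why gray-scale pixels survive the round trip exactly.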
Specifically, the packet data marked as I frames and the packet data marked as P frames obtained in step S22 are decoded according to the H264 coding standard, and the pixel information of the I frames, P frames or B frames in the H264 data stream is extracted and converted into RGB data or YUV data.
Preferably, the packet data is decoded into YUV image data.
S24: if the image data matches the preset threshold successfully, determine the mapping relationship between the frame type of the H264 video data packets and the header information of the H264 video data packets according to the current filtering condition.
The RGB image data or YUV image data is compared with a preset threshold to judge whether the image is distorted, and the header information corresponding to I frames and P frames in the H264 video data packets is determined accordingly.
The preset threshold may be the average of the color values of all pixels in the image, or the color distribution of the pixels in a given region. For example, in the all-white or all-black case, large areas of pixels on the image share the same color value: for all black every pixel is {0, 0, 0}, and for all white every pixel is {255, 255, 255}. In the garbled-screen (mosaic) case, regular rectangular blocks appear in some region of the image, i.e. the pixel values within that region are identical and form a square block. In all of these cases the root cause is that, in step S23, the H264 decoding module decoded non-I-frame data as I frames or non-P-frame data as P frames.
The RGB image data or YUV image data is stored in array form. For example, RGB image data with a resolution of 320 × 480 contains 153,600 pixels, and each pixel is represented as a decimal triple {R, G, B}, where R, G and B are the values of the red, green and blue channels of that pixel, respectively.
Specifically, a picture at the preset threshold may be stored as a reference image, specifically an abnormal image that is all white, all black or garbled, whose size matches the size of the image data. The pixel array of the reference image is compared with the RGB array or YUV array of the image data, pixel by pixel. If the number of pixels with equal values is less than 20% of the total number of pixels, the output image data is not distorted, and the header information corresponding to I frames and P frames in the H264 video data packets is thereby determined.
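A minimal sketch of this pixel-by-pixel comparison, assuming images are flat lists of {R, G, B} tuples and using the 20% and 80% cut-offs given in the text. The helper names are illustrative assumptions.

```python
def equal_pixel_ratio(image, reference):
    """Fraction of positions where the two pixel arrays hold equal values."""
    assert len(image) == len(reference), "images must be the same size"
    equal = sum(1 for a, b in zip(image, reference) if a == b)
    return equal / len(image)

def classify(image, reference):
    ratio = equal_pixel_ratio(image, reference)
    if ratio < 0.2:
        return "not distorted"   # step S24: the mapping is confirmed
    if ratio > 0.8:
        return "distorted"       # step S25: try the next filtering condition
    return "inconclusive"

all_black = [(0, 0, 0)] * 16                # 4x4 reference for the all-black case
normal = [(i, i, i) for i in range(1, 17)]  # a frame with varied pixel values
print(classify(all_black, all_black), classify(normal, all_black))
```

An all-black decoded frame matches the all-black reference completely and is classified as distorted, while a frame with varied pixel values falls below the 20% cut-off.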
S25: if the image data fails to match the preset threshold, select another filtering condition from the preset filtering condition set as the current filtering condition and return to step S22 to continue execution, until all filtering conditions in the preset filtering condition set have been traversed.
Specifically, a picture at the preset threshold is stored as a reference image whose size matches the size of the image data, and the pixel array of the reference image is compared, pixel by pixel, with the RGB array or YUV array of the image data. If the number of pixels with equal values exceeds 80% of the total number of pixels, the output image data is distorted. The server then selects another filtering condition from the header start-code array or the header-length array as the current filtering condition, re-determines the header information of the I frame or P frame, and again feeds the packet data marked as I frames and P frames into the H264 decoding module, until all filtering conditions in the header start-code array or header-length array have been traversed, i.e. steps S22 to S25 are executed in a loop, which is not repeated here.
If no filtering condition in the preset filtering condition set matches successfully after the traversal, the server returns exception information prompting that the received PS data stream is faulty, or that PS data streams with a different encapsulation format have been received.
In this embodiment, filtering conditions are selected in turn from the preset filtering condition set and compared with the header information of the test packets, gradually probing for the mapping relationship between the frame type of the H264 video data packets and their header information. In this way, the header start code and header length of the I-frame data and P-frame data in the H264 video data packets can be found from the PS data stream quickly and accurately, i.e. the mapping relationship between frame type and header information is determined, which improves the decoding speed of the PS data stream.
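The loop over steps S21 to S25 can be sketched as follows. Here `decode` and `looks_valid` stand in for the H264 decoding module and the threshold match; all names and the dictionary packet layout are assumptions for illustration.

```python
def find_header_mapping(filter_conditions, test_packets, decode, looks_valid):
    """Try each candidate header start code until decoding yields
    undistorted image data (steps S21-S25); return None if none match."""
    for cond in filter_conditions:
        # S22: extract packet data whose header matches the current condition
        payloads = [p["data"] for p in test_packets
                    if p["header"].startswith(cond)]
        # S23/S24: decode every candidate and check it against the threshold
        if payloads and all(looks_valid(decode(d)) for d in payloads):
            return cond   # mapping between frame type and header found
    return None           # S25 exhausted: faulty or unknown PS stream

packets = [{"header": "BB01", "data": "frame-bytes"}]
print(find_header_mapping(["AA", "BB"], packets,
                          decode=lambda d: d, looks_valid=lambda img: True))
```

The first condition that survives the distortion check is returned as the discovered mapping; an exhausted loop corresponds to the exception case described above.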
Further, as shown in fig. 4, in an embodiment, in step S1, parsing the PS data stream to obtain a packet set including a plurality of H264 video data packets includes the following steps:
S11: extract a packet set containing a number of H264 data packets from the PS data stream according to the packet header information of the PS data packets.
Specifically, according to the coding standard of the PS data stream, the server may extract the packet header information of the PS data packets and parse the PS data stream with a tool, thereby obtaining a packet set containing a number of H264 data packets, i.e. a packet set composed of "PES header + H264 data" packets.
For example, the Color trace analyzer tool may be used to parse the PS data stream and obtain parameters related to the PS data packets, including the packet header name, packet header length, packet type, packet attributes and frame type. The Color trace analyzer is a closed-source analysis tool that can parse a number of video data streams, including PS data streams. Having the tool parse the PS data stream reduces the workload of the server in processing it.
S12: if the header information of an H264 data packet matches the preset field information, determine that the H264 data packet is an H264 audio data packet; otherwise, determine that it is an H264 video data packet.
The packet set obtained in step S11 contains both H264 audio data packets and H264 video data packets. The server needs to delete the H264 audio data packets and keep only the H264 video data packets.
The server can inspect the stream identifier field stream_id in the PES packet header. When the value of this field is in the range 0xC0 to 0xDF, the data in the H264 data packet is audio data and the packet is an H264 audio data packet; for any other value, the data is video data and the packet is an H264 video data packet.
Specifically, when the server detects that the stream identifier field of a PES packet header has the value 0xC0, it determines that the H264 data packet following that PES header is an H264 audio data packet, and marks the packet to distinguish it from the H264 video data packets.
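Steps S12 and S13 reduce to a simple range check on the PES stream_id. The sketch below uses the 0xC0 to 0xDF audio range from the text; the packet dictionary layout and function names are assumptions for illustration.

```python
def is_audio_packet(stream_id):
    """PES stream_id values 0xC0-0xDF denote audio elementary streams."""
    return 0xC0 <= stream_id <= 0xDF

def keep_video_packets(packets):
    """Steps S12/S13: mark audio packets by stream_id and drop them."""
    return [p for p in packets if not is_audio_packet(p["stream_id"])]

mixed = [{"stream_id": 0xC0, "data": b"aud"},   # audio, will be deleted
         {"stream_id": 0xE0, "data": b"vid"}]   # video, will be kept
print([hex(p["stream_id"]) for p in keep_video_packets(mixed)])
```

Only the packets outside the audio range survive, which is exactly the filtered packet set that step S13 produces.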
S13: delete the H264 audio data packets from the packet set to obtain a packet set containing a number of H264 video data packets.
Specifically, the server deletes the H264 audio data packets from the packet set according to the marks made in step S12, resulting in a packet set containing a number of H264 video data packets.
In this embodiment, the server filters out the H264 audio data packets in the PS data stream, deletes them, and decodes only the H264 video data packets, which improves the decoding speed of the PS data stream.
Further, as shown in fig. 5, in an embodiment, step S3, namely performing data extraction on each H264 video data packet in the packet set according to its packet header information and the mapping relationship to obtain frame data, specifically includes the following steps:
S31: store the H264 video data packets into a preset buffer pool.
Generally, the video decoding process has very strict timeliness requirements: untimely decoding causes the video picture to stall or frames to be lost, degrading picture quality and the viewing experience. The server therefore caches the H264 video data packets locally, because fetching and decoding H264 video data packets from local storage is faster than fetching and decoding them from the network.
The server opens a buffer pool for the H264 video data packets, whose size is preset, and stores the H264 video data packets into a preset array in order. Specifically, the server allocates a temporary storage area, and the size of the allocated array can be determined from the playing time of the H264 video stream.
For example, if 10 seconds of video are to be buffered before decoding and playback start, the size of the H264 data stream needed to play 10 seconds of video is the size of the buffer pool. Assuming 25 frames are played per second, with 24 P frames and one I frame among those 25 frames, 10 seconds of video require the data of 10 I frames and 240 P frames, and the size of the buffer pool array equals the number of bytes occupied by those 10 I frames and 240 P frames.
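The sizing arithmetic in this example can be written out directly. The average frame sizes passed to the byte calculation are assumptions, since real I-frame and P-frame sizes vary per stream.

```python
def buffer_pool_frames(seconds=10, fps=25, i_frames_per_second=1):
    """Frame counts needed to pre-buffer `seconds` of video at `fps`,
    with one I frame per second as in the example above."""
    i_frames = seconds * i_frames_per_second
    p_frames = seconds * (fps - i_frames_per_second)
    return i_frames, p_frames

def buffer_pool_bytes(avg_i_bytes, avg_p_bytes, seconds=10, fps=25):
    # avg_i_bytes / avg_p_bytes are assumed average frame sizes
    i, p = buffer_pool_frames(seconds, fps)
    return i * avg_i_bytes + p * avg_p_bytes

print(buffer_pool_frames())   # frame budget for 10 s at 25 fps
```

With the figures from the text this yields 10 I frames and 240 P frames, and the pool's byte size follows once average frame sizes are known.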
S32: if the buffer pool is detected to be full, perform data extraction on each H264 video data packet according to its packet header information and the mapping relationship to obtain frame data.
As the server fills the H264 video data packets into the buffer pool array in order, it accumulates the total size of the packets filled in. When the array is detected to be full, the server takes the H264 video data packets out of the buffer pool array in order and extracts data from each of them according to its packet header information and the mapping relationship to obtain frame data. The process of extracting frame data was described in detail in step S3 and is not repeated here. Meanwhile, the server continues caching newly received H264 video data packets into the buffer pool array.
In this embodiment, the server stores the H264 video data packets in the buffer pool and extracts frame data only after taking them out of the pool, rather than operating directly on the packets as they arrive from the network. Data is thus extracted while data is being received, without interference, which reduces the decoding process's dependence on network speed and improves the overall decoding speed.
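The fill-then-drain behavior of steps S31 and S32 can be sketched with a small byte-counting buffer. The class and its capacity handling are assumptions for illustration, not the patent's API.

```python
from collections import deque

class PacketBufferPool:
    """Fill-then-drain cache for H264 video packets (sketch of S31/S32)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.packets = deque()

    def put(self, packet):
        # S31: store the packet and accumulate the filled size
        self.packets.append(packet)
        self.used += len(packet)

    def full(self):
        return self.used >= self.capacity

    def drain(self):
        # S32: hand buffered packets to the frame extractor in arrival order
        while self.packets:
            pkt = self.packets.popleft()
            self.used -= len(pkt)
            yield pkt

pool = PacketBufferPool(capacity_bytes=10)
pool.put(b"12345")
pool.put(b"67890")
print(pool.full(), list(pool.drain()), pool.used)
```

Packets accumulate until the byte budget is reached; draining preserves arrival order, so frame extraction sees the same sequence the network delivered.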
Further, in an embodiment, step S4 of reassembling the frame data according to a preset H264 coding requirement to obtain the H264 data stream includes:
interleaving the I frame data and the P frame data according to a preset number of frames to obtain the H264 data stream.
The quality of an H264 video picture is clearly related to the arrangement of I frames and P frames, and different arrangements yield different picture quality. In general, increasing the number of I frames improves picture quality, but also increases the bandwidth and network load.
The preset number of frames is the number of P frames of data separating one I frame from the next in the H264 data stream. Specifically, for the H264 data stream output by a camera, each I frame is separated from the next by 49 P frames of data.
In this embodiment, the I frame data and P frame data are interleaved according to the preset number of frames, so that the reassembled H264 data stream can have different picture qualities and suit different application scenarios.
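The interleaving step above can be sketched as follows, using the 49-P-frame gap from the camera example. The function name and frame representation are assumptions for illustration.

```python
def interleave_gop(i_frames, p_frames, gap=49):
    """Arrange frames as one I frame followed by `gap` P frames, repeated
    (step S4); gap=49 matches the camera example in the text."""
    stream, p_iter = [], iter(p_frames)
    for i_frame in i_frames:
        stream.append(i_frame)
        for _ in range(gap):
            nxt = next(p_iter, None)
            if nxt is None:
                return stream   # ran out of P frames
            stream.append(nxt)
    return stream

frames = interleave_gop(["I0", "I1"], [f"P{n}" for n in range(98)])
print(len(frames), frames[0], frames[50])
```

Two I frames and 98 P frames produce a 100-frame stream with I frames at positions 0 and 50; a smaller `gap` would insert I frames more often, trading bandwidth for picture quality as described above.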
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not limit the implementation process of the embodiments of the present invention in any way.
In an embodiment, a PS data stream decoding apparatus is provided, which corresponds one-to-one to the PS data stream decoding method in the above embodiments. As shown in fig. 6, the PS data stream decoding apparatus includes a data stream parsing module 61, a filtering analysis module 62, an extraction module 63 and a reassembly module 64. The functional modules are described in detail as follows:
a data stream parsing module 61, configured to parse the PS data stream to obtain a packet set including a plurality of H264 video data packets;
the filtering analysis module 62 is configured to select a preset number of H264 video data packets from the packet set as test packets, perform filtering analysis on the test packets by using a preset filtering condition set, and determine a mapping relationship between a frame type of the H264 video data packet and packet header information of the H264 video data packet according to a result of the filtering analysis, where the frame type includes an I frame and a P frame;
the extracting module 63 is configured to, for each H264 video data packet in the packet set, perform data extraction on each H264 video data packet according to the packet header information of each H264 video data packet and the mapping relationship, to obtain frame data, where the frame data is I frame data or P frame data;
and the restructuring module 64 is configured to restructure the frame data according to a preset H264 coding requirement, so as to obtain an H264data stream.
Further, the filtering analysis module 62 includes:
a filtering condition selecting submodule 621, configured to select a filtering condition from a preset filtering condition set as a current filtering condition;
a packet data extraction sub-module 622, configured to, if the packet header information of the test packet meets the current filtering condition, perform data extraction on the test packet according to the packet header information of the test packet to obtain packet data;
a decoding sub-module 623, configured to perform H264 decoding on the packet data to obtain image data;
a mapping relation determining submodule 624, configured to determine, according to the current filtering condition, a mapping relation between the frame type of the H264 video data packet and the header information of the H264 video data packet if the image data is successfully matched with the preset threshold;
the circulating sub-module 625 is configured to, if the image data is unsuccessfully matched with the preset threshold, select another filtering condition from the preset filtering condition set as the current filtering condition, and return to the step of, if the header information of the test packet meets the current filtering condition, performing data extraction on the test packet according to the header information of the test packet, and continuing to perform the step of obtaining packet data until all filtering conditions in the preset filtering condition set are traversed.
Further, the data stream parsing module 61 includes:
a PS data stream filtering sub-module 611, configured to extract a packet set including a plurality of H264data packets from the PS data stream according to the packet header information of the PS data packets;
a header information matching sub-module 612, configured to determine that the H264data packet is an H264 audio data packet if header information of the H264data packet matches preset field information, and otherwise, determine that the H264data packet is an H264 video data packet;
the filtering submodule 613 is configured to delete the H264 audio data packets from the packet set, so as to obtain a packet set including a plurality of H264 video data packets.
Further, the extraction module 63 includes:
the buffer sub-module 631 is configured to store the H264 video data packet into a preset buffer pool;
the extracting submodule 632 is configured to, if it is detected that the cache pool is full, perform data extraction on each H264 video data packet according to the header information of each H264 video data packet and the mapping relationship, so as to obtain frame data.
Further, the reassembly module 64 includes:
the reassembly sub-module 641 is configured to perform cross arrangement and combination on the I frame data and the P frame data according to a preset number of frames to obtain an H264data stream.
For specific limitations of the PS data stream decoding apparatus, reference may be made to the above limitations of the PS data stream decoding method, which are not repeated here. The modules in the PS data stream decoding apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in a processor of the computer device or be independent of it, or be stored in software form in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a PS data stream decoding method.
In one embodiment, a computer device is provided, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps of the PS data stream decoding method in the foregoing embodiments, such as steps S1 to S4 shown in fig. 2. Alternatively, the processor, when executing the computer program, implements the functions of each module/unit of the PS data stream decoding apparatus in the above-described embodiments, for example, the functions of the modules 61 to 64 shown in fig. 6. To avoid repetition, further description is omitted here.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program is executed by a processor to implement the PS data stream decoding method in the above method embodiment, or the computer program is executed by the processor to implement the functions of each module/unit in the PS data stream decoding device in the above device embodiment. To avoid repetition, further description is omitted here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed it can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A PS data stream decoding method, the PS data stream decoding method comprising:
analyzing the PS data stream to obtain a packet set containing a plurality of H264 video data packets;
selecting a preset number of H264 video data packets from the packet set as test packets, performing filtering analysis on the test packets by using a preset filtering condition set, and determining a mapping relation between a frame type of the H264 video data packets and packet header information of the H264 video data packets according to a filtering analysis result, wherein the frame type comprises an I frame and a P frame;
for each H264 video data packet in the packet set, performing data extraction on each H264 video data packet according to the packet header information of each H264 video data packet and the mapping relation to obtain frame data, wherein the frame data is I frame data or P frame data;
recombining the frame data according to a preset H264 coding requirement to obtain an H264data stream;
the filtering and analyzing the test packet by using a preset filtering condition set, and determining a mapping relationship between the frame type of the H264 video data packet and the header information of the H264 video data packet according to a filtering and analyzing result, including:
selecting a filtering condition from the preset filtering condition set as a current filtering condition;
if the packet header information of the test packet meets the current filtering condition, performing data extraction on the test packet according to the packet header information of the test packet to obtain packet data;
h264 decoding is carried out on the packet data to obtain image data;
if the image data is successfully matched with a preset threshold, determining a mapping relation between the frame type of the H264 video data packet and the header information of the H264 video data packet according to the current filtering condition;
if the image data is unsuccessfully matched with the preset threshold value, selecting another filtering condition from a preset filtering condition set as the current filtering condition, returning to the step of extracting data from the test packet according to the packet header information of the test packet if the packet header information of the test packet meets the current filtering condition, and continuing executing the step of obtaining packet data until all filtering conditions in the preset filtering condition set are traversed.
2. The PS data stream decoding method of claim 1, wherein the parsing the PS data stream to obtain a packet set including H264 video data packets comprises:
extracting a packet set containing a plurality of H264data packets from the PS data stream according to the packet header information of the PS data packets;
if the header information of the H264data packet is matched with the preset field information, determining that the H264data packet is an H264 audio data packet, otherwise, determining that the H264data packet is the H264 video data packet;
and deleting the H264 audio data packets from the packet set to obtain a packet set containing a plurality of H264 video data packets.
3. The PS data stream decoding method according to claim 1 or 2, wherein the performing, for each H264 video data packet in the packet set, data extraction on each H264 video data packet according to the header information of each H264 video data packet and the mapping relationship to obtain frame data comprises:
storing the H264 video data packet into a preset cache pool;
and if the cache pool is detected to be full, performing data extraction on each H264 video data packet according to the header information of each H264 video data packet and the mapping relation to obtain frame data.
4. The PS data stream decoding method according to claim 1 or 2, wherein the reconstructing the frame data according to the preset H264 coding requirement to obtain the H264data stream comprises:
and carrying out cross arrangement and combination on the I frame data and the P frame data according to a preset frame number to obtain an H264data stream.
5. A PS data stream decoding apparatus, wherein the PS data stream decoding apparatus comprises:
the data stream analyzing module is used for analyzing the PS data stream to obtain a packet set containing a plurality of H264 video data packets;
the filtering analysis module is used for selecting a preset number of H264 video data packets from the packet set as test packets, performing filtering analysis on the test packets by using a preset filtering condition set, and determining a mapping relation between a frame type of the H264 video data packets and packet header information of the H264 video data packets according to a filtering analysis result, wherein the frame type comprises an I frame and a P frame;
an extraction module, configured to, for each H264 video data packet in the packet set, perform data extraction on each H264 video data packet according to packet header information of each H264 video data packet and the mapping relationship, to obtain frame data, where the frame data is I frame data or P frame data;
the recombination module is used for recombining the frame data according to a preset H264 coding requirement to obtain an H264data stream;
the filtration analysis module comprises:
a filtering condition selecting submodule, configured to select a filtering condition from the preset filtering condition set as a current filtering condition;
the packet data extraction submodule is used for extracting data of the test packet according to the packet header information of the test packet to obtain packet data if the packet header information of the test packet meets the current filtering condition;
the decoding submodule is used for carrying out H264 decoding on the packet data to obtain image data;
a mapping relation determining sub-module, configured to determine, according to the current filtering condition, a mapping relation between a frame type of the H264 video data packet and header information of the H264 video data packet if the image data is successfully matched with a preset threshold;
and the circulating submodule is used for selecting another filtering condition from a preset filtering condition set as the current filtering condition if the image data is not successfully matched with the preset threshold value, returning back to the step of extracting data of the test packet according to the header information of the test packet if the header information of the test packet meets the current filtering condition, and continuously executing the step of obtaining the packet data until all filtering conditions in the preset filtering condition set are traversed.
6. The PS data stream decoding apparatus according to claim 5, wherein the filtering analysis module includes:
the PS data flow filtering submodule is used for extracting a packet set containing a plurality of H264data packets from the PS data flow according to the packet header information of the PS data packets;
a header information matching submodule, configured to determine that the H264data packet is an H264 audio data packet if the header information of the H264data packet matches preset field information, and otherwise, determine that the H264data packet is the H264 video data packet;
and the filtering submodule is used for deleting the H264 audio data packets from the packet set to obtain a packet set containing a plurality of H264 video data packets.
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the PS data stream decoding method according to any one of claims 1 to 4 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the PS data stream decoding method according to any one of claims 1 to 4.
CN201811010989.9A 2018-08-31 2018-08-31 PS data stream decoding method, device, computer equipment and storage medium Active CN109089153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811010989.9A CN109089153B (en) 2018-08-31 2018-08-31 PS data stream decoding method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109089153A CN109089153A (en) 2018-12-25
CN109089153B true CN109089153B (en) 2022-08-19

Family

ID=64840197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811010989.9A Active CN109089153B (en) 2018-08-31 2018-08-31 PS data stream decoding method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109089153B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716640A (en) * 2010-12-17 2014-04-09 华为技术有限公司 Method and device for detecting frame type
CN107241323A (en) * 2017-06-01 2017-10-10 上海寰视网络科技有限公司 Spell frame method and equipment
CN107404646A (en) * 2016-05-20 2017-11-28 华为技术有限公司 The method, apparatus and headend of video quality assessment
WO2017219896A1 (en) * 2016-06-21 2017-12-28 中兴通讯股份有限公司 Method and device for transmitting video stream
CN108063953A (en) * 2017-12-28 2018-05-22 武汉烽火众智数字技术有限责任公司 The code-transferring method of video code conversion gateway, monitoring system and video code conversion gateway

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716640A (en) * 2010-12-17 2014-04-09 华为技术有限公司 Method and device for detecting frame type
CN107404646A (en) * 2016-05-20 2017-11-28 华为技术有限公司 The method, apparatus and headend of video quality assessment
WO2017219896A1 (en) * 2016-06-21 2017-12-28 中兴通讯股份有限公司 Method and device for transmitting video stream
CN107241323A (en) * 2017-06-01 2017-10-10 上海寰视网络科技有限公司 Spell frame method and equipment
CN108063953A (en) * 2017-12-28 2018-05-22 武汉烽火众智数字技术有限责任公司 The code-transferring method of video code conversion gateway, monitoring system and video code conversion gateway

Also Published As

Publication number Publication date
CN109089153A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
US20210385466A1 (en) Method and device for encoding/decoding an image unit comprising image data represented by a luminance channel and at least one chrominance channel
US11159805B2 (en) Methods and systems for generating regional nesting messages for video pictures
CN111416976B (en) Video decoding method, video encoding method, device, equipment and storage medium
CN110446041B (en) Video encoding and decoding method, device, system and storage medium
US8369404B2 (en) Moving image decoding device and moving image decoding method
RU2573257C2 (en) Image signal decoding device, image signal decoding method, image signal encoding device, image signal encoding method and programme
US20220329792A1 (en) Method and apparatus for encoding and decoding a video stream with subpictures
JP7066786B2 (en) High dynamic range and wide color gamut content transmission in transport streams
EP3962085A2 (en) Image encoding and decoding method, encoding and decoding device, encoder and decoder
WO2015096822A1 (en) Image coding and decoding methods and devices
TW201735608A (en) Methods and systems for generating color remapping information supplemental enhancement information messages for video
CN107172376B (en) Video coding method and device based on screen sharing
US20070098083A1 (en) Supporting fidelity range extensions in advanced video codec file format
EP4136851A1 (en) Adaptive loop filtering for color format support
US20240080487A1 (en) Method, apparatus for processing media data, computer device and storage medium
CN109089153B (en) PS data stream decoding method, device, computer equipment and storage medium
CN108307191B (en) Image data alignment method and device
US11849133B2 (en) Low complexity history usage for rice parameter derivation for high bit-depth video coding
WO2024059998A1 (en) Variable intra-frame (i-frame) time interval and group of picture (gop) length for video coding
US11838553B2 (en) Green metadata signaling
US20220201329A1 (en) Intra prediction using enhanced interpolation filters
CN110233985B (en) Data transmission method and network video monitoring device
CN110235446B (en) Video encoding method, video decoding method and related devices
CN116033170A (en) Video decoding method, video encoding/decoding system, and video decoding device
CN104469399A (en) Method for macro block SKIP type selection in spatial resolution video transcoding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant