CN113645490A - Soft and hard combined multi-channel video synchronous decoding method - Google Patents


Info

Publication number: CN113645490A (granted publication: CN113645490B)
Application number: CN202110697282.5A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 高娟
Applicant and current assignee: Tianjin Jinhang Computing Technology Research Institute
Legal status: Granted; active
Prior art keywords: frame, data, decoding, length, image

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Abstract

A soft and hard combined multi-channel video synchronous decoding method. The invention belongs to video decoding technology under the Linux system and relates to a design method for synchronously decoding multiple video channels under Linux by combining software and hardware. It is characterized in that: first, FFmpeg is ported to the HiSilicon platform; its source code is modified so that it can return parameter frames, and it is adapted to the HiSilicon chip. Second, a network connection is started for data exchange with the host, and a buffering mechanism is applied to the received data so that the multiple video channels stay synchronized. Then the FFmpeg dynamic library is used to dynamically filter the data arriving from the network, removing error frames and producing data packets that combine image data with parameter information. Finally, each complete data packet is passed to the hard decoding module of the HiSilicon chip; the decoded data is retrieved from that module and returned to the host over the network, completing the decoding task.

Description

Soft and hard combined multi-channel video synchronous decoding method
Technical Field
The invention belongs to video decoding technology under the Linux system, and particularly relates to a soft and hard combined multi-channel video synchronous decoding method.
Background
Hi3559AV100 is a professional 8K Ultra HD mobile camera SoC. It provides digital video recording with 8K30/4K120 broadcast-level image quality, supports multi-channel sensor input, supports H.265 encoded output or video-grade RAW data output, integrates high-performance ISP processing, and adopts an advanced low-power process and low-power architecture design, providing users with excellent image processing capability.
Hi3559AV100 supports industry-leading multi-channel 4K sensor input, multi-channel ISP image processing, the HDR10 high-dynamic-range standard, and multi-channel panoramic hardware stitching. While recording 8K30/4K120 video, Hi3559AV100 provides hardened 6-DoF digital image stabilization, reducing reliance on mechanical gimbals.
However, the Hi3559AV100 is a pure hardware decoder: when a protocol frame does not fully conform to the decoding protocol, or when error frames are numerous, decoding efficiency drops or decoding fails altogether. Moreover, when multiple image channels are transmitted simultaneously, their differing transmission rates cause the decoded outputs to fall out of sync. The invention uses the FFmpeg decoding library to extract parameter-frame and data-frame information from the original frames, effectively filters error frames, and assembles complete data packets for the hard decoding module, greatly shortening decoding time; at the same time a buffering mechanism is applied to the image data, effectively solving the problem of asynchronous decoding.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, the invention provides a soft and hard combined multi-channel video synchronous decoding method. The system runs on Linux, uses a HiSilicon Hi3559AV100 chip as the hard decoding module, and uses the FFmpeg decoding library to obtain complete frame information, thereby effectively acquiring compressed-frame parameter information and reducing decoding time. Meanwhile, the received image data is buffered, reducing the desynchronization of the multiple image channels caused by differing transmission rates.
The technical solution of the invention is as follows:
In a first aspect, a soft and hard combined multi-channel video synchronous decoding method includes the following steps:
1) configuring the compilation attributes and parameters of FFmpeg, and porting the FFmpeg dynamic library to the HiSilicon platform;
2) creating a task that receives image data over the network and stores the data in a ring buffer;
3) extracting raw data from the ring buffer and filtering error frames with the FFmpeg dynamic library to obtain complete data packets;
4) screening error frames from the complete data packets obtained by FFmpeg, based on a fault-tolerance strategy;
5) sending the screened complete data packets to the hard decoding module of the HiSilicon platform and decoding the image data packets there;
6) obtaining the decoded channel images and returning the decoded data to the host over the network.
Optionally, the task of receiving image data over the network in step 2) specifically includes:
212) obtaining the receiving IP address and port from a configuration file;
213) creating a network socket;
214) clearing the receive buffer, waiting for image data from the network, and then entering step 215);
215) judging whether the length received this time is greater than zero; if so, proceeding to the next step, otherwise returning to step 213);
216) judging whether the protocol frame header meets the protocol requirements; if not, discarding the image frame, otherwise storing the data in the ring buffer.
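
The header check of step 216) can be sketched as follows. The patent does not publish its protocol frame layout, so purely for illustration this sketch accepts buffers that begin with an H.265 Annex-B start code; a real implementation would test whatever header bytes the transmission protocol actually defines.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the step 216) check: treat a buffer as a
 * valid protocol frame if it opens with an H.265 Annex-B start code
 * (00 00 00 01 or 00 00 01). The real protocol header is not given in
 * the patent text. */
static int frame_header_ok(const uint8_t *buf, size_t len)
{
    if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
        return 1;
    if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
        return 1;
    return 0;
}
```

Frames failing this test are dropped before they ever reach the ring buffer, which is what keeps malformed network data away from the hard decoder.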
Optionally, storing the data in the ring buffer in step 2) specifically includes:
221) judging whether the length len of the data received from the network is greater than 0; if so, proceeding to the next step; if not, exiting and waiting for the next batch of raw compressed image data from the network;
222) locating the channel number of the current data at the byte positions specified by the protocol, and storing that channel number;
223) judging whether the sum of the channel buffer's existing data length cirLen and len is smaller than the buffer's maximum length MAX_LEN; if so, proceeding to the next step; if not, going to step 226);
224) copying the data from the network receive area into the ring buffer, with copy length len;
225) increasing the buffer's total data length by len and advancing the storage head address ptr by len positions;
226) copying the data from the network receive area into the buffer, with copy length equal to MAX_LEN minus cirLen;
227) resetting the write pointer putPtr, moving it to the first address of the buffer;
228) computing the buffer's new total data length cirLen as len minus the copy length of step 226);
229) copying the received array into the buffer again, offset by MAX_LEN minus cirLen bytes, with copy length cirLen;
2210) advancing the write pointer putPtr by cirLen bytes.
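
The wraparound store of steps 221) through 2210) amounts to a byte-oriented ring-buffer write. A minimal sketch follows; the names putPtr, cirLen and MAX_LEN mirror the description, but overflow handling is simplified to a cap here, whereas the patent overwrites the oldest data on wrap.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_LEN 16 /* tiny capacity, for illustration only */

typedef struct {
    uint8_t buf[MAX_LEN];
    size_t  put;    /* write offset (putPtr in the patent's terms) */
    size_t  cirLen; /* bytes currently stored */
} RingBuf;

/* Append up to len bytes, wrapping to the head of the buffer when the
 * end is reached; returns the number of bytes actually stored. */
static size_t ring_write(RingBuf *rb, const uint8_t *data, size_t len)
{
    size_t written = 0;
    if (len > MAX_LEN - rb->cirLen)       /* keep only what fits */
        len = MAX_LEN - rb->cirLen;
    while (written < len) {
        size_t chunk = MAX_LEN - rb->put; /* room before the wrap point */
        if (chunk > len - written)
            chunk = len - written;
        memcpy(rb->buf + rb->put, data + written, chunk);
        rb->put = (rb->put + chunk) % MAX_LEN;
        written += chunk;
    }
    rb->cirLen += written;
    return written;
}
```

One such buffer per channel, as step 2) requires, is what lets each channel absorb its own transmission-rate jitter before decoding.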
Optionally, the screening of error frames from the complete data packets obtained by FFmpeg in step 4) specifically includes:
41) searching the acquired data for a frame header according to the H.265 protocol frame header;
42) searching the acquired data for the frame tail, likewise according to the H.265 protocol frame header;
43) treating a frame with both a header and a tail as a complete frame and recording its image frame type; otherwise discarding the image frame;
44) recording the types of raw data required by each image frame, obtaining the encoding rule;
45) analyzing erroneous image data that does not comply with the rule of step 44):
451) storing the most recent VPS, SPS, PPS and SEI frames that precede the current frame in time;
452) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the current frame is a redundant frame, modifying the first P frame in the encoding rule into an I frame, and combining the VPS, SPS, PPS and SEI frames of step 451) with all the P frames in the encoding rule into one block of image data; this completes the error-frame screening and yields the screened complete data packet; otherwise, going to step 453);
453) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the frame following the current frame is a VPS frame, modifying the first P frame of the current encoding rule into an I frame, and combining the VPS, SPS, PPS and SEI frames of step 451) with all the P frames in the modified encoding rule into one block of image data; this completes the error-frame screening and yields the screened complete data packet; otherwise, going to step 454);
454) if the current frame and the previously stored frames conform to the encoding rule but the current frame is an error frame, discarding the current frame, modifying the first P frame in the encoding rule into an I frame, and combining the two preceding P frames with the VPS, SPS, PPS and SEI frames of step 451) into one block of image data; this completes the error-frame screening and yields the screened complete data packet.
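
The bookkeeping in steps 43) through 45) depends on telling VPS/SPS/PPS/SEI parameter frames apart from picture frames. For H.265 this can be read directly from the NAL unit header: nal_unit_type occupies bits 6..1 of the first header byte (ITU-T H.265, Table 7-1). The helper below is a standard classification, not the patent's own code.

```c
#include <assert.h>
#include <stdint.h>

/* A few HEVC nal_unit_type values relevant to the screening step. */
enum { HEVC_TRAIL_R = 1, HEVC_IDR_W_RADL = 19,
       HEVC_VPS = 32, HEVC_SPS = 33, HEVC_PPS = 34, HEVC_PREFIX_SEI = 39 };

/* nal_unit_type is bits 6..1 of the first byte after the start code. */
static int hevc_nal_type(uint8_t first_header_byte)
{
    return (first_header_byte >> 1) & 0x3F;
}

/* Parameter-set / SEI frames are the ones step 451) caches. */
static int is_parameter_nal(int t)
{
    return t == HEVC_VPS || t == HEVC_SPS || t == HEVC_PPS ||
           t == HEVC_PREFIX_SEI;
}
```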
Optionally, obtaining the complete data packet in step 3) specifically includes:
321) obtaining the image-data buffer head address bufPtr and length bufLen read from the ring buffer, specifically comprising the following steps:
3211) obtaining the current buffer's write pointer position putPtr and total data length cirLen;
3212) judging whether readPtr and putPtr coincide; if so, delaying 1 ms and returning to step 3211); if not, proceeding to the next step;
3213) judging whether the read length readLen is smaller than cirLen; if so, continuing to the next step, otherwise going to step 3217);
3214) judging whether the difference between cirLen and readLen is greater than or equal to the fixed protocol frame length frame_len; if so, continuing to the next step; if not, going to step 3216);
3215) taking the current readPtr as the decoded-image data head address bufPtr with length bufLen equal to frame_len, advancing the pointer readPtr by frame_len bytes, and increasing the read length readLen by frame_len bytes;
3216) taking the current readPtr as the head address bufPtr with length bufLen equal to cirLen minus readLen, advancing readPtr by cirLen minus readLen bytes, and increasing readLen by the difference between cirLen and readLen;
3217) judging whether the sum of readLen and frame_len is less than the buffer's maximum length MAX_LEN; if so, proceeding to the next step; if not, going to step 3219);
3218) taking the current readPtr as the head address bufPtr with data length bufLen equal to frame_len, advancing the read pointer by frame_len bytes, and increasing readLen by frame_len bytes;
3219) taking the current readPtr as the head address bufPtr with data length bufLen equal to MAX_LEN minus readLen, moving the read pointer to the first address of the buffer, and resetting the read length readLen to 0;
322) judging whether the current data length bufLen is greater than 0; if so, continuing to the next step; if not, exiting this round of decoding and waiting for the next batch of raw compressed image data from the network;
323) passing the data array's head address pointer bufPtr and length bufLen to the soft decoding module and segmenting the data into frames with the library function av_parser_parse2; if a complete image-frame data packet is successfully obtained from the array, recording it and continuing to the next step; if not, exiting the decoding process;
324) storing the length ret of the image frame segmented from the data array this time, subtracting ret from the array's total length bufLen, and advancing the head address pointer bufPtr by ret;
325) putting the complete image data packet segmented this time into the queue to be decoded, waiting for the next batch of raw compressed image data from the network, and returning to step 321).
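
The slicing decisions of steps 3213) through 3219) can be isolated as a pure function. The sketch below is one plausible reading of the text, not the patented implementation; the names readLen, cirLen, frame_len and max_len follow the description, and each slice it yields would be handed to av_parser_parse2 by step 323).

```c
#include <assert.h>
#include <stddef.h>

/* Given how much has been read (readLen), how much is available
 * (cirLen), the fixed slice size frame_len and the buffer capacity
 * max_len, return the length of the next slice and update *readLen. */
static size_t next_slice_len(size_t *readLen, size_t cirLen,
                             size_t frame_len, size_t max_len)
{
    size_t n;
    if (*readLen < cirLen) {                     /* step 3213) */
        if (cirLen - *readLen >= frame_len)      /* step 3214) */
            n = frame_len;                       /* step 3215) */
        else
            n = cirLen - *readLen;               /* step 3216) */
        *readLen += n;
    } else if (*readLen + frame_len < max_len) { /* step 3217) */
        n = frame_len;                           /* step 3218) */
        *readLen += n;
    } else {                                     /* step 3219) */
        n = max_len - *readLen;
        *readLen = 0;                            /* wrap to buffer head */
    }
    return n;
}
```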
Optionally, decoding the image data packets with the hard decoding module of the HiSilicon platform in step 5) is specifically:
51) initializing the hard decoding module according to the image parameters and decoding type, configuring the size of its video data buffer, and starting the decoding module;
52) dynamically allocating a buffer buf sized for a video image-frame data packet;
53) cyclically judging whether the decoding channel's current decoding process has completed; if so, exiting the processing flow and going to step 57); if not, proceeding to the next step;
54) setting the parameters of the current frame: stream-end flag, frame-header flag and frame-tail flag;
55) putting the image frame data into buf;
56) calling the dynamic library function to send the data in buf to the decoding module;
57) monitoring the decoding state of the hard decoding module in real time; on a decoding error, soft-restarting the hard decoding module and resetting its parameters; when decoding is normal, calling the library function to obtain the decoded image;
58) calling the library function to stop sending the video stream to the decoding module, closing the decoding channel, unbinding the modules and releasing resources.
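
The monitoring loop of steps 53) through 58) reduces to a small decision rule: feed the next frame while decoding is normal, soft-reset on error so a bad frame cannot leave the decoder stuck, and tear the channel down on completion. The state and action names below are illustrative only, not HiSilicon SDK types.

```c
#include <assert.h>

typedef enum { DEC_OK, DEC_ERROR, DEC_DONE } DecState;
typedef enum { ACT_SEND_NEXT, ACT_SOFT_RESET, ACT_TEARDOWN } DecAction;

/* One pass of the monitor thread: map the observed decoder state to
 * the action the method prescribes. */
static DecAction monitor_step(DecState s)
{
    switch (s) {
    case DEC_ERROR: return ACT_SOFT_RESET; /* step 57): reset and reconfigure */
    case DEC_DONE:  return ACT_TEARDOWN;   /* step 58): close the channel     */
    default:        return ACT_SEND_NEXT;  /* steps 54)-56): feed next buf    */
    }
}
```

This separation of the decision from the SDK calls is also what makes the fault-tolerance policy testable without decoder hardware.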
Optionally, returning the decoded data to the host over the network in step 6) is specifically:
61) creating a decoded-image acquisition thread;
62) entering the image acquisition loop;
63) querying the channel's working state; if it is the reset state, delaying 1 ms and continuing to query; otherwise proceeding to the next step;
64) calling the library function HI_MPI_VDEC_GetFrame to obtain the memory address where the image is stored; on failure, jumping back to step 63); on success, entering the next step;
65) creating a network sending task and sending the data stored at the memory address of step 64) to the host.
In a second aspect, a processing apparatus comprises:
a memory for storing a computer program;
a processor for calling and running the computer program from the memory to perform the method of the first aspect.
A computer readable storage medium having stored thereon a computer program or instructions which, when executed, implement the method of the first aspect.
A computer program product comprising instructions which, when the computer program product is run on a computer, cause the computer to perform the method of the first aspect.
Compared with the prior art, the invention has the advantages that:
the method can realize the video decoding and transmission problem under the linux system, and the method is verified by an algorithm and is tested by an experiment. The result shows that the scheme can adopt a soft decoding method to dynamically filter the error frame and obtain a complete data packet in order to solve the problem of image decoding, and uses a hard decoding module of a chip to complete the decoding process, thereby effectively reducing the decoding time. And synchronizing the multi-path image data decoding process by using a buffering mechanism on the received data.
Drawings
Fig. 1 is a flow chart of the software and hardware combined multi-channel video synchronous decoding method of the present invention.
Detailed Description
Aiming at the properties of the hard decoding module of the HiSilicon Hi3559AV100 chip, and combining them with the characteristics of soft decoding, a multi-channel video synchronous decoding method based on combined soft and hard decoding under the Linux system is designed and implemented. The method comprises the following steps:
1) Configure the compilation attributes and parameters of FFmpeg and port the FFmpeg dynamic library to the HiSilicon platform. First, configure the FFmpeg compilation attributes. Then modify the FFmpeg source code so that it can return parameter frames, and cross-compile it. Finally, obtain the FFmpeg decoding dynamic library and copy it to the HiSilicon development board. Step 1) configures the soft-decoding environment on the board and ports the board's soft-decoding library FFmpeg to fit the HiSilicon platform.
2) Create a task that receives image data over the network and stores the data in a ring buffer. First, obtain the IP address and port number for the transmission from the configuration file. Then block while waiting for image data from the network. Finally, store the data in the buffer. Step 2) creates the network receiving task and receives the data of the different channels separately, storing each in its own ring buffer; this synchronizes the data reception of the channels and reduces the decoding desynchronization caused by differing transmission rates.
3) Extract raw data from the ring buffer and filter error frames with the FFmpeg dynamic library to obtain complete data packets.
First, initialize the FFmpeg environment. Second, create a data-packet acquisition thread, take the raw data out of the ring buffer, and dynamically filter error frames with the FFmpeg dynamic library to obtain complete compressed-image data packets. Step 3) performs fixed-protocol-length accesses on the ring buffer, creates the FFmpeg usage environment, and specifies the decoder required at start-up, so that the raw data can be parsed against the protocol. Meanwhile, raw data is cyclically fetched from the network receive area; a library function is called to determine the length of data that can be assembled into a complete image frame, and that length is removed from the ring buffer data area, repeating until the data area is empty.
4) Screen error frames from the complete data packets obtained by FFmpeg, based on a fault-tolerance strategy.
5) Send the screened complete data packets to the hard decoding module of the HiSilicon platform and decode the image data packets there. After obtaining the complete data packet extracted by FFmpeg, send the video stream to the decoding module through the HiSilicon dynamic library function. Start the decoding module, monitor its working state in real time, and soft-reset the decoder according to the decoding condition, so that an error frame cannot leave the decoder stuck and unable to continue. Step 5) starts the chip's hard decoding module, copies and caches the raw image data into an image buffer dynamically sized for the image to be decoded, and performs hard decoding with this configuration. At the same time, a real-time monitoring thread watches for the decoding module being stalled by error frames, analyzes its decoding state, and applies a soft reset so that the decoding module can keep working continuously.
6) Obtain the decoded channel images and return the decoded data to the host over the network.
Create a task that acquires decoded images, reads the decoder's working state in real time, calls the library function to fetch the decoded image from the decoding channel, and sends the decoded image data to the host through the network sending task. Through the above steps, the video decoding function under the Linux system is realized. Step 6) starts the network sending task and sends the decoded images acquired from the decoding module in real time to the host over the network, completing the data transmission task.
The decoding of the image data packet by using the hard decoding module in the haisi platform in the step 5) is specifically:
51) initializing a hard decoding module according to the image parameters and the decoding type, configuring the size of a video data buffer area of the hard decoding module, and starting the decoding module;
52) dynamically applying for the size buf of a buffer area of a video image frame data packet;
53) cyclically judging whether the current decoding process of the decoding channel has finished, if so, exiting the processing flow and carrying out step 57), and if not, carrying out the next step;
54) setting parameters of the current frame: stream end identifier, frame head identifier, frame tail identifier;
55) putting the image frame data into buf;
56) calling a dynamic library function to send buf data to a decoding module;
57) monitoring the decoding state of the hard decoding module in real time, if the decoding is wrong, restarting the hard decoding module in a soft mode and resetting parameters, and if the decoding is normal, calling a library function to obtain a decoded image;
58) and calling a library function to stop sending the video stream to the decoding module, closing a decoding channel, unbinding the binding relationship of each module and clearing resources.
Said step 6) of returning the decoded data to the host using the network specifically includes:
61) creating a decoding image obtaining thread;
62) entering an image acquisition cycle;
63) inquiring the working state in the channel, if the working state is the reset state, delaying for 1ms, continuously inquiring the reset state, and if not, carrying out the next step;
64) calling the library function HI_MPI_VDEC_GetFrame to acquire the memory address where the image is stored; if the call fails, jumping to step 63), and if it succeeds, entering the next step;
65) creating a network sending task, and sending the data stored in the memory address in the step 64) to the host.
A processing apparatus, comprising:
a memory for storing a computer program;
a processor for calling and running the computer program from the memory to perform the method described above.
A computer-readable storage medium having stored thereon a computer program or instructions which, when executed, implement the above-described method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described above.
In order to solve the problem of synchronous video decoding in a Linux system, a method is adopted that combines ffmpeg soft decoding with hard decoding by the HiSilicon chip module, transmits data over PCIE, and stores the data into a cache. The invention is further described below, as shown in fig. 1.
1. Transplanting ffmpeg to the HiSilicon platform
11) First, ffmpeg compiling attribute is configured, and parameters are configured according to the platform type, the cpu type, the codec attribute, the format conversion attribute and the cross compiling attribute.
12) Modifying the avcodec.h file, adding a parameter-frame length SEI_len and an array SEI_BUF, and determining the parameter frame size SEI_BUF_SIZE according to application layer requirements;
13) adding a parameter-frame acquiring function in the decode_nal_sei_prefix function in the hevc_sei.c file: obtaining the parameter size in the function, assigning size to SEI_len, and judging whether size is smaller than or equal to SEI_BUF_SIZE; if the condition is met, copying data from the context parameter array gb into SEI_BUF, wherein the copy length is size and the subscript i of the copied SEI_BUF corresponds to the index of the gb array divided by 8, i.e. SEI_BUF[i] = gb_buf[index/8];
14) Then, executing the configure command; the generated decoding libraries libavcodec, libavformat, libavutil and libswscale are located under the lib subfolder of the configuration folder.
15) Finally, copying the dynamic library to a/usr/lib path of a decoding board card;
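The byte copy described in step 13) can be sketched as follows. This is a minimal illustration of copying `size` bytes of an SEI payload out of a bit reader whose read position is expressed in bits, so the byte offset is the bit index divided by 8; the names `gb_buf`, `bit_index` and the buffer size are illustrative assumptions, not the exact FFmpeg field names.

```c
#include <stddef.h>
#include <stdint.h>

#define SEI_BUF_SIZE 256           /* chosen per application-layer needs (step 12) */

static uint8_t SEI_BUF[SEI_BUF_SIZE];
static size_t  SEI_len;

/* Copy `size` bytes of SEI payload out of the decoder's bitstream buffer.
 * `gb_buf` is the bit reader's backing buffer and `bit_index` its current
 * read position in bits, so the byte offset is bit_index / 8 (step 13).
 * Returns 0 on success, -1 if the payload does not fit in SEI_BUF. */
int copy_sei_payload(const uint8_t *gb_buf, size_t bit_index, size_t size)
{
    if (size > SEI_BUF_SIZE)
        return -1;                 /* condition from step 13: size <= SEI_BUF_SIZE */
    SEI_len = size;
    for (size_t i = 0; i < size; i++)
        SEI_BUF[i] = gb_buf[bit_index / 8 + i];
    return 0;
}
```

In the actual patch the copy sits inside decode_nal_sei_prefix, where the bit reader has already consumed the SEI header, so the byte offset computed from the bit position points at the payload start.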
2. creating a network to receive image data and storing the data in a ring buffer
21) Creating a network data receiving thread, which comprises the following specific steps:
212) firstly, acquiring a receiving ip address and a receiving port in a configuration file;
213) then, a network socket is created;
214) secondly, clearing the receiving buffer and blocking to receive the image data sent by the network;
215) judging whether the length received this time is greater than zero, if so, carrying out the next step, otherwise, returning to the step 213);
216) judging whether the protocol frame header meets the protocol requirement, if not, discarding the frame;
22) storing data received by a network into a buffer zone, which comprises the following steps:
221) judging whether the length len of the data received by the network is greater than 0, if so, carrying out the next step, and if not, exiting; waiting for the next network transmission of the original compressed image data;
222) searching a channel number of the current data according to bytes specified by a protocol, and storing the channel number;
223) judging whether the sum of the existing data length cirLen of the data buffer of this channel number and len is smaller than the maximum buffer length MAX_LEN, if so, carrying out the next step, and if not, carrying out step 226);
224) copying the data of the network receiving area to the ring buffer, wherein the copy length is len.
225) Increasing the total data length of the buffer by len, and also moving the head address ptr of the stored data (initialized to the buffer head address) by len positions.
226) Copying the data of the network receiving area to the buffer, wherein the copy length is MAX_LEN minus cirLen;
227) resetting the write pointer putPtr, and moving the pointer to the head address of the buffer;
228) calculating the new total data length cirLen of the buffer as the difference between len and the copy length in step 226);
229) copying the received array to the buffer again, wherein the array position is moved by MAX_LEN-cirLen bytes and the copy length is cirLen;
2210) moving the write pointer putPtr forward by cirLen bytes.
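Steps 223)-2210) describe a ring-buffer write with a single wraparound: copy straight in while the data fits, otherwise fill to the end of the buffer and continue from the head. A condensed sketch follows; the struct layout and field names are illustrative assumptions (a small MAX_LEN is used so the wrap path is easy to follow), and it assumes each network packet is shorter than the ring.

```c
#include <stdint.h>
#include <string.h>

#define MAX_LEN 8u   /* small ring so the wraparound path is easy to follow */

typedef struct {
    uint8_t  data[MAX_LEN];
    uint8_t *putPtr;   /* write pointer (steps 227, 2210) */
    size_t   cirLen;   /* total data length currently stored */
} ring_t;

static void ring_init(ring_t *r) { r->putPtr = r->data; r->cirLen = 0; }

/* Write `len` bytes received from the network into the ring buffer,
 * following steps 223)-2210): copy straight when it fits, otherwise
 * fill to the end of the buffer and wrap to the head. */
static void ring_write(ring_t *r, const uint8_t *src, size_t len)
{
    if (r->cirLen + len < MAX_LEN) {               /* step 223 */
        memcpy(r->putPtr, src, len);               /* step 224 */
        r->putPtr += len;                          /* step 225 */
        r->cirLen += len;
    } else {
        size_t first = MAX_LEN - r->cirLen;        /* step 226: fill to end */
        memcpy(r->putPtr, src, first);
        r->putPtr = r->data;                       /* step 227: back to head */
        r->cirLen = len - first;                   /* step 228: wrapped remainder */
        memcpy(r->putPtr, src + first, r->cirLen); /* step 229 */
        r->putPtr += r->cirLen;                    /* step 2210 */
    }
}
```

Note the design choice implied by the patent: on wrap, cirLen is redefined as the wrapped remainder rather than the total occupancy, and the reader compensates for this when it wraps its own pointer in step 3219).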
3. Obtaining a complete data packet of a compressed image using an ffmpeg dynamic library
Original data is taken out from the annular buffer area, and an ffmpeg dynamic library is used for filtering error frames to obtain a complete data packet;
31) initializing decoding library usage environment
First, a decoder type is set, and an HEVC (h265) type decoder is adopted. Then, the context environment of the decoder is initialized, and the image frame storage space is dynamically applied.
32) Creating an ffmpeg acquisition data packet thread, wherein the thread comprises the following specific flows:
321) acquiring the image data buffer first address bufPtr and the length bufLen read from the ring buffer, specifically including the following steps:
3211) acquiring the write pointer position putPtr of the current buffer and the total data length cirLen of the buffer;
3212) judging whether readPtr and putPtr are consistent, if so, delaying 1ms and returning to step 3211), and if not, carrying out the next step.
3213) Judging whether the read data length readLen is smaller than cirLen, if so, continuing to the next step, and if not, carrying out step 3217);
3214) judging whether the difference between cirLen and readLen is greater than or equal to the fixed protocol frame length frame_len, if so, continuing to the next step, and if not, carrying out step 3216);
3215) taking the current readPtr as the decoded image data head address bufPtr, with the length bufLen equal to frame_len; moving the pointer readPtr by frame_len bytes, and increasing the read length readLen by frame_len bytes.
3216) Taking the current readPtr as the decoded image data head address bufPtr, with the length bufLen equal to cirLen minus readLen; moving readPtr by cirLen-readLen bytes, and increasing the read length readLen by cirLen-readLen bytes.
3217) Judging whether the sum of readLen and frame_len is smaller than the maximum buffer length MAX_LEN, if so, carrying out the next step, and if not, carrying out step 3219);
3218) taking the current readPtr as the decoded image data head address bufPtr, with the data length bufLen equal to frame_len; moving the read pointer by frame_len bytes, and increasing the read data length readLen by frame_len bytes.
3219) Taking the current readPtr as the decoded image data head address bufPtr, with the data length bufLen equal to MAX_LEN minus readLen; moving the read pointer to the head address of the buffer, and setting the read length readLen to 0.
322) Judging whether the current data length bufLen is larger than 0, if so, continuing to the next step, and if not, exiting the decoding process of this data and waiting for the original compressed image data transmitted by the network next time.
323) Transmitting the data array head address pointer bufPtr and the length bufLen to the soft decoding module, and segmenting the data by frames using the library function av_parser_parse2; if a complete image frame data packet can be successfully obtained from the array, recording it and continuing to the next step, and if not, exiting the decoding process.
324) Storing the length ret of the image frame segmented from the data array this time, removing ret from the total length bufLen of the data array, and moving the head address pointer bufPtr forward by ret bytes.
325) Putting the complete image data packet segmented this time into the queue to be decoded, waiting for the original compressed image data transmitted by the network next time, and returning to step 321);
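The read-side branching of steps 3213)-3219) boils down to: hand the caller either one fixed-length protocol frame or the tail of the unread region, then advance the read state, wrapping back to the buffer head when the end is reached. A condensed sketch follows, with illustrative field names and sizes (the patent's own MAX_LEN and frame_len values are not given):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_LEN   1024u   /* ring capacity, illustrative */
#define FRAME_LEN 256u    /* fixed protocol frame length, illustrative */

typedef struct {
    uint8_t       *base;      /* ring buffer head address */
    const uint8_t *readPtr;   /* current read position */
    size_t         readLen;   /* bytes read since the last wrap */
} reader_t;

/* Select the next chunk to hand to the soft decoder (steps 3213-3219).
 * Writes the chunk head address to *bufPtr and its length to *bufLen. */
static void ring_read(reader_t *rd, size_t cirLen,
                      const uint8_t **bufPtr, size_t *bufLen)
{
    *bufPtr = rd->readPtr;
    if (rd->readLen < cirLen) {                       /* step 3213 */
        if (cirLen - rd->readLen >= FRAME_LEN)        /* step 3214 */
            *bufLen = FRAME_LEN;                      /* step 3215: one frame */
        else
            *bufLen = cirLen - rd->readLen;           /* step 3216: the tail */
        rd->readPtr += *bufLen;
        rd->readLen += *bufLen;
    } else if (rd->readLen + FRAME_LEN < MAX_LEN) {   /* step 3217 */
        *bufLen = FRAME_LEN;                          /* step 3218 */
        rd->readPtr += FRAME_LEN;
        rd->readLen += FRAME_LEN;
    } else {                                          /* step 3219: wrap */
        *bufLen = MAX_LEN - rd->readLen;
        rd->readPtr = rd->base;
        rd->readLen = 0;
    }
}
```

The caller then passes bufPtr/bufLen to av_parser_parse2 as in step 323); step 3212)'s readPtr/putPtr equality check (delay 1ms and retry) guards this routine against reading an empty buffer.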
4. screening of error frames of complete data packets acquired by ffmpeg based on fault-tolerant strategy
And screening error frames of the complete data packet acquired by the ffmpeg based on a fault-tolerant strategy. The method comprises the following specific steps:
41) searching a frame header in the acquired data according to the h265 protocol frame header;
42) searching a frame tail in the acquired data according to the h265 protocol frame header;
43) taking a frame with both the frame head and the frame tail found as a complete frame and recording the image frame type; if not, discarding the image frame.
44) Recording the type of raw data required for each image frame: the vps frame, sps frame, pps frame and sei frame, and the numbers of I frames and P frames, i.e. the encoding rule Encode{vps, sps, pps, sei, N×I, N×P}.
45) Analyzing error image data that does not conform to the rule in step 44) (image frames incomplete due to packet loss), and processing according to the following rules:
451) storing a vps frame, an sps frame, a pps frame and a sei frame which are earlier than the current frame in time and are the latest;
452) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the current frame is a redundant frame, modifying the first p frame in the encoding rule into an I frame, and combining the vps frame, sps frame, pps frame and sei frame obtained in step 451) with all the p frames in the encoding rule as the image data; the error frame screening is completed and the screened complete data packet is obtained; otherwise, going to step 453);
453) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the frame after the current frame is a vps frame, modifying the first p frame of the current encoding rule into an I frame, and combining the vps frame, sps frame, pps frame and sei frame of step 451) with all the p frames in the modified encoding rule as the image data; the error frame screening is completed and the screened complete data packet is obtained; otherwise, going to step 454);
454) if the current frame and the previously stored frame accord with the coding rule and the current frame is an error frame, discarding the current frame, modifying the first p frame in the coding rule into an I frame, and combining the previous two p frames and the vps frame, the sps frame, the pps frame and the sei frame in the step 451) together to form image data; and finishing the error frame screening work to obtain the screened complete data packet.
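The frame-type bookkeeping in steps 41)-45) relies on identifying each H.265 NAL unit. In an Annex-B stream a NAL unit starts after a 00 00 01 (or 00 00 00 01) start code, and its type sits in bits 1-6 of the first header byte: nal_unit_type = (byte >> 1) & 0x3F, with VPS=32, SPS=33, PPS=34, prefix SEI=39, IDR=19/20. A small classifier along these lines follows; the enum and the set of types mapped to "P frame" are illustrative simplifications, not the patent's exact table.

```c
#include <stddef.h>
#include <stdint.h>

typedef enum { FR_VPS, FR_SPS, FR_PPS, FR_SEI, FR_I, FR_P, FR_UNKNOWN } frame_kind;

/* Classify an Annex-B H.265 frame by its first NAL unit.
 * The NAL header follows a 00 00 01 or 00 00 00 01 start code;
 * nal_unit_type occupies bits 1-6 of the first header byte. */
static frame_kind classify_h265(const uint8_t *buf, size_t len)
{
    size_t off;
    if (len >= 5 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
        off = 4;
    else if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
        off = 3;
    else
        return FR_UNKNOWN;                   /* step 43: no valid frame head */

    int nal_type = (buf[off] >> 1) & 0x3F;
    switch (nal_type) {
    case 32: return FR_VPS;
    case 33: return FR_SPS;
    case 34: return FR_PPS;
    case 39: return FR_SEI;                  /* prefix SEI */
    case 19: case 20: case 21: return FR_I;  /* IDR_W_RADL, IDR_N_LP, CRA */
    case 0:  case 1:  return FR_P;           /* TRAIL_N, TRAIL_R */
    default: return FR_UNKNOWN;
    }
}
```

With this classifier the fault-tolerant strategy can record the sequence of types for each packet and check it against Encode{vps, sps, pps, sei, N×I, N×P} before deciding whether to repair or discard.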
5. Sending the screened complete data packet to a hard decoding module
51) Initializing a hard decoding module according to the image parameters and the decoding type, configuring the size of a video data buffer area of the hard decoding module, and starting the decoding module;
52) dynamically applying for the size buf of a buffer area of a video image frame data packet;
53) cyclically judging whether the current decoding process of the decoding channel has finished, if so, exiting the processing flow and carrying out step 57), and if not, carrying out the next step;
54) setting parameters of the current frame: stream end identifier, frame head identifier, frame tail identifier;
55) putting the image frame data into buf;
56) calling a dynamic library function to send buf data to a decoding module;
57) monitoring the decoding state of the hard decoding module in real time, if the decoding is wrong, restarting the hard decoding module in a soft mode and resetting parameters, and if the decoding is normal, calling a library function to obtain a decoded image;
58) and calling a library function to stop sending the video stream to the decoding module, closing a decoding channel, unbinding the binding relationship of each module and clearing resources.
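Steps 51)-58) amount to a send loop with a watchdog: feed one framed packet at a time, and if the decoder reports an error, soft-reset it and retry (step 57). The sketch below mocks the decoder behind function pointers, since the actual HiSilicon MPI channel and stream-sending calls are platform-specific; every name here is an assumption for illustration, not the vendor API.

```c
#include <stddef.h>
#include <stdint.h>

/* Mocked hard-decoder interface standing in for the platform MPI calls. */
typedef struct {
    int  (*send)(const uint8_t *buf, size_t len);  /* returns 0 on success */
    void (*soft_reset)(void);                      /* step 57: restart + reset params */
} hw_decoder;

/* Send one image frame packet, soft-resetting and retrying on decode
 * error (step 57), up to `max_retries` times. Returns 0 on success. */
static int send_frame(hw_decoder *dec, const uint8_t *buf, size_t len,
                      int max_retries)
{
    for (int attempt = 0; attempt <= max_retries; attempt++) {
        if (dec->send(buf, len) == 0)
            return 0;              /* decoded normally: caller fetches the image */
        dec->soft_reset();         /* decoding error: soft restart, reset params */
    }
    return -1;                     /* give up after repeated failures */
}

/* Example mock decoder: fails twice, then succeeds. */
static int fails_left = 2, resets = 0;
static int  mock_send(const uint8_t *b, size_t n)
{ (void)b; (void)n; return fails_left-- > 0 ? -1 : 0; }
static void mock_reset(void) { resets++; }
```

The point of the indirection is that the same watchdog loop works whether the backend is the hardware VDEC channel or a software fallback, which is the "soft restart on error" behaviour step 57) requires.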
6. Sending the decoded image to the host computer through the network
And establishing a task of acquiring a decoded image, reading the working state of a decoder in real time, calling a library function to acquire the decoded image in a decoding channel, and sending the decoded image data to a host through a network sending task. The method comprises the following specific steps:
61) creating a decoding image obtaining thread;
62) entering an image acquisition cycle;
63) inquiring the working state in the channel, if the working state is the reset state, delaying for 1ms, continuously inquiring the reset state, and if not, carrying out the next step;
64) calling the library function HI_MPI_VDEC_GetFrame to acquire the memory address where the image is stored; if the call fails, jumping to step 63), and if it succeeds, entering the next step;
65) creating a network sending task, and sending the data stored in the memory address in the step 64) to the host.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention. Those skilled in the art can make variations and modifications to the present invention using the methods and technical content disclosed above without departing from its spirit and scope.
Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.

Claims (10)

1. A soft and hard combined multi-channel video synchronous decoding method is characterized by comprising the following steps:
1) the compiling attribute and parameters of ffmpeg are configured, and the ffmpeg dynamic library is transplanted to the HiSilicon platform;
2) establishing a task of receiving image data by a network, and storing original data into an annular buffer area;
3) extracting original data from the annular buffer area, and filtering error frames by using an ffmpeg dynamic library to obtain a complete data packet;
4) screening error frames of the complete data packet acquired by the ffmpeg based on a fault-tolerant strategy;
5) decoding the image data packet by using a hard decoding module in the HiSilicon platform, and sending the screened complete data packet to the hard decoding module;
6) and acquiring the image through the decoding channel, and returning the decoded data to the host computer by using a network.
2. The method according to claim 1, wherein the task of creating a network to receive image data is specifically:
212) acquiring a receiving ip address and a receiving port in a configuration file;
213) creating a network socket;
214) clearing the receiving buffer, waiting to receive the image data sent by the network, and then entering step 215);
215) judging whether the length received this time is greater than zero, if so, carrying out the next step, otherwise, returning to the step 213);
216) and judging whether the protocol frame header meets the protocol requirements, if not, discarding the image frame, otherwise, storing the data into a ring buffer.
3. The method according to claim 1, wherein the step 2) stores data into a ring buffer, specifically:
221) judging whether the length len of the data received by the network is greater than 0, if so, carrying out the next step, and if not, exiting; waiting for the original data transmitted by the network next time;
222) searching a channel number of the current data according to bytes specified by a protocol, and storing the channel number;
223) judging whether the sum of the existing data length cirLen of the data buffer of the channel number and len is smaller than the maximum buffer length MAX_LEN, if so, carrying out the next step, and if not, carrying out step 226);
224) copying the data of the network receiving area to the ring buffer, wherein the copy length is len;
225) increasing the total data length of the buffer by len, and meanwhile moving the head address ptr of the stored data by len positions;
226) copying the data of the network receiving area to the buffer, wherein the copy length is MAX_LEN minus cirLen;
227) resetting the write pointer putPtr, and moving the pointer to the head address of the buffer;
228) calculating the new total data length cirLen of the buffer as the difference between len and the copy length in step 226);
229) copying the received array to the buffer again, wherein the array position is moved by MAX_LEN minus cirLen bytes, and the copy length is cirLen;
2210) moving the write pointer putPtr forward by cirLen bytes.
4. The soft-hard combined multi-channel video synchronous decoding method according to claim 1, wherein the step 4) of performing error frame screening on the complete data packet obtained by ffmpeg specifically comprises:
41) searching a frame header in the acquired data according to the h265 protocol frame header;
42) searching a frame tail in the acquired data according to the h265 protocol frame header;
43) taking a frame with both the frame head and the frame tail found as a complete frame and recording the image frame type, and if not, discarding the image frame;
44) recording the type of original data required by each image frame to obtain a coding rule;
45) analyzing erroneous image data that do not comply with the encoding rule in step 44);
451) storing a vps frame, an sps frame, a pps frame and a sei frame which are earlier than the current frame in time and are the latest;
452) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the current frame is a redundant frame, modifying the first p frame in the encoding rule into an I frame, and combining the vps frame, sps frame, pps frame and sei frame obtained in step 451) with all the p frames in the encoding rule as the image data; completing the error frame screening to obtain the screened complete data packet; otherwise, going to step 453);
453) if the current frame and the previously stored frames do not conform to the encoding rule Encode and the frame after the current frame is a vps frame, modifying the first p frame of the current encoding rule into an I frame, and combining the vps frame, sps frame, pps frame and sei frame of step 451) with all the p frames in the modified encoding rule as the image data; completing the error frame screening to obtain the screened complete data packet; otherwise, going to step 454);
454) if the current frame and the previously stored frame accord with the coding rule and the current frame is an error frame, discarding the current frame, modifying the first p frame in the coding rule into an I frame, and combining the previous two p frames and the vps frame, the sps frame, the pps frame and the sei frame in the step 451) together to form image data; and finishing the error frame screening work to obtain the screened complete data packet.
5. The method according to claim 1, wherein the step 3) of obtaining the complete data packet specifically comprises:
321) acquiring the image data buffer first address bufPtr and the length bufLen read from the ring buffer, specifically including the following steps:
3211) acquiring the write pointer position putPtr of the current buffer and the total data length cirLen of the buffer;
3212) judging whether readPtr and putPtr are consistent, if so, delaying 1ms and returning to step 3211), otherwise, carrying out the next step;
3213) judging whether the read data length readLen is smaller than cirLen, if so, continuing the next step, otherwise, carrying out step 3217);
3214) judging whether the difference between cirLen and readLen is greater than or equal to the fixed protocol frame length frame_len, if so, continuing the next step, and if not, carrying out step 3216);
3215) taking the current readPtr as the decoded image data head address bufPtr, wherein the length bufLen is frame_len, meanwhile moving the pointer readPtr by frame_len bytes, and increasing the read length readLen by frame_len bytes;
3216) taking the current readPtr as the decoded image data head address bufPtr, wherein the length bufLen is cirLen minus readLen, meanwhile moving readPtr by cirLen minus readLen bytes, and increasing the read length readLen by the difference of cirLen and readLen;
3217) judging whether the sum of readLen and frame_len is smaller than the maximum buffer length MAX_LEN, if so, carrying out the next step, and if not, carrying out step 3219);
3218) taking the current readPtr as the decoded image data head address bufPtr, wherein the data length bufLen is frame_len, moving the read pointer by frame_len bytes, and increasing the read data length readLen by frame_len bytes;
3219) taking the current readPtr as the decoded image data head address bufPtr, wherein the data length bufLen is the difference obtained by subtracting readLen from MAX_LEN, moving the read pointer to the head address of the buffer, and setting the read length readLen to 0;
322) judging whether the current data length bufLen is larger than 0, if so, continuing the next step, and if not, exiting the decoding process of the data and waiting for the original data transmitted by the network next time;
323) transmitting the data array head address pointer bufPtr and the length bufLen to the soft decoding module, and segmenting the data by frames using the library function av_parser_parse2; if a complete image frame data packet can be successfully obtained from the array, recording it and continuing the next step, and if not, exiting the decoding process;
324) storing the length ret of the image frame segmented from the data array this time, removing ret from the total length bufLen of the data array, and moving the head address pointer bufPtr forward by ret bytes;
325) and putting the image complete data packet segmented this time into a queue to be decoded, waiting for the original compressed image data transmitted by the network next time, and returning to the step 321).
6. The method according to claim 1, wherein the step 5) uses a hard decoding module in the HiSilicon platform to decode the image data packet, specifically:
51) initializing a hard decoding module according to the image parameters and the decoding type, configuring the size of a video data buffer area of the hard decoding module, and starting the decoding module;
52) dynamically applying for the size buf of a buffer area of a video image frame data packet;
53) cyclically judging whether the current decoding process of the decoding channel has finished, if so, exiting the processing flow and carrying out step 57), and if not, carrying out the next step;
54) setting parameters of the current frame: stream end identifier, frame head identifier, frame tail identifier;
55) putting the image frame data into buf;
56) calling a dynamic library function to send buf data to a decoding module;
57) monitoring the decoding state of the hard decoding module in real time, if the decoding is wrong, restarting the hard decoding module in a soft mode and resetting parameters, and if the decoding is normal, calling a library function to obtain a decoded image;
58) and calling a library function to stop sending the video stream to the decoding module, closing a decoding channel, unbinding the binding relationship of each module and clearing resources.
7. The soft-hard combined multi-channel video synchronous decoding method according to claim 1, wherein: said step 6) of returning the decoded data to the host using the network specifically includes:
61) creating a decoding image obtaining thread;
62) entering an image acquisition cycle;
63) inquiring the working state in the channel, if the working state is the reset state, delaying for 1ms, and continuously inquiring the reset state until the working state is not the reset state and then carrying out the next step;
64) calling the library function HI_MPI_VDEC_GetFrame to acquire the memory address where the image is stored; if the call fails, jumping to step 63), and if it succeeds, entering the next step;
65) creating a network sending task, and sending the data stored in the memory address in the step 64) to the host.
8. A processing apparatus, comprising:
a memory for storing a computer program;
a processor for calling and running the computer program from the memory to perform the method of any of claims 2 to 7.
9. A computer-readable storage medium, having stored thereon a computer program or instructions, which, when executed, implement the method of any one of claims 2 to 7.
10. A computer program product, characterized in that it comprises instructions which, when run on a computer, cause the computer to carry out the method of any one of claims 2 to 7.
CN202110697282.5A 2021-06-23 2021-06-23 Soft-hard combined multichannel video synchronous decoding method Active CN113645490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110697282.5A CN113645490B (en) 2021-06-23 2021-06-23 Soft-hard combined multichannel video synchronous decoding method

Publications (2)

Publication Number Publication Date
CN113645490A true CN113645490A (en) 2021-11-12
CN113645490B CN113645490B (en) 2023-05-09

Family

ID=78416127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110697282.5A Active CN113645490B (en) 2021-06-23 2021-06-23 Soft-hard combined multichannel video synchronous decoding method

Country Status (1)

Country Link
CN (1) CN113645490B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339408A (en) * 2021-11-26 2022-04-12 惠州华阳通用电子有限公司 Video decoding method
CN115373645A (en) * 2022-10-24 2022-11-22 济南新语软件科技有限公司 Complex data packet operation method and system based on dynamic definition
CN117255222A (en) * 2023-11-20 2023-12-19 上海科江电子信息技术有限公司 Digital television monitoring method, system and application
CN117560501A (en) * 2024-01-11 2024-02-13 杭州国芯科技股份有限公司 Multi-standard video decoder architecture

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1387338A (en) * 2001-03-29 2002-12-25 松下电器产业株式会社 Data reproducing device and method
CN101159843A (en) * 2007-10-29 2008-04-09 中兴通讯股份有限公司 Image switching method and system for improving video switch effect in video session
CN104104913A (en) * 2014-07-14 2014-10-15 华侨大学 Intelligent distributed type video collecting system based on Android system
CN106792124A (en) * 2016-12-30 2017-05-31 合网络技术(北京)有限公司 Multimedia resource decodes player method and device
CN108206956A (en) * 2016-12-20 2018-06-26 深圳市中兴微电子技术有限公司 A kind of processing method and processing device of video decoding error
CN108235096A (en) * 2018-01-18 2018-06-29 湖南快乐阳光互动娱乐传媒有限公司 The mobile terminal hard decoder method that intelligently the soft decoding of switching plays video
WO2019228078A1 (en) * 2018-05-31 2019-12-05 腾讯科技(深圳)有限公司 Video transcoding system and method, apparatus, and storage medium
WO2020078165A1 (en) * 2018-10-15 2020-04-23 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and computer-readable medium
CN112261460A (en) * 2020-10-19 2021-01-22 天津津航计算技术研究所 PCIE-based multi-channel video decoding scheme design method
CN112511840A (en) * 2020-12-24 2021-03-16 北京睿芯高通量科技有限公司 Decoding system and method based on FFMPEG and hardware acceleration equipment
US20210126743A1 (en) * 2019-10-23 2021-04-29 Mediatek Singapore Pte. Ltd. Apparatus and methods for harq in a wireless network

Also Published As

Publication number Publication date
CN113645490B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113645490B (en) Soft-hard combined multichannel video synchronous decoding method
CN112261460B (en) PCIE (peripheral component interface express) -based multi-channel video decoding scheme design method
US6442329B1 (en) Method and apparatus for traversing a multiplexed data packet stream
CN112565627B (en) Multi-channel video centralized display design method based on bitmap superposition
JP4846002B2 (en) File transfer system and file transfer method
WO2019149066A1 (en) Video playback method, terminal apparatus, and storage medium
CN114095784A (en) H.265 format video stream transcoding playing method, system, device and medium
CN112968750B (en) Satellite image compressed data block analysis method and system based on AOS frame
CN115802045A (en) Data packet filtering method and decoding method based on data packet filtering method
KR20160023777A (en) Picture referencing control for video decoding using a graphics processor
CN113596469A (en) Soft-hard combined and high-efficiency transmission video decoding method
CN113727116B (en) Video decoding method based on filtering mechanism
CN113645467B (en) Soft and hard combined video decoding method
CN113747171B (en) Self-recovery video decoding method
CN112291483B (en) Video pushing method and system, electronic equipment and readable storage medium
CN113727114B (en) Transcoded video decoding method
CN113747171A (en) Self-recovery video decoding method
CN113645467A (en) Soft and hard combined video decoding method
CN113727116A (en) Video decoding method based on filtering mechanism
CN113727114A (en) Transcoding video decoding method
CN113727115B (en) Efficient transcoded video decoding method
CN108200481B (en) RTP-PS stream processing method, device, equipment and storage medium
KR20100029010A (en) Multiprocessor systems for processing multimedia data and methods thereof
KR20190101579A (en) Reconfigurable Video System for Multi-Channel Ultra High Definition Video Processing
CN116647702A (en) Self-recovery video decoding method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant