CN117750028A - FC protocol-based video decoding and original frame video networking display method - Google Patents
- Publication number: CN117750028A (application CN202311855924.5A)
- Authority: CN (China)
- Prior art keywords: video, original frame, image, format, frame image
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention relates to an FC protocol-based video decoding and original frame video networking display method, belongs to the multimedia field, and solves the problem that H.264/H.265 compressed video in an FC system must be decoded locally by each display unit, which leads to high power consumption and high cost. A compressed video stream is acquired through a network switch based on a service call command; the video format of the compressed video stream is adaptively judged, the stream is hard-decoded, and it is converted into the corresponding original frame video; the original frame video is processed to obtain original frame images, the original frame images undergo image format conversion and are stored in an image sending buffer; images to be sent are cyclically read from the image sending buffer into the FC-AV protocol data frame payload area and sent by an FC-AV sending thread, and the FC display unit receives the FC-AV protocol data frames and displays them in multiple modes. The method realizes hard decoding of the compressed video stream, and the decoded video is sent by unicast or multicast over the FC-AV transmission protocol to the FC display units for display, reducing power consumption and saving cost.
Description
Technical Field
The invention belongs to the field of multimedia, and particularly relates to a video decoding and original frame video networking display method based on an FC protocol.
Background
Fibre Channel (FC) is a computer communication protocol designed to meet high-performance data transmission requirements. It provides a standardized system architecture, satisfies the needs of high-speed, high-volume, reliable and efficient information communication and processing, and is widely adopted in new-generation avionics systems; FC technology is also gradually being applied in the field of vehicle-mounted electronics. Video encoding and decoding greatly reduce the storage capacity and bandwidth occupied by video by eliminating redundant information in the video data, so that video can be transmitted over the network at lower bandwidth cost. In the conventional Fibre Channel video decoding method, video decoding is performed at the display terminals through a dedicated video decoding chip or graphics card, so each display terminal must have video decoding capability. Under the current trend toward domestically produced, autonomous and controllable hardware, this approach mainly has the following drawbacks:
(1) The display terminal uses a dedicated FPGA (Field Programmable Gate Array), video decoding chip or graphics card for video decoding, which results in high power consumption;
(2) Each receiving display terminal must be equipped with a video decoding unit for decoding and display, which results in high cost.
Disclosure of Invention
In view of the above analysis, the invention provides an FC protocol-based video decoding and original frame video networking display method, which is used to solve the technical problems of the traditional video decoding method: high requirements on decoding capability, high power consumption of the display terminal, and high cost.
In order to achieve the above object, the present invention provides a method for decoding video and displaying original frame video based on FC protocol, comprising the steps of:
based on the service call command, obtaining a compressed video stream through a network switch;
performing self-adaptive judgment on the video format of the compressed video stream, then performing hard decoding, and converting the video format into a corresponding original frame video; performing video processing on the original frame video to obtain an original frame image, performing image format conversion on the original frame image, and storing the original frame image into an image sending buffer area;
and circularly reading the image to be transmitted from the image transmission buffer area to an FC-AV protocol data frame load area, transmitting the image by using an FC-AV transmission thread, and receiving the FC-AV protocol data frame by using an FC display unit to display in various modes.
Further, the FC display unit sends a video decoding request to the main control unit, the main control unit generates the service call command, and obtains an RTP network address from the service call command;
establishing communication connection with a source address of a compressed video stream through a network switch based on the RTP network address, and acquiring RTP data packets based on the communication connection;
and analyzing the RTP data packet, extracting the RTP load in the RTP data packet, and obtaining a compressed video stream.
Further, performing adaptive judgment on the video format of the compressed video stream includes:
a bitwise AND is performed between the first byte of the RTP payload and 0x1F, and if the result is 28, the video format is judged to be H.264;
otherwise, a bitwise AND is performed between the first byte of the RTP payload and 0x7E and the result is shifted right by one bit; if the right-shifted result equals 49, the video format is judged to be H.265;
comparing the judged video format with a currently preset decoder format, and resetting the decoder format if the judged video format is different from the currently preset decoder format.
Further, the compressed video stream is hard decoded in the following manner:
hard decoding is carried out on the compressed video stream by adopting a hardware decoder, so as to obtain an original frame video corresponding to the compressed video stream;
and temporarily storing each original frame image corresponding to the original frame video into a buffer physical memory of the hardware decoder.
Further, performing video processing on the original frame video to obtain an original frame image after video stream decoding includes:
acquiring a decoded original frame image from the buffer physical memory of the hardware decoder in a blocking mode;
storing the acquired original frame image into an operating system virtual memory;
and converting the original frame image into an image format, and storing the original frame image with the converted format into an image sending buffer area by adopting a parallel acceleration method in a ping-pong mode.
Further, the YUV422SP format of the original frame image is converted into the YUV422 packed format supported by the FC-AV protocol, wherein Y, U and V represent the luminance, blue-chrominance and red-chrominance components in the YUV space of the original frame image, respectively.
Further, the parallel acceleration method stores the original frame image with the converted format into the image sending buffer area in a ping-pong manner, which comprises the following steps:
calculating the size of virtual memory of an operating system occupied by the original frame image according to the span and the height of the original frame image after format conversion is completed;
taking the initial address of the virtual memory occupied by the original frame image as a Y component initial address, and taking the initial address of the memory occupied by the original frame image plus half of the size of the virtual memory as an offset to serve as a UV component initial address;
and traversing the Y component and the UV component by taking 128bit data as step length from the Y component starting address and the UV component starting address, respectively obtaining a first 128bit vector and a second 128bit vector from a Y component space and a UV component space in a vector reading mode, and performing ping-pong storage on the first 128bit vector and the second 128bit vector by adopting a 128bit cross storage instruction to the writable image transmission buffer zone until the Y component space and the UV component space are completely traversed.
Further, circularly reading the original frame image to be transmitted from the image transmission buffer area to an FC-AV protocol data frame load area, and transmitting by an FC-AV transmission thread comprises:
circularly reading original frame images to be transmitted from the image transmission buffer zone, and dividing the original frame images into segments according to 2048 bytes after each original frame image is read, wherein the last data segment of each original frame image is less than 2048 bytes;
and writing all the data segments of the original frame image into the FC-AV frame payload area to obtain the FC-AV protocol data frame of the original frame image, and starting the FC-AV sending thread to send the protocol data frame.
Further, the FC display unit receiving the FC-AV protocol data frame includes:
and the FC display unit receives the FC-AV protocol data frame, and alternately puts the received FC-AV protocol data frame fragment data into a memory buffer area 1 and a memory buffer area 2 in the image layer by utilizing ping-pong operation until the last fragment data is received.
Further, the FC display unit performs display in multiple modes, including point-to-point, one-to-many, and many-to-one display modes;
in point-to-point display, a single FC video decoding module displays through a single FC display unit in unicast mode;
in one-to-many display, a single FC video decoding module displays through a plurality of FC display units in multicast mode;
and in many-to-one display, a plurality of FC video decoding modules display through a single FC display unit in multicast mode.
Compared with the prior art, the invention has at least one of the following beneficial effects:
1. video decoding is performed by an embedded video-decoding SoC hardware decoding device and protocol conversion is performed by an FC controller, which reduces power consumption; through the Fibre Channel switch, a single compressed video stream source can send FC-AV protocol data frame images to multiple FC display units, lowering the decoding-capability requirement on the Fibre Channel display units;
2. the traditional decoding scheme requires each FC display unit to have its own video decoding unit, which places high demands on the display unit's decoding capability; by using an embedded SoC hardware decoding device for video decoding and sending the decoded video to the FC display units by unicast or multicast over the FC-AV transmission protocol, the FC display units save component cost, reduce power consumption, and improve system flexibility.
In the invention, the technical schemes can be mutually combined to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a flow chart of a method for FC protocol-based video decoding and original frame video networking display;
FIG. 2 is a schematic diagram of a method for FC protocol-based video decoding and original frame video networking display;
FIG. 3 is a schematic diagram illustrating adaptive judgment of video formats by a video streaming media protocol processing unit;
FIG. 4 is a schematic diagram of a parallel acceleration method in ping-pong manner to put a format-converted image into an image transmission buffer;
FIG. 5 is a schematic diagram of a fiber channel display unit displaying an original frame video image;
fig. 6 is a schematic diagram illustrating verification of an FC protocol-based video decoding and original frame video networking display method.
Detailed Description
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and together with the description serve to explain the principles of the invention, and are not intended to limit the scope of the invention.
As shown in fig. 1 and 2, the present invention provides a video decoding and original frame video networking display method based on FC protocol, comprising the following steps:
step S1, based on a service call command, obtaining a compressed video stream through a network switch;
s2, performing self-adaptive judgment on the video format of the compressed video stream, performing hard decoding, and converting the video format into a corresponding original frame video; performing video processing on the original frame video to obtain an original frame image, performing image format conversion on the original frame image, and storing the original frame image into an image sending buffer area;
and S3, circularly reading the image to be transmitted from the image transmission buffer area to the FC-AV protocol data frame load area, transmitting the image to be transmitted by using an FC-AV transmission thread, and receiving the FC-AV protocol data frame by using an FC display unit to display in various modes.
Step S1 is described in detail below.
The FC display unit sends a video decoding request to the main control unit, the main control unit generates the service call command, and an RTP (Real-time Transport Protocol) network address, used by the transport layer for streaming media data transmission, is obtained from the service call command;
establishing communication connection with a source address of a compressed video stream through a network switch based on the RTP network address, and acquiring RTP data packets based on the communication connection;
and analyzing the RTP data packet, extracting the RTP load in the RTP data packet, and obtaining a compressed video stream.
Wherein the RTP network address includes a source address of the compressed video stream, including an IP address and a port number. And establishing communication connection with the source address of the compressed video stream through the network switch by using the acquired IP address and the port number.
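For concreteness, the RTP de-encapsulation step can be sketched in C roughly as follows. This is a minimal sketch based on the fixed RTP header layout of RFC 3550; the function name and the omission of padding handling are illustrative assumptions, not part of the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal RTP de-encapsulation sketch: the fixed RTP header is 12 bytes,
 * followed by optional CSRC entries and an optional header extension.
 * Returns a pointer to the RTP payload inside pkt, or NULL on error.
 * Padding (P bit) is ignored for brevity. */
static const uint8_t *rtp_payload(const uint8_t *pkt, size_t len,
                                  size_t *payload_len)
{
    if (len < 12 || (pkt[0] >> 6) != 2)       /* RTP version must be 2  */
        return NULL;

    size_t cc  = pkt[0] & 0x0F;               /* CSRC count             */
    int    ext = (pkt[0] >> 4) & 0x01;        /* header-extension bit   */
    size_t off = 12 + 4 * cc;                 /* fixed header + CSRCs   */

    if (ext) {                                /* skip the extension     */
        if (len < off + 4)
            return NULL;
        size_t ext_words = ((size_t)pkt[off + 2] << 8) | pkt[off + 3];
        off += 4 + 4 * ext_words;
    }
    if (off >= len)
        return NULL;

    *payload_len = len - off;
    return pkt + off;
}
```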
The FC display unit sends a video decoding request to the main control unit;
the main control unit firstly inquires the working state of each FC video decoding module;
it will be appreciated that the video decoding module operating state includes busy and idle.
When the working state of the FC video decoding module is idle, the main control unit sends a service calling instruction to the FC video decoding module.
The service invocation command includes an FC video decode module ID, a target FC display unit ID, video image resolution, video ID, RTP network address.
The working state of the video decoding thread is used for setting the working state of the current FC video decoding module, and the working state comprises busy and idle.
An FC video decoding module ID for designating one or more FC video decoding modules for video decoding, each FC video decoding module having a unique ID;
a target FC display unit ID for designating one or more FC display units that display, each FC display unit having a unique ID;
the video image resolution is used for designating the resolution of the original frame video sent by the FC video decoding module and received by the FC display unit, and comprises width information and height information.
And the video ID is used for designating one video object corresponding to one or more FC-AV protocol data frames received by the FC video decoding module and the FC display unit.
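The fields listed above can be pictured as a command structure along the following lines; the field widths, ordering and names are illustrative assumptions, since the patent only enumerates the fields.

```c
#include <stdint.h>

/* Illustrative layout of the service call command sent by the main
 * control unit to an FC video decoding module. */
typedef struct {
    uint32_t fc_decoder_id;    /* FC video decoding module ID          */
    uint32_t fc_display_id;    /* target FC display unit ID            */
    uint16_t video_width;      /* video image resolution: width        */
    uint16_t video_height;     /* video image resolution: height       */
    uint32_t video_id;         /* video object ID                      */
    char     rtp_addr[64];     /* RTP network address (IP and port)    */
} service_call_cmd_t;
```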
RTP is a network protocol for transmitting real-time data over the internet. In video transmission, RTP is used to carry compressed video streams. Compressed video streaming is a process of reducing the amount of data and the transmission bandwidth requirements by encoding and compressing video data.
RTP Payload refers to the actual media data, e.g. audio or video data, carried in RTP packets. For compressed video streams, the RTP payload is the compression encoded video data.
In video transmission, common compression coding standards include H.264 (AVC) and H.265 (HEVC). These coding standards compress video to produce a compressed video stream. The compressed video stream data is then divided into small packets and transmitted using the RTP protocol.
The specific flow is as follows:
(1) Video compression: performing compression coding on an original video by using an H.264 or H.265 compression coding standard to generate a compressed video stream;
(2) RTP encapsulation: the compressed video stream is encapsulated into RTP data packets. The RTP header contains metadata such as a timestamp and a sequence number, and the RTP payload is the compressed video stream data;
(3) Network transmission: the RTP data packets are transmitted to the receiving end through the network. Transmission may be carried out over UDP (User Datagram Protocol) or another protocol;
(4) RTP decapsulation: after receiving the RTP data packet, the receiving end decapsulates the RTP data packet and extracts the RTP load.
RTP is used to transmit various media data in real-time communication, including compressed video streams processed through compression encoding.
Step S2 comprises steps S21-S23, described below.
Step S21, the video format of the compressed video stream is adaptively judged.
As shown in fig. 3, performing adaptive judgment on the video format of the compressed video stream includes:
a bitwise AND is performed between the first byte of the RTP payload and 0x1F, and if the result is 28, the video format is judged to be H.264;
otherwise, a bitwise AND is performed between the first byte of the RTP payload and 0x7E and the result is shifted right by one bit; if the right-shifted result equals 49, the video format is judged to be H.265;
comparing the judged video format with a currently preset decoder format, and resetting the decoder format if the judged video format is different from the currently preset decoder format.
This adaptive judgment dynamically determines the coding format of the video from information in the RTP payload and adjusts the decoder settings when necessary to adapt to different video formats.
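A minimal sketch of this judgment in C is shown below. It assumes the first payload byte is the start of an H.264/H.265 NAL unit header as carried over RTP; type 28 corresponds to the H.264 FU-A fragmentation unit and type 49 to the H.265 fragmentation unit.

```c
#include <stdint.h>

enum video_fmt { FMT_UNKNOWN = 0, FMT_H264, FMT_H265 };

/* Adaptive format judgment on the first byte of the RTP payload. */
static enum video_fmt detect_format(const uint8_t *rtp_payload)
{
    uint8_t b = rtp_payload[0];

    if ((b & 0x1F) == 28)           /* H.264 NAL unit type field (5 bits) */
        return FMT_H264;
    if (((b & 0x7E) >> 1) == 49)    /* H.265 NAL unit type field (6 bits) */
        return FMT_H265;
    return FMT_UNKNOWN;
}
```

If the detected format differs from the decoder's current configuration, the decoder format is reset before hard decoding continues, as described above.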
Step S22, hard decoding is performed on the compressed video stream.
The compressed video stream is hard decoded as follows:
hard decoding is carried out on the compressed video stream by adopting a hardware decoder, so as to obtain an original frame video corresponding to the compressed video stream;
and temporarily storing each original frame image corresponding to the original frame video into a buffer physical memory of the hardware decoder.
A hardware decoder is a hardware module dedicated to accelerating the video decoding process; it performs decoding operations more efficiently than software decoding.
Illustratively, an embedded SoC is used, for example the HiSilicon Hi3559AV100.
The original frame video after video decoding is temporarily stored in a buffer physical memory of a hardware decoder so that a subsequent video processing unit can perform subsequent image processing.
Step S23, video processing is carried out on the original frame video.
The video processing unit completes the post-decoding image processing, including image acquisition and image format conversion, and stores the processed image into the image sending buffer. This involves a blocking mode, virtual memory operations, and a parallel acceleration method.
Step S23 comprises steps S231-S233, described below.
Step S231, performing video processing on the original frame video to obtain an original frame image after video stream decoding.
The step of performing video processing on the original frame video to obtain an original frame image after video stream decoding comprises the following steps:
acquiring a decoded original frame image from the buffer physical memory of the hardware decoder in a blocking mode;
storing the acquired original frame image into an operating system virtual memory;
and converting the original frame image into an image format, and storing the original frame image with the converted format into an image sending buffer area by adopting a parallel acceleration method in a ping-pong mode.
In blocking mode, the thread waits for data; if no data has been stored yet, the next step is not executed.
The decoded image is obtained by adopting a blocking mode, and the decoded image is mainly obtained from a buffer physical memory of a hardware decoder;
and the acquired decoded original frame image is mapped and copied into the operating system virtual memory, so that subsequent processing stages can operate on the image more conveniently.
Step S232, converting the image format of the original frame image.
The YUV422SP format of the original frame image is converted into the YUV422 packed format supported by the FC-AV protocol, wherein Y, U and V represent the luminance, blue-chrominance and red-chrominance components in the YUV space of the original frame image, respectively.
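As a plain scalar reference for this conversion (the vectorized acceleration is described in step S233), the semi-planar frame can be repacked roughly as follows; the plane layout and the packed byte order are assumptions based on the description above.

```c
#include <stdint.h>
#include <stddef.h>

/* YUV422SP (Y plane + interleaved UV plane) -> packed YUV422 (YUYV).
 * For a w*h frame, both planes are w*h bytes, so packing is a simple
 * byte-wise interleave of the two planes. */
static void yuv422sp_to_packed(const uint8_t *y,   /* Y plane,  w*h bytes */
                               const uint8_t *uv,  /* UV plane, w*h bytes */
                               uint8_t *dst,       /* packed out, 2*w*h   */
                               size_t w, size_t h)
{
    size_t n = w * h;
    for (size_t i = 0; i < n; i++) {
        dst[2 * i]     = y[i];     /* luminance byte                */
        dst[2 * i + 1] = uv[i];    /* alternating U / V chroma byte */
    }
}
```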
And step S233, storing the original frame image with the format converted into an image transmission buffer area in a ping-pong mode by adopting a parallel acceleration method.
As shown in fig. 4, the storing the original frame image with the format converted by using the parallel acceleration method in the image sending buffer area in a ping-pong manner includes:
calculating the size of virtual memory of an operating system occupied by the original frame image according to the span and the height of the original frame image after format conversion is completed;
taking the initial address of the virtual memory occupied by the original frame image as a Y component initial address, and taking the initial address of the memory occupied by the original frame image plus half of the size of the virtual memory as a UV component initial address;
and traversing the Y component and the UV component by taking 128bit data as step length from the Y component starting address and the UV component starting address, respectively obtaining a first 128bit vector and a second 128bit vector from a Y component space and a UV component space in a vector reading mode, and performing ping-pong storage on the first 128bit vector and the second 128bit vector by adopting a 128bit cross storage instruction to the writable image transmission buffer zone until the Y component space and the UV component space are completely traversed.
By adopting the parallel acceleration method, the original frame image with the converted format is stored in the image sending buffer area in a ping-pong mode, the data of the Y component and the UV component can be processed in parallel, and the data processing efficiency is further improved in the buffer area by adopting the ping-pong mode.
As shown in fig. 4, the specific description is as follows:
(1) And calculating the virtual memory size of the operating system occupied by the image according to the span and the height of the original frame image after the format conversion is completed, taking the initial address of the memory occupied by the original frame image as a Y-component initial address, and taking the initial address of the memory occupied by the original frame image plus half of the virtual memory size as a UV-component initial address. The occupied range of the Y component space is from the starting address of the memory occupied by the original frame image to the starting address of the UV component, and the occupied range of the UV component space is from the starting address of the UV component to the tail of the space occupied by the original frame image.
(2) The Y component and the UV component are traversed from the Y component start address and the UV component start address with a step of 128 bits of data. A first 128-bit vector and a second 128-bit vector are obtained from the Y component space and the UV component space by vector reads; according to the writable flag bit of the image sending buffer, they are ping-pong stored into the writable image sending buffer using a 128-bit cross (interleaving) store instruction, and the offset address for the next iteration is calculated, until the Y component space and the UV component space have been fully traversed. The image is stored into the image sending buffer after acceleration processing using the following formulas:
VT1 = 128-bit vector read(m_start + 16 × i)
VT2 = 128-bit vector read(m_start + m_total/2 + 16 × i)
128-bit cross store(m_target + 32 × i, VT1, VT2)
where VT1 is the first 128-bit vector of Y luminance data in 8-bit units, VT2 is the second 128-bit vector of UV chrominance data in 8-bit units, m_start is the start position of the image memory before image format conversion, m_total is the memory capacity occupied by the image, m_target is the start address of the writable image sending buffer, and i is the current loop index, running from 0 to m_total/32;
The 128-bit vector read operation reads 16 consecutive 8-bit units (128 bits in total), and the first and second 128-bit vectors are cross-stored into the image sending buffer in 8-bit units.
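On the embedded SoC, the 128-bit vector read and 128-bit cross store map naturally onto ARM NEON intrinsics (vld1q_u8 and the interleaving store vst2q_u8). The sketch below follows the formulas above; the ping-pong buffer selection via a simple index and the assumption that m_total is a multiple of 32 bytes are illustrative, not mandated by the text.

```c
#include <arm_neon.h>
#include <stdint.h>
#include <stddef.h>

/* Pack one format-converted frame into the writable send buffer:
 * VT1 is read from the Y half, VT2 from the UV half, and a 128-bit
 * interleaving store writes them cross-wise (32 bytes per iteration). */
static void pack_frame_neon(const uint8_t *m_start,  /* converted frame       */
                            size_t m_total,          /* bytes, multiple of 32 */
                            uint8_t *send_buf[2],    /* ping-pong buffers     */
                            int writable)            /* writable index 0 or 1 */
{
    const uint8_t *y_ptr  = m_start;                 /* Y component space     */
    const uint8_t *uv_ptr = m_start + m_total / 2;   /* UV component space    */
    uint8_t *m_target = send_buf[writable];

    for (size_t i = 0; i < m_total / 32; i++) {
        uint8x16x2_t vt;
        vt.val[0] = vld1q_u8(y_ptr  + 16 * i);       /* VT1: 16 Y bytes       */
        vt.val[1] = vld1q_u8(uv_ptr + 16 * i);       /* VT2: 16 UV bytes      */
        vst2q_u8(m_target + 32 * i, vt);             /* byte-interleaved store */
    }
}
```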
The video processing unit acquires the decoded image, performs format conversion and stores the image, and the image is stored in the image sending buffer area by using a blocking mode, virtual memory operation and a parallel acceleration method, so that the efficiency of image processing is improved.
Step S3 is divided into steps S31-S33, described below.
The FC display unit, which supports networked display, is mainly used for receiving and displaying FC-AV protocol video images.
And step S31, the original frame image is sent to an FC-AV protocol data frame load area, and is sent by an FC-AV sending thread.
Circularly reading the original frame image to be transmitted from the image transmission buffer area to an FC-AV protocol data frame load area, and transmitting by an FC-AV transmission thread comprises the following steps:
circularly reading original frame images to be transmitted from the image transmission buffer zone, and dividing the original frame images into segments according to 2048 bytes after each original frame image is read, wherein the last data segment of each original frame image is less than 2048 bytes;
and writing all the data segments of the original frame image into the FC-AV frame payload area to obtain the FC-AV protocol data frame of the original frame image, and starting the FC-AV sending thread to send the protocol data frame.
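The slicing loop can be sketched as follows; fc_av_write_segment stands in for the actual FC-AV frame payload write, which the text does not name.

```c
#include <stdint.h>
#include <stddef.h>

#define FC_AV_SEG_SIZE 2048u

/* Hypothetical sink: writes one data segment into the FC-AV frame
 * payload area (placeholder for the real controller/driver call). */
void fc_av_write_segment(const uint8_t *seg, size_t seg_len, int is_last);

/* Slice one original frame image into 2048-byte segments; the final
 * segment carries whatever bytes remain. */
static void write_frame_segments(const uint8_t *img, size_t img_len)
{
    size_t off = 0;
    while (off < img_len) {
        size_t seg_len = img_len - off;
        if (seg_len > FC_AV_SEG_SIZE)
            seg_len = FC_AV_SEG_SIZE;
        fc_av_write_segment(img + off, seg_len,
                            off + seg_len == img_len /* last segment */);
        off += seg_len;
    }
    /* after all segments are written, the FC-AV sending thread is started */
}
```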
Based on the service call command, the FC display unit or units to which transmission is required are determined, and the transmission is then performed.
In step S32, the FC display unit receives the FC-AV protocol data frame.
As shown in fig. 5, the FC display unit receiving the FC-AV protocol data frame includes:
and the FC display unit receives the FC-AV protocol data frame, and alternately puts the received FC-AV protocol data frame fragment data into a memory buffer area 1 and a memory buffer area 2 in the image layer by utilizing ping-pong operation until the last fragment data is received.
The network FC display unit receives FC-AV protocol data frames sent by the optical fiber channel and alternately puts the received FC-AV protocol data frames into two buffer areas of a memory buffer 1 and a memory buffer 2 in the layer by utilizing ping-pong operation;
based on the FC video decoding module ID, target FC display unit ID, video image resolution and video ID parameters in the service call command sent by the main control unit, it is determined to which one or more FC display units the FC-AV protocol data frames from the Fibre Channel are sent.
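Read literally, the receiving side alternates between the two layer buffers per segment; a minimal sketch of that ping-pong reception is given below. The receive call, the buffer handling, and the per-segment alternation granularity are assumptions for illustration only.

```c
#include <stdint.h>
#include <stddef.h>

#define FC_AV_SEG_SIZE 2048u

/* Hypothetical blocking receive: copies the next FC-AV segment into dst,
 * returns its length, and sets *last when the frame's final segment arrives. */
size_t fc_av_recv_segment(uint8_t *dst, size_t max_len, int *last);

/* Alternate received segments between memory buffer 1 and memory buffer 2
 * in the image layer until the last segment of the frame has been received. */
static void receive_frame_pingpong(uint8_t *buf1, uint8_t *buf2)
{
    uint8_t *bufs[2] = { buf1, buf2 };
    size_t   off[2]  = { 0, 0 };
    int      cur = 0, last = 0;

    while (!last) {
        size_t n = fc_av_recv_segment(bufs[cur] + off[cur],
                                      FC_AV_SEG_SIZE, &last);
        off[cur] += n;         /* append within the current buffer */
        cur ^= 1;              /* ping-pong to the other buffer    */
    }
}
```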
In step S33, the FC display unit performs various modes of display.
The FC display unit performs display in multiple modes, including: point-to-point, one-to-many, and many-to-one display modes;
in point-to-point display, a single FC video decoding module displays through a single FC display unit in unicast mode;
in one-to-many display, a single FC video decoding module displays through a plurality of FC display units in multicast mode;
and in many-to-one display, a plurality of FC video decoding modules display through a single FC display unit in multicast mode.
As shown in fig. 6, for verification of the FC protocol-based video decoding and original frame video networking display method, the video decoding and video sending system of this embodiment is based on the FC transmission protocol. In the FC video decoding module hardware decoding device, the embedded SoC is a HiSilicon Hi3559AV100, which receives the H.264/H.265 compressed video through Ethernet; the FC controller is an FC4000A FC multi-protocol controller chip, and the embedded SoC and the FC controller are connected through a PCIe interface; the FC video decoding module comprises the embedded SoC and the FC controller.
The Hi3559AV100 is connected to the Flash storage through a Flash interface and to the DDR memory through a memory interface; the FC4000A is connected to the Fibre Channel switch through a Fibre Channel link, and the decoded images are output to the FC display units through the Fibre Channel.
The method comprises the following steps:
(1) Video decoding verification based on the FC-AV protocol: the Hi3559AV100 is connected to the FC4000A chip through a PCIe bus, and the Hi3559AV100 receives the H.264 or H.265 compressed video stream through an Ethernet interface of the network switch;
(2) Initialization of the FC controller: the FC4000A controller driver is loaded on the operating system of the Hi3559AV100, and, according to the Fibre Channel FC protocol requirements, the communication port mode, FC port rate, FC port login registration, FC port name, node name, FC port fabric login, N_Port login, FC-AV transmission protocol and interrupts are set for the verification system device;
(3) Setting FC-AV video transmission parameters: a contiguous memory space of video resolution width × resolution height × 4 bytes is allocated to store the decoded original images; the memory address is obtained through memory mapping, and the target FC display terminal ID, video image resolution, color space and so on are set;
(4) Video decoding SoC hardware decoder settings: the Hi3559AV100 decoding library is loaded, the decoding resolution of the video decoding SoC is set according to the width and height of the decoded video, memory space is allocated for storing the hardware-decoded video, and the hardware decoder is started;
(5) Reception of the H.264 or H.265 compressed video stream: when the video stream network protocol is RTP, the encoding format of the video stream can be obtained from the pre-agreed Payload Type in the RTP data packet, and the decoder of the video decoding SoC is set dynamically according to the encoding format;
(6) Obtaining the decoded original frame images and sending the decoded images through the FC-AV protocol: the images are cyclically read from the FC-AV image sending buffer;
(7) Decoded image output: the decoded images are output through the Fibre Channel switch to multiple FC display units for display output in multiple modes.
Compared with the prior art, the invention has at least one of the following beneficial effects:
1. video decoding is performed by an embedded video-decoding SoC hardware decoding device and protocol conversion is performed by an FC controller, which reduces power consumption; through the Fibre Channel switch, a single compressed video stream source can send FC-AV protocol data frame images to multiple FC display units, lowering the decoding-capability requirement on the Fibre Channel display units;
2. the traditional decoding scheme requires each FC display unit to have its own video decoding unit, which places high demands on the display unit's decoding capability; by using an embedded SoC hardware decoding device for video decoding and sending the decoded video to the FC display units by unicast or multicast over the FC-AV transmission protocol, the FC display units save component cost, reduce power consumption, and improve system flexibility.
Those skilled in the art will appreciate that all or part of the flow of the methods of the embodiments described above may be accomplished by a computer program instructing associated hardware, where the program may be stored on a computer readable storage medium. The computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.
Claims (10)
1. A video decoding and original frame video networking display method based on FC protocol is characterized by comprising the following steps:
based on the service call command, obtaining a compressed video stream through a network switch;
performing self-adaptive judgment on the video format of the compressed video stream, then performing hard decoding, and converting the video format into a corresponding original frame video; performing video processing on the original frame video to obtain an original frame image, performing image format conversion on the original frame image, and storing the original frame image into an image sending buffer area;
and circularly reading the image to be transmitted from the image transmission buffer area to an FC-AV protocol data frame load area, transmitting the image by using an FC-AV transmission thread, and receiving the FC-AV protocol data frame by using an FC display unit to display in various modes.
2. The method according to claim 1, wherein the FC display unit sends a video decoding request to the main control unit, the main control unit generates the service call command, and obtains an RTP network address from the service call command;
establishing communication connection with a source address of a compressed video stream through a network switch based on the RTP network address, and acquiring RTP data packets based on the communication connection;
and analyzing the RTP data packet, extracting the RTP load in the RTP data packet, and obtaining a compressed video stream.
3. The method of claim 2, wherein adaptively determining the video format of the compressed video stream comprises:
a bitwise AND is performed between the first byte of the RTP payload and 0x1F, and if the result is 28, the video format is judged to be H.264;
otherwise, a bitwise AND is performed between the first byte of the RTP payload and 0x7E and the result is shifted right by one bit; if the right-shifted result equals 49, the video format is judged to be H.265;
comparing the judged video format with a currently preset decoder format, and resetting the decoder format if the judged video format is different from the currently preset decoder format.
4. A method according to claim 3, characterized in that the compressed video stream is hard decoded in the following way:
hard decoding is carried out on the compressed video stream by adopting a hardware decoder, so as to obtain an original frame video corresponding to the compressed video stream;
and temporarily storing each original frame image corresponding to the original frame video into a buffer physical memory of the hardware decoder.
5. The method of claim 4, wherein performing video processing on the original frame video to obtain an original frame image after video stream decoding comprises:
acquiring a decoded original frame image from the buffer physical memory of the hardware decoder in a blocking mode;
storing the acquired original frame image into an operating system virtual memory;
and converting the original frame image into an image format, and storing the original frame image with the converted format into an image sending buffer area by adopting a parallel acceleration method in a ping-pong mode.
6. The method of claim 5, wherein the YUV422SP format of the original frame image is converted to the YUV422 packed format supported by the FC-AV protocol, wherein Y, U and V represent the luminance, blue-chrominance and red-chrominance components in the YUV space of the original frame image, respectively.
7. The method of claim 6, wherein the parallel acceleration method for storing the converted original frame image in the image transmission buffer in a ping-pong manner comprises:
calculating the size of virtual memory of an operating system occupied by the original frame image according to the span and the height of the original frame image after format conversion is completed;
taking the initial address of the virtual memory occupied by the original frame image as a Y component initial address, and taking the initial address of the memory occupied by the original frame image plus half of the size of the virtual memory as an offset to serve as a UV component initial address;
and traversing the Y component and the UV component by taking 128bit data as step length from the Y component starting address and the UV component starting address, respectively obtaining a first 128bit vector and a second 128bit vector from a Y component space and a UV component space in a vector reading mode, and performing ping-pong storage on the first 128bit vector and the second 128bit vector by adopting a 128bit cross storage instruction to the writable image transmission buffer zone until the Y component space and the UV component space are completely traversed.
8. The method of claim 7, wherein cyclically reading the original frame image to be transmitted from the image transmission buffer into an FC-AV protocol data frame payload area for transmission by an FC-AV transmission thread comprises:
circularly reading original frame images to be transmitted from the image transmission buffer zone, and dividing the original frame images into segments according to 2048 bytes after each original frame image is read, wherein the last data segment of each original frame image is less than 2048 bytes;
and writing all the data segments of the original frame image into the FC-AV frame payload area to obtain the FC-AV protocol data frame of the original frame image, and starting the FC-AV sending thread to send the protocol data frame.
9. The method of claim 8, wherein the FC display unit receiving the FC-AV protocol data frame comprises:
and the FC display unit receives the FC-AV protocol data frame, and alternately puts the received FC-AV protocol data frame fragment data into a memory buffer area 1 and a memory buffer area 2 in the image layer by utilizing ping-pong operation until the last fragment data is received.
10. The method of claim 9, wherein the FC display unit performs display in multiple modes, comprising: point-to-point, one-to-many, and many-to-one display modes;
in point-to-point display, a single FC video decoding module displays through a single FC display unit in unicast mode;
in one-to-many display, a single FC video decoding module displays through a plurality of FC display units in multicast mode;
and in many-to-one display, a plurality of FC video decoding modules display through a single FC display unit in multicast mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311855924.5A CN117750028A (en) | 2023-12-29 | 2023-12-29 | FC protocol-based video decoding and original frame video networking display method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311855924.5A CN117750028A (en) | 2023-12-29 | 2023-12-29 | FC protocol-based video decoding and original frame video networking display method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117750028A true CN117750028A (en) | 2024-03-22 |
Family
ID=90256544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311855924.5A Pending CN117750028A (en) | 2023-12-29 | 2023-12-29 | FC protocol-based video decoding and original frame video networking display method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117750028A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||