CN113840101A - Video image processing method and device based on FPGA - Google Patents

Video image processing method and device based on FPGA

Info

Publication number
CN113840101A
Authority
CN
China
Prior art keywords
video
fpga
video data
image processing
instruction
Prior art date
Legal status
Pending
Application number
CN202010584950.9A
Other languages
Chinese (zh)
Inventor
何来
翟佳
迟宇
杨小龙
刘记文
Current Assignee
Chongqing Shansong Information Technology Co ltd
Original Assignee
Chongqing Shansong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Shansong Information Technology Co ltd filed Critical Chongqing Shansong Information Technology Co ltd
Priority to CN202010584950.9A
Publication of CN113840101A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 5/268: Signal distribution or switching
    • H04N 5/76: Television signal recording
    • H04N 5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/124: Quantisation
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/625: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an FPGA (field programmable gate array) based video image processing method and device. The method comprises the following steps: a video input module converts and decodes the source video signals output by various video acquisition devices to obtain first video data; the FPGA superimposes the corresponding characters onto the first video data according to a superimposition instruction to obtain second video data; the FPGA compresses the second video data according to a storage instruction to obtain compressed video data; and the FPGA stores the compressed video data into a corresponding folder of a video memory. Because the core functions of video image processing are all completed inside the FPGA rather than in separate, independent functional modules, the number of components is reduced, which in turn reduces the size and cost of the video image processing device; and because the IP cores integrated in the FPGA can compress and store the input video source in real time according to the instructions, the requirements of embedded video image processing on compatibility, stability and processing performance are met.

Description

Video image processing method and device based on FPGA
Technical Field
The invention relates to the field of embedded video processing, in particular to a video image processing method and device based on an FPGA (field programmable gate array).
Background
Video image processing mainly covers video signal acquisition and conversion, character superimposition, video compression, data storage and video output; its core operations are superimposition, compression and storage, and the reliability and processing capability of each functional module directly determine the overall image processing performance.
In the prior art, character superimposition, compression and data storage of video images are implemented in separate, independent modules. This makes the video image processing device component-heavy, bulky and expensive, and the poor compatibility and stability between the independent modules mean that the performance requirements of embedded video image processing cannot be met.
Disclosure of Invention
To address these defects in the prior art, the FPGA-based video image processing method and device of the invention eliminate the need to implement character superimposition, compression and data storage in separate modules, thereby reducing the component count, size and cost of the video image processing device, and meeting the requirements of embedded video image processing on compatibility, stability and processing performance.
In a first aspect, the present invention provides an FPGA-based video image processing method, the method comprising: a video input module converts and decodes the source video signals output by various video acquisition devices to obtain first video data; the FPGA superimposes the corresponding characters onto the first video data according to a superimposition instruction to obtain second video data; the FPGA compresses the second video data according to a storage instruction to obtain compressed video data; and the FPGA stores the compressed video data into a corresponding folder of a video memory.
Optionally, the superimposing, by the FPGA, the corresponding character on the first video data according to the superimposing instruction to obtain second video data, including: the FPGA acquires a superposition instruction, wherein the superposition instruction comprises character information and superposition position information; the FPGA calls a corresponding character dot-matrix chart from a character memory according to the character information; and the FPGA replaces the color value of each pixel in the character bitmap with the color value of the corresponding pixel in each frame of image of the first video data according to the superposition position information to obtain the second video data.
Optionally, the compressing, by the FPGA, the second video data according to a storage instruction to obtain compressed video data includes: when the storage instruction is obtained, the FPGA generates a plurality of original image blocks for each frame of image of the second video data according to a preset rule; the FPGA performs a discrete cosine transform on the plurality of original image blocks to obtain transform coefficients of the plurality of original image blocks; and the FPGA quantizes the transform coefficients of the plurality of original image blocks according to a preset quantization rule to obtain the compressed video data.
Optionally, when the storage instruction includes a start storage instruction and an end storage instruction, the FPGA stores the compressed video data into a corresponding folder of a video memory, including: when the FPGA acquires the storage starting instruction, newly building a folder in the video memory according to a first preset rule; the FPGA stores the compressed video data into the new folder; and when the FPGA acquires the storage ending instruction, stopping storing the compressed video data into the new folder.
Optionally, when the FPGA acquires the storage start instruction, before a new folder is created in the video memory according to a first preset rule, the method further includes: the FPGA acquires the number of folders in the video memory; the FPGA judges whether the number of the folders is larger than a preset number or not; and when the number of the folders is larger than the preset number, deleting one folder by the FPGA according to a second preset rule.
Optionally, the method further comprises: the video output module converts the second video data into RGB format and then outputs DVI video data or VGA video data; or/and the video output module decompresses and converts the compressed data in the video memory into RGB format, and then outputs DVI video data or VGA video data.
In a second aspect, the present invention provides an FPGA-based video image processing apparatus, comprising: the video input module is connected with various video acquisition devices and used for converting and decoding various source video signals output by the various video acquisition devices to obtain first video data; the FPGA is connected with the video input module and is used for performing character superposition, compression and storage on the first video data output by the video input module; and the video output module is connected with the FPGA and used for converting the video signals output by the FPGA into various target video signals and outputting the target video signals to various video display equipment for display.
Optionally, the FPGA is configured to superimpose corresponding characters on the first video data according to a superimposition instruction to obtain second video data; the FPGA is further used for compressing the second video data according to a storage instruction to obtain compressed video data; and the FPGA is also used for storing the compressed video data into a corresponding folder of a video memory.
Optionally, the video input module includes: a PAL video input unit and an SDI video input unit; the PAL video input unit is connected with PAL video acquisition equipment and used for converting the differential signal output by the PAL video acquisition equipment into a single-ended signal, decoding the single-ended signal by PAL decoding and outputting the single-ended signal to the FPGA for processing; the SDI video input unit is connected with SDI video acquisition equipment and used for outputting video signals output by the SDI video acquisition equipment to the FPGA for processing after SDI decoding.
Optionally, the video output module includes: VGA video output unit and DVI video output unit; the VGA video output unit is connected with VGA video display equipment and is used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the VGA video display equipment; the DVI video output unit is connected with DVI video display equipment and used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the DVI video display equipment.
Compared with the prior art, the invention has the following beneficial effects:
the FPGA-based video image processing method performs format conversion and timing decoding of the source video signals output by various video acquisition devices in the video input module, and then feeds the converted and decoded video signals into the FPGA for character superimposition, video compression and data storage. The core functions of video image processing are therefore all completed inside the FPGA rather than in separate, independent functional modules, which reduces the number of components and in turn the size and cost of the video image processing device; and because the IP cores integrated in the FPGA can compress and store the input video source in real time according to the instructions, the requirements of embedded video image processing on compatibility, stability and processing performance are met.
Drawings
Fig. 1 is an application environment diagram of a video image processing method based on an FPGA according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for processing a video image based on an FPGA according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an FPGA-based video image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another FPGA-based video image processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a video data channel according to an embodiment of the present invention;
fig. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is an application environment diagram of the FPGA-based video image processing method according to an embodiment of the present invention. Referring to fig. 1, the video image processing method is applied to a video image processing system that includes a terminal 110 and a server 120 connected through a network. A video input module in the terminal or the server converts and decodes the source video signals output by various video acquisition devices to obtain first video data; the FPGA superimposes the corresponding characters onto the first video data according to a superimposition instruction to obtain second video data; the FPGA compresses the second video data according to a storage instruction to obtain compressed video data; and the FPGA stores the compressed video data into a corresponding folder of a video memory. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
Fig. 2 is a schematic flowchart of a video image processing method based on an FPGA according to an embodiment of the present invention. The embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in fig. 1. Referring to fig. 2, the video image processing method based on the FPGA specifically includes the following steps:
In step S201, the video input module converts and decodes multiple types of source video signals output by multiple types of video capture devices to obtain first video data.
Specifically, the video capture device in this embodiment includes, but is not limited to, a PAL video capture device and an SDI video capture device. PAL video signals output by the PAL video capture device are transmitted differentially, so the video input module first converts the PAL differential signals into single-ended signals, the video decoder chip then decodes them into binary video data in YCbCr 4:2:2 format, and the data are fed into the FPGA for video processing. SDI video output by the SDI video capture device is input into the video input module, decoded into binary video data in YCbCr 4:2:2 format, and likewise fed into the FPGA for video processing. The decoded binary video data constitute the first video data.
In step S202, the FPGA superimposes the corresponding characters onto the first video data according to a superimposition instruction to obtain second video data.
In an embodiment of the present invention, the superimposing, by the FPGA, the corresponding character on the first video data according to the superimposing instruction to obtain the second video data includes: the FPGA acquires a superposition instruction, wherein the superposition instruction comprises character information and superposition position information; the FPGA calls a corresponding character dot-matrix chart from a character memory according to the character information; and the FPGA replaces the color value of each pixel in the character bitmap with the color value of the corresponding pixel in each frame of image of the first video data according to the superposition position information to obtain the second video data.
It should be noted that the superimposition instruction may be an instruction sent by a client and includes the character information to be superimposed and the superimposition position information. The FPGA retrieves the corresponding character dot-matrix diagram according to the character information to be superimposed, and uses the color of each pixel in the character dot-matrix diagram to replace the color of the corresponding pixel in each frame of image of the video data, thereby obtaining the superimposed video data.
A plane coordinate system is established in the character dot-matrix diagram with its positioning pixel as the origin, so that every pixel in the dot-matrix diagram has a coordinate in this system. By aligning the positioning pixel of the dot-matrix diagram with the superimposition position given in the superimposition instruction, the coordinates of each pixel in the dot-matrix diagram are converted into coordinates in each frame of image of the video data, which identifies every pixel that needs to be replaced in each frame. Further, the color written into each replaced pixel may be the color of the corresponding pixel in the character dot-matrix diagram, or another color designated according to actual requirements.
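As an illustration only, the pixel-replacement rule described above can be modelled in software. The sketch below uses Python with NumPy rather than FPGA logic; the names overlay_character, glyph and origin are assumptions made for the example, and the default replacement color is an arbitrary near-white YCbCr value, since the description allows either the dot-matrix color or any designated color.

```python
import numpy as np

def overlay_character(frame, glyph, origin, color=(235, 128, 128)):
    """Behavioural sketch of character superimposition on one video frame.

    frame  : H x W x 3 array of YCbCr pixels (one frame of the first video data)
    glyph  : h x w boolean array, True where the character dot-matrix is lit
    origin : (row, col) in the frame of the glyph's positioning pixel, taken
             from the superimposition instruction
    color  : YCbCr value written into every lit glyph pixel
    """
    r0, c0 = origin
    # Clip the glyph so a character placed near the frame edge stays inside the image.
    h = max(0, min(glyph.shape[0], frame.shape[0] - r0))
    w = max(0, min(glyph.shape[1], frame.shape[1] - c0))
    region = frame[r0:r0 + h, c0:c0 + w]
    region[glyph[:h, :w]] = color          # replace only the lit pixels
    return frame
```

In the actual device the same per-pixel replacement is carried out by logic inside the FPGA on the streaming video data.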
In step S203, the FPGA compresses the second video data according to a storage instruction to obtain compressed video data.
Further, when the storage instruction is obtained, the FPGA generates a plurality of original image blocks from each frame of image of the second video data according to a preset rule; the FPGA performs a discrete cosine transform on the plurality of original image blocks to obtain transform coefficients of the plurality of original image blocks; and the FPGA quantizes the transform coefficients of the plurality of original image blocks according to a preset quantization rule to obtain the compressed video data.
It should be noted that the overlay stage of the preceding image processing has already converted the video into YUV components, where the YUV components of the video data represent the luminance signal and the two color-difference signals respectively. For the existing PAL television system, for example, the luminance signal is sampled at 13.5 MHz, while the bandwidth of each chrominance signal is typically half or less of that of the luminance signal, i.e. 6.75 MHz or 3.375 MHz. With 4:2:2 sampling, the Y signal is sampled at 13.5 MHz, the chrominance signals U and V are each sampled at 6.75 MHz, and each sample is quantized with 8 bits, so the bit rate of the digital video is 13.5 × 8 + 6.75 × 8 + 6.75 × 8 = 216 Mbit/s. Such a large amount of data is difficult to store or transmit directly, so compression must be used to reduce the code rate. The image processing module performs image compression by transform coding, which transforms the image signal described in the spatial domain into the frequency domain and then encodes the transform coefficients. Images generally have strong spatial correlation, and transforming them to the frequency domain achieves decorrelation and energy concentration. Common orthogonal transforms include the discrete Fourier transform and the discrete cosine transform; in digital video compression the discrete cosine transform is the most widely used.
The discrete cosine transform, abbreviated DCT, transforms an N × N image block from the spatial domain to the frequency domain. In DCT-based image compression the image therefore first has to be divided into non-overlapping image blocks. Assuming a frame size of 1280 × 720, the frame is first divided, grid-wise, into 160 × 90 non-overlapping blocks of size 8 × 8, and the DCT is then applied to each block. After blocking, each 8 × 8 image block is fed into the DCT coder, which transforms it from the spatial domain into the frequency domain. Table 1 gives an example of an actual 8 × 8 image block, the numbers representing the luminance values of the pixels. As can be seen from Table 1, the luminance values within the block are relatively uniform; in particular, adjacent pixels differ little, which shows that the image signal has strong correlation. Table 2 shows the result of DCT-transforming the block of Table 1: after the DCT, most of the energy is concentrated in the low-frequency coefficients in the upper left corner, while the high-frequency coefficients towards the lower right corner carry little energy.
[Tables 1 and 2, showing the example 8 × 8 luminance block and its DCT coefficients, are reproduced as an image in the original publication.]
The DCT coefficients then need to be quantized. Because the human eye is sensitive to the low-frequency characteristics of an image, such as the overall brightness of objects, but insensitive to high-frequency detail, less high-frequency information, or none at all, needs to be transmitted. During quantization the coefficients of the low-frequency region are therefore quantized finely while the coefficients of the high-frequency region are quantized coarsely, removing the high-frequency information the eye is insensitive to and reducing the amount of information to be transmitted.
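The blocking, DCT and quantization steps above can be summarized with the following behavioral sketch in Python/NumPy. It is not the FPGA implementation; the quantization table q_table stands in for the unspecified "preset quantization rule" (here it simply grows with frequency so that high-frequency coefficients are quantized more coarsely), and the level shift of 128 before the transform is a common convention rather than something stated in the description.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def compress_frame(frame, q_table):
    """Split a luminance frame into 8 x 8 blocks, DCT-transform and quantize each block."""
    d = dct_matrix(8)
    h, w = frame.shape                              # e.g. 720 x 1280 -> 90 x 160 blocks
    out = np.empty((h // 8, w // 8, 8, 8), dtype=np.int32)
    for bi in range(h // 8):
        for bj in range(w // 8):
            block = frame[bi*8:(bi+1)*8, bj*8:(bj+1)*8].astype(np.float64) - 128
            coeff = d @ block @ d.T                 # spatial domain -> frequency domain
            out[bi, bj] = np.round(coeff / q_table).astype(np.int32)
    return out

# Example: a 720p luminance plane and a simple frequency-weighted quantization table.
frame = np.random.randint(0, 256, size=(720, 1280))
q_table = 1 + 2 * (np.arange(8)[:, None] + np.arange(8)[None, :])
coeffs = compress_frame(frame, q_table)             # shape (90, 160, 8, 8)
```

Entropy coding of the quantized coefficients, which a full encoder such as H.264 would perform next, is outside the scope of this sketch.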
In step S204, the FPGA stores the compressed video data into a corresponding folder of a video memory.
In an embodiment of the present invention, when the storage instruction includes a start storage instruction and an end storage instruction, the FPGA stores the compressed video data into a corresponding folder of a video memory, including: when the FPGA acquires the storage starting instruction, newly building a folder in the video memory according to a first preset rule; the FPGA stores the compressed video data into the new folder; and when the FPGA acquires the storage ending instruction, stopping storing the compressed video data into the new folder.
Further, when the FPGA acquires the storage start instruction, before a new folder is created in the video memory according to a first preset rule, the method further includes: the FPGA acquires the number of folders in the video memory; the FPGA judges whether the number of the folders is larger than a preset number or not; and when the number of the folders is larger than the preset number, deleting one folder by the FPGA according to a second preset rule.
It should be noted that the video and data recorded after each power-on initialization are stored in a newly created folder, and each folder contains an RS422 data file, a CAN data file and a video file. The folder naming format is "XX year XX month XX day - recording XX", where the recording count cycles from 1 to 30. The memory holds at most 30 folders; when that limit is reached, the oldest folder is deleted and a new folder is created according to the naming rule to store the data. For example, before the 31st folder is created, the software automatically deletes the first folder and names the 31st folder "XX year XX month XX day - recording 01"; before the 32nd folder is created, the software automatically deletes the second folder and names the 32nd folder "XX year XX month XX day - recording 02", and so on cyclically.
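As a host-side sketch only (the device itself manages the video memory in FPGA/firmware logic), the folder-rotation rule just described could be expressed as follows. The 30-folder limit and the delete-the-oldest behavior come from the description, while the exact name format string and the function name new_recording_folder are assumptions, since the original naming text is only partially legible.

```python
import os
import shutil
from datetime import datetime

MAX_FOLDERS = 30

def new_recording_folder(root, recording_index):
    """Create the folder for the next recording, deleting the oldest folder
    if the video memory already holds MAX_FOLDERS folders."""
    folders = sorted(
        (f for f in os.listdir(root) if os.path.isdir(os.path.join(root, f))),
        key=lambda f: os.path.getmtime(os.path.join(root, f)),
    )
    if len(folders) >= MAX_FOLDERS:
        shutil.rmtree(os.path.join(root, folders[0]))        # drop the oldest folder
    # Naming rule: date plus a recording count that cycles from 1 to 30.
    name = datetime.now().strftime("%Y-%m-%d") + "-recording-%02d" % recording_index
    path = os.path.join(root, name)
    os.makedirs(path)                                        # RS422, CAN and video files go here
    return path
```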
In an embodiment of the present invention, the video output module converts the second video data into RGB format, and outputs DVI video data or VGA video data; or/and the video output module decompresses and converts the compressed data in the video memory into RGB format, and then outputs DVI video data or VGA video data.
Therefore, the FPGA-based video image processing method of the invention performs format conversion and timing decoding of the source video signals output by the various video acquisition devices in the video input module, and then feeds the converted and decoded video signals into the FPGA for character superimposition, video compression and data storage. The core functions of video image processing are thus all completed inside the FPGA rather than in separate, independent functional modules, which reduces the number of components and in turn the size and cost of the video image processing device.
Fig. 3 shows an FPGA-based video image processing apparatus according to an embodiment of the present invention, and as shown in fig. 3, the FPGA-based video image processing apparatus 200 specifically includes:
the video input module 201 is connected with various video acquisition devices and is used for converting and decoding various source video signals output by the various video acquisition devices to obtain first video data;
the FPGA 202 is connected with the video input module 201 and is used for performing character superposition, compression and storage on the first video data output by the video input module;
and the video output module 203 is connected with the FPGA 202 and is used for converting the video signals output by the FPGA 202 into various target video signals and outputting the target video signals to various video display devices for display.
In the embodiment of the invention, the FPGA is configured to superimpose corresponding characters on the first video data according to a superimposition instruction to obtain second video data; the FPGA is further used for compressing the second video data according to a storage instruction to obtain compressed video data; and the FPGA is also used for storing the compressed video data into a corresponding folder of a video memory.
In an embodiment of the present invention, the video input module includes: a PAL video input unit and an SDI video input unit; the PAL video input unit is connected with PAL video acquisition equipment and used for converting the differential signal output by the PAL video acquisition equipment into a single-ended signal, decoding the single-ended signal by PAL decoding and outputting the single-ended signal to the FPGA for processing; the SDI video input unit is connected with SDI video acquisition equipment and used for outputting video signals output by the SDI video acquisition equipment to the FPGA for processing after SDI decoding.
In an embodiment of the present invention, the video output module includes: VGA video output unit and DVI video output unit; the VGA video output unit is connected with VGA video display equipment and is used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the VGA video display equipment; the DVI video output unit is connected with DVI video display equipment and used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the DVI video display equipment.
Fig. 4 is a schematic structural diagram of another FPGA-based video image processing apparatus according to an embodiment of the present invention; this apparatus mainly performs video processing and data recording. For video processing, it selects one video source from two differential PAL video inputs and one SDI video input, receives OSD superimposition information through a serial port, superimposes the character information onto the input video, outputs the result through one DVI interface and one VGA interface, and at the same time performs compression, storage, video recording, playback and related functions. For data recording, it receives three channels of RS422 communication data and two channels of CAN communication data, and stores and plays back these data.
The main functions implemented by the FPGA-based video image processing apparatus of this embodiment include: one UART communication channel used for character superimposition and for responding to control commands; selection of one input from the two differential PAL-D video channels and the one HD-SDI video channel, converted simultaneously to color VGA and DVI-D outputs; the capability of transmitting VGA signals over a distance of 50 m; recording of three channels of RS422 serial data and two channels of CAN bus data; export of recorded data through an Ethernet port; erasure of recorded data; power-failure data protection, so that recorded files are not damaged by an abnormal power loss during operation; recording of compressed H.264 standard video streams; selectable recording of video files and playback of recorded video; and response to start-recording and stop-recording commands.
The FPGA-based video image processing device is provided with the following interfaces:
[The interface list of the apparatus is reproduced as a table image in the original publication.]
The PAL video interface is connected to the PAL video acquisition device. PAL video signals are transmitted over a differential line, and an AD8130 converts them into single-ended signals with the gain set to 1, bringing the image into the level range accepted by the video decoder. The video decoder is a GM5150 chip from Guoteng Electronics, which has low power consumption, supports dual-channel decoding, is configured over I2C, and outputs in ITU-656 format with line/field synchronization.
The SDI video decoding chip corresponding to that input source is a GS2971, which provides four basic modes, namely SMPTE mode (the default), DVB-ASI mode, Data-Through mode and standby mode, offers eight channels of 48 kHz audio output with an audio clock generator, can filter extra data, supports a 20-bit or 10-bit parallel bus, has error indication and correction features, and can generate HVF or CEA-861 timing signals.
The VGA output uses a GM7123 from Guoteng Electronics, whose input is 24-bit RGB data. The DVI output control logic is implemented inside the FPGA, and the DVI video drive is realized through a high-speed interface integrated in the FPGA. The serial-port control logic is also implemented in the FPGA, which contains a UART IP controller with a 1 KByte FIFO; the physical interface uses an RS422 transceiver that must support a maximum communication rate of 921.6 kbps, for which a 10 Mbit/s isolated transceiver, the RSM3422 from Zhiyuan Electronics, is selected. For the two CAN bus interfaces, the CAN controller is an SJA1000 and the transceiver a CTM1051AM; these devices are designed for data rates up to 1 Mbps and include numerous protection features that provide robustness for the devices and the CAN network. The VGA driver chip, the GM7123, converts the RGB888 digital output into analog VGA signals. In actual use the VGA cable may be as long as 50 m, and long-line transmission of the VGA signal is achieved mainly as follows: (1) a dedicated VGA display cable is used, because the analog signals in VGA have special impedance and shielding requirements, so long-line transmission requires a high-quality dedicated display cable that meets them; (2) the drive capability of the digital signals (line/field sync) is increased by using a digital driver with a drive capability of 24 mA, an approach used in a previous design and verified to drive a 50 m line; (3) the drive capability of the analog signals (R, G, B) is increased: the GM7123 has two drive modes, with a signal drive current of 18.5 mA in the low-drive mode and 26.5 mA in the high-drive mode, and in this design the GM7123 operates in the high-drive mode, so it can drive a 50 m line. A 50 m long-line drive can therefore be achieved with the GM7123.
The video image processing apparatus of this embodiment also has power-on initialization and self-test functions. After power-on the apparatus enters an initialization/self-test stage, completes initialization within 4 s, and immediately provides the image processing function and displays images. Initialization comprises: starting and configuring the Zynq-7Z035 system, configuring the parameters of the UART that receives control commands, of the three RS422 serial ports and of the two CAN buses, and configuring the status registers related to the video functions. The self-test comprises: (1) setting the serial ports to internal loopback mode, performing send/receive verification of data, and judging the self-test result; (2) setting the CAN interfaces to internal loopback mode, performing send/receive verification of data, and judging the self-test result; (3) performing a read/write self-test on the hardware registers; (4) reporting the self-test results to the main control board.
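Purely to illustrate the order of the four checks listed above, and not any real driver API, the self-test could be organized as in the sketch below; uart, can and regs are hypothetical objects standing in for the UART, CAN and register-access logic inside the Zynq/FPGA design.

```python
def power_on_self_test(uart, can, regs):
    """Sketch of the power-on self-test sequence; all driver objects are hypothetical."""
    results = {}

    uart.set_loopback(True)                     # (1) serial port in internal loopback mode
    results["uart"] = uart.send_and_verify(b"\x55\xaa")
    uart.set_loopback(False)

    can.set_loopback(True)                      # (2) CAN controller in internal loopback mode
    results["can"] = can.send_and_verify(b"\x55\xaa")
    can.set_loopback(False)

    regs.write(0x0, 0xA5A5A5A5)                 # (3) read/write check of a hardware register
    results["regs"] = regs.read(0x0) == 0xA5A5A5A5

    return results                              # (4) reported to the main control board
```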
Fig. 5 is a schematic diagram of the video data channels according to an embodiment of the present invention. As shown in fig. 5, two video data paths are designed in this embodiment: a storage data path and a video playback data path. During storage, the video data undergo character superimposition in the FPGA, and the processed data stream is sent simultaneously to the playback device for display and to the memory for compression and storage. During playback, the FPGA gates the data playback path, reads and decompresses the compressed video data from the memory, and sends the decompressed data over the playback path to the playback device for display.
FIG. 6 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in fig. 1. As shown in fig. 6, the computer device includes a processor, a memory, a network interface, an input device and a display screen connected via a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the FPGA-based video image processing method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the FPGA-based video image processing method. The display screen of the computer device may be a liquid-crystal display or an electronic-ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A video image processing method based on FPGA is characterized by comprising the following steps:
the video input module converts and decodes various source video signals output by various video acquisition devices to obtain first video data;
the FPGA carries out superposition of corresponding characters on the first video data according to a superposition instruction to obtain second video data;
the FPGA compresses the second video data according to a storage instruction to obtain compressed video data;
and the FPGA stores the compressed video data into a corresponding folder of a video memory.
2. The FPGA-based video image processing method according to claim 1, wherein the FPGA superimposes the corresponding characters on the first video data according to a superimposing instruction to obtain second video data, comprising:
the FPGA acquires the superposition instruction, and the superposition instruction comprises character information and superposition position information;
the FPGA calls a corresponding character dot-matrix chart from a character memory according to the character information;
and the FPGA replaces the color value of each pixel in the character bitmap with the color value of the corresponding pixel in each frame of image of the first video data according to the superposition position information to obtain the second video data.
3. The FPGA-based video image processing method of claim 1, wherein said FPGA compresses said second video data according to a storage instruction to obtain compressed video data, comprising:
when the storage instruction is obtained, the FPGA generates a plurality of original image blocks for each frame of image of the second video data according to a preset rule;
the FPGA performs a discrete cosine transform on the plurality of original image blocks to obtain transform coefficients of the plurality of original image blocks;
and the FPGA quantizes the transform coefficients of the plurality of original image blocks according to a preset quantization rule to obtain the compressed video data.
4. The FPGA-based video image processing method of claim 1, wherein when said storing instructions include a start storing instruction and an end storing instruction, said FPGA stores said compressed video data in a corresponding folder of a video memory, comprising:
when the FPGA acquires the storage starting instruction, newly building a folder in the video memory according to a first preset rule;
the FPGA stores the compressed video data into the new folder;
and when the FPGA acquires the storage ending instruction, stopping storing the compressed video data into the new folder.
5. The FPGA-based video image processing method according to claim 4, wherein before the FPGA, upon acquiring the storage start instruction, creates a new folder in the video memory according to the first preset rule, the method further comprises:
the FPGA acquires the number of folders in the video memory;
the FPGA judges whether the number of the folders is larger than a preset number or not;
and when the number of the folders is larger than the preset number, deleting one folder by the FPGA according to a second preset rule.
6. The FPGA-based video image processing method of claim 1, further comprising:
the video output module converts the second video data into RGB format and then outputs DVI video data or VGA video data; or/and
the video output module decompresses and converts the compressed data in the video memory into RGB format and outputs DVI video data or VGA video data.
7. An apparatus for processing video images based on an FPGA, the apparatus comprising:
the video input module is connected with various video acquisition devices and used for converting and decoding various source video signals output by the various video acquisition devices to obtain first video data;
the FPGA is connected with the video input module and is used for performing character superposition, compression and storage on the first video data output by the video input module;
and the video output module is connected with the FPGA and used for converting the video signals output by the FPGA into various target video signals and outputting the target video signals to various video display equipment for display.
8. The FPGA-based video image processing apparatus of claim 7,
the FPGA is used for superposing corresponding characters on the first video data according to a superposition instruction to obtain second video data;
the FPGA is further used for compressing the second video data according to a storage instruction to obtain compressed video data;
and the FPGA is also used for storing the compressed video data into a corresponding folder of a video memory.
9. The FPGA-based video image processing apparatus of claim 7 wherein said video input module comprises: a PAL video input unit and an SDI video input unit;
the PAL video input unit is connected with PAL video acquisition equipment and used for converting the differential signal output by the PAL video acquisition equipment into a single-ended signal, decoding the single-ended signal by PAL decoding and outputting the single-ended signal to the FPGA for processing;
the SDI video input unit is connected with SDI video acquisition equipment and used for outputting video signals output by the SDI video acquisition equipment to the FPGA for processing after SDI decoding.
10. The FPGA-based video image processing apparatus of claim 7 wherein said video output module comprises: VGA video output unit and DVI video output unit;
the VGA video output unit is connected with VGA video display equipment and is used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the VGA video display equipment;
the DVI video output unit is connected with DVI video display equipment and used for converting the second video data or the compressed video data output by the FPGA into RGB format and outputting the RGB format to the DVI video display equipment.
CN202010584950.9A 2020-06-24 2020-06-24 Video image processing method and device based on FPGA Pending CN113840101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010584950.9A CN113840101A (en) 2020-06-24 2020-06-24 Video image processing method and device based on FPGA

Publications (1)

Publication Number Publication Date
CN113840101A true CN113840101A (en) 2021-12-24

Family

ID=78964290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010584950.9A Pending CN113840101A (en) 2020-06-24 2020-06-24 Video image processing method and device based on FPGA

Country Status (1)

Country Link
CN (1) CN113840101A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004135009A (en) * 2002-10-10 2004-04-30 Alpine Electronics Inc On-screen display unit
CN101605259A (en) * 2009-05-31 2009-12-16 华亚微电子(上海)有限公司 Multi-medium data is carried out the device and method of conversion coding and decoding
CN102148988A (en) * 2011-04-20 2011-08-10 上海交通大学 High speed JPEG (joint photographic expert group) image processing system based on FPGA (field programmable gate array) and processing method thereof
CN103905830A (en) * 2012-12-27 2014-07-02 联芯科技有限公司 Inverse discrete cosine transformation (IDCT) method and apparatus
CN103927734A (en) * 2013-01-11 2014-07-16 华中科技大学 Method for evaluating quality of blurred images based on no-reference
CN104717466A (en) * 2015-02-09 2015-06-17 深圳市振华微电子有限公司 HD-SDI video processing board based on FPGA
CN104954633A (en) * 2014-03-28 2015-09-30 北京中投视讯文化传媒有限公司 Live broadcasting instruction method, client and system
CN205883440U (en) * 2016-06-29 2017-01-11 江苏三棱智慧物联发展股份有限公司 PC video capture card with video compression function
CN107169950A (en) * 2017-06-02 2017-09-15 江苏北方湖光光电有限公司 A kind of high-definition picture fusion treatment circuit
CN107943782A (en) * 2017-10-10 2018-04-20 深圳市金立通信设备有限公司 A kind of character processing method and terminal
CN207530948U (en) * 2017-10-13 2018-06-22 北京富力天创科技有限公司 A kind of Video Character Superpose system
CN111010541A (en) * 2019-12-11 2020-04-14 重庆山淞信息技术有限公司 Video processing module based on FPGA and compression processor
CN111726634A (en) * 2020-07-01 2020-09-29 成都傅立叶电子科技有限公司 High-resolution video image compression transmission method and system based on FPGA

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278160A (en) * 2022-06-22 2022-11-01 上海卓昕医疗科技有限公司 Device and method for integrating and displaying surgical robot data
CN114938433A (en) * 2022-07-25 2022-08-23 四川赛狄信息技术股份公司 Video image processing method, system, terminal and medium based on FPGA
CN114938433B (en) * 2022-07-25 2022-10-11 四川赛狄信息技术股份公司 Video image processing method, system, terminal and medium based on FPGA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination