CN114554155A - Remote image acquisition device

Info

Publication number
CN114554155A
CN114554155A
Authority
CN
China
Prior art keywords
data
image
module
signal
image data
Prior art date
Legal status: Pending
Application number
CN202210180526.7A
Other languages
Chinese (zh)
Inventor
李祺
程晓光
郑维宁
朱婕
杨宪铭
赵越
Current Assignee
Beijing Electromechanical Engineering Research Institute
Original Assignee
Beijing Electromechanical Engineering Research Institute
Priority date
Filing date
Publication date
Application filed by Beijing Electromechanical Engineering Research Institute
Priority: CN202210180526.7A
Publication: CN114554155A

Classifications

    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 19/42 Coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423 Coding, decoding, compressing or decompressing digital video signals characterised by memory arrangements
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 23/617 Control of cameras or camera modules; upgrading or updating of programs or applications for camera control
    • H04N 23/62 Control of cameras or camera modules; control of parameters via user interfaces
    • H04N 23/661 Remote control of cameras or camera parts; transmitting camera control signals through networks, e.g. control via the Internet

Abstract

The invention discloses a remote image acquisition device. An image is acquired through the image acquisition module; in the image processing module, the FPGA (field programmable gate array) of a ZYNQ chip mainly completes system control, data storage, transmission and post-processing, while the ARM (Advanced RISC Machine) processor pre-processes the ADC (analog-to-digital converter) input data and outputs it to a DAC (digital-to-analog converter). Data interaction between the PS side and the PL side is carried out over AXI buses, comprising an AXI4 bus and an AXI4_Lite bus: the AXI4 bus has a 64-bit data width and is used to transmit image data with a large data volume and strict transmission-delay requirements; the AXI4_Lite bus has a 32-bit data width and is used for control data transmission.

Description

Remote image acquisition device
Technical Field
The invention belongs to the technical field of image acquisition, and particularly relates to a remote image acquisition device.
Background
Image acquisition and storage devices are widely applied in fields such as industrial production, medical health and aerospace, and the image acquisition terminal is of great significance for improving image data processing speed and reducing image transmission delay. In specific scenarios, such as remote operations, space probes and unmanned-aerial-vehicle power-line inspection, the requirements on the real-time performance and stability of remote image transmission are high, and a large amount of time is spent on image acquisition, transmission and processing. For example, a frame of 8-bit digitized image data produced by the analog-to-digital converter is 1728 × 625 samples, about 1.03 MB, so a high transmission bandwidth is required to guarantee the integrity of the image data when the sensor continuously acquires images at a high frame rate. In addition, data processing is required to make effective use of the image information. Conventional DSP or ARM processors have excellent control capability, but they have a low sampling rate, execute instructions serially and rely on floating-point arithmetic, so it is difficult for them to directly process image data with a large data volume, strong pixel correlation and a wide frequency band.
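The 1.03 MB figure follows directly from the raster dimensions at 8 bits (one byte) per sample:

$1728 \times 625 \times 1\ \text{byte} = 1\,080\,000\ \text{bytes} \approx 1.03\ \text{MB}$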
The characteristics of the FPGA, such as concurrent data processing, pipelining, simultaneous reception and processing, and high-speed interfaces, fit the present requirements of image data transmission and processing well, but the FPGA is comparatively weak in peripheral control. In order to build a programmable image acquisition system with low image transmission delay that supports remote network image transmission, a high-performance ZYNQ chip from Xilinx is adopted. The ZYNQ chip integrates an ARM processor and an FPGA, interconnected by a high-speed AXI bus, which makes it possible to maximise processing speed, control capability and transmission rate. At the same time, the ARM processor, running a Linux system with the GStreamer streaming-media framework, can push the processed image data onto the network to realise remote image transmission.
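The patent does not detail the network push itself; purely as an illustrative sketch of the GStreamer route mentioned here, the following C fragment shows how an embedded Linux application on the ARM side might hand processed frames to a streaming pipeline. The pipeline layout (appsrc feeding an H.264 RTP/UDP sink), the 720 × 576 at 25 fps caps and the destination address are assumptions, not values taken from the patent.

```c
/* Hedged sketch: push preprocessed I420 (YUV 4:2:0) frames to the network with
 * GStreamer from a Linux application on the ARM side. Pipeline layout, caps and
 * the destination address are illustrative assumptions, not patent values. */
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time "
        "caps=video/x-raw,format=I420,width=720,height=576,framerate=25/1 "
        "! videoconvert ! x264enc tune=zerolatency "
        "! rtph264pay ! udpsink host=192.168.1.100 port=5000", &err);
    if (pipeline == NULL) {
        g_printerr("pipeline error: %s\n", err->message);
        return 1;
    }

    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* One placeholder 720x576 I420 frame (Y plane plus quarter-size Cb and Cr
     * planes); in the real device the buffer would come from the DMA-filled
     * DDR region. */
    gsize frame_size = 720 * 576 * 3 / 2;
    GstBuffer *buf = gst_buffer_new_allocate(NULL, frame_size, NULL);
    GST_BUFFER_PTS(buf) = 0;
    GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, 25);
    gst_app_src_push_buffer(GST_APP_SRC(src), buf);   /* takes ownership of buf */

    gst_app_src_end_of_stream(GST_APP_SRC(src));
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(src);
    gst_object_unref(pipeline);
    return 0;
}
```

On the receiving side a matching pipeline (udpsrc, rtph264depay, a decoder and a video sink) could display the stream; the patent itself only states that GStreamer is used for the network push.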
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a remote image acquisition device, which carries out real-time data processing on data acquired by an image sensor through embedded software, hardware and an expansion port and pushes the data to a network port so as to realize remote acquisition of image data.
In order to achieve the above object, the present invention provides a remote image acquisition device, comprising: a dual-channel image acquisition module, a high-speed real-time image processing module and a human-computer interaction module;
the dual-channel image acquisition module comprises two image acquisition sensors, which acquire images, convert the acquired optical signals into electrical signals, and feed the converted electrical signals into the high-speed real-time image processing module through a coaxial cable;
the high-speed real-time image processing module comprises a video decoding chip, a Zynq chip, a network port and a memory; the Zynq chip further comprises an FPGA and an ARM processor;
the ARM processor runs an embedded control program and is responsible for receiving instructions from a remote PC; after parsing the instructions, it transmits control signals to the FPGA side through the AXI-Lite bus, configures the video decoding chip configuration module, the channel selection module and the AXI_DMA IP core, and, once configuration is complete, performs function scheduling for the acquisition device, specifically: the video decoding chip decodes the electrical signal converted by the image acquisition sensor and outputs a digital video signal in the ITU-R BT.656 standard format; the ARM processor controls the FPGA to perform image data recombination on the digital video signal, transmits the recombined image data to the memory in DMA mode for storage, and at the same time sends the recombined image data to the human-computer interaction module through the network port;
the human-computer interaction module is used for configuring the IP address of the image acquisition device, realizing remote transmission of the signals, and displaying the acquired images in real time on the display.
The object of the invention is achieved as follows:
the invention relates to a remote image acquisition device, which comprises an image acquisition module, an image processing module, an ARM processor and a DAC (digital-to-analog converter), wherein the image acquisition module is used for acquiring an image, and then the FPGA of a ZYNQ chip is used for mainly finishing system control, data storage and transmission and post-processing in the image processing module; performing data interaction between the PS end and the PL end through an AXI bus, wherein the data interaction comprises an AXI4 bus and an AXI4_ Lite bus, and the data bit width of the AXI4 bus is 64 bits and is used for transmitting image data with large data volume and high transmission delay requirement; the AXI4_ Lite bus has a data bit width of 32 bits and is used for controlling data transmission.
Meanwhile, the remote image acquisition device provided by the invention also has the following beneficial effects:
(1) the image processing part of the device realizes high-speed parallel data processing with the logic resources of the FPGA and can receive and process the image data stream simultaneously;
(2) exploiting the format of the input image data, the gray-scale value Y is kept unchanged, the transmission bit width is expanded by the serial-to-parallel conversion module, and Cr and Cb are then further compressed, so that the amount of data to be transmitted is reduced while the effective data are retained, effectively improving transmission efficiency;
(3) the processed data are transferred with AXI-DMA, which converts the image data stream directly into memory-mapped data stored in the DDR memory; DMA transfer over the AXI interconnect bus inside the Zynq chip is more efficient than DMA transfer over other interfaces;
(4) function selection and power-down handling of part of the circuits are implemented in software, avoiding the situation on traditional hardware platforms where unused functions still consume considerable power.
Drawings
FIG. 1 is a block diagram of a remote image acquisition device according to the present invention;
FIG. 2 is a block diagram of the structure of an image processing module;
FIG. 3 is a schematic diagram of the 625-line data format of the ITU-R BT.656 standard;
FIG. 4 is a line-data diagram of the 625-line data format of the ITU-R BT.656 standard;
FIG. 5 is a schematic diagram of an auxiliary signal encoding scheme;
FIG. 6 is a schematic diagram of an arrangement of valid video signal data;
FIG. 7 is a schematic diagram of a human-computer interaction interface of the image acquisition system;
fig. 8 is an image capture remote display effect diagram.
Detailed Description
The following embodiments of the present invention are described with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be expressly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the present invention.
Examples
For convenience of description, the related terms appearing in the detailed description are explained:
FPGA (Field Programmable Gate Array): field-programmable gate array;
ARM (Advanced RISC Machine): advanced reduced-instruction-set processor;
LVDS (Low-Voltage Differential Signaling): low-voltage differential signalling;
DMA (Direct Memory Access): direct memory access;
FIFO (First In, First Out): first-in, first-out;
PC (Personal Computer): personal computer;
fig. 1 is a schematic diagram of an embodiment of a remote image capturing device according to the present invention.
In this embodiment, as shown in fig. 1, the remote image acquisition device according to the present invention comprises: a dual-channel image acquisition module, a high-speed real-time image processing module and a human-computer interaction module;
the dual-channel image acquisition module comprises two image acquisition sensors, which acquire images, convert the acquired optical signals into electrical signals, and feed the converted electrical signals into the high-speed real-time image processing module through a coaxial cable;
the high-speed real-time image processing module comprises a video decoding chip, a Zynq chip, a network port and a memory; the Zynq chip further comprises an FPGA and an ARM processor;
the video decoding chip is used for converting the electrical signal output by the image sensor into a digital video signal in the ITU-R BT.656 standard format; as shown in fig. 2, the working state of the video decoding chip is configured through an I2C bus by a video decoding chip configuration module in the FPGA part of the Zynq chip. After the video decoding chip has been configured, it continuously converts the analog signal from the image sensor under a 27 MHz clock and feeds the converted digital signal into the FPGA side of the Zynq chip through an LVDS interface;
as shown in fig. 2, the FPGA includes a video chip configuration module, a channel selection module, a data analysis module, an image preprocessing module, and an image data recombination module;
the FPGA first selects one channel of the digital video signal through the channel selection module and outputs it to the data analysis module; according to the structure of the ITU-R BT.656 format, the data analysis module parses the flag information of the digital video signal, including: valid data, blanking signal flag, data error flag, head and tail signals of the image frame, line synchronizing signal, field synchronizing signal and composite synchronizing signal. It then judges from the data error flag whether the currently parsed image data is complete; if the information is incomplete, the current image frame is discarded and the data analysis module waits for the next frame of image data; otherwise, the data analysis module passes the complete, error-free image data, together with the parsed flag information, to the image preprocessing module;
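The decoder configuration described above is performed over I2C by a module inside the FPGA fabric; purely as a user-space stand-in illustrating the same kind of register write, the sketch below uses the Linux i2c-dev interface. The bus number, the 7-bit slave address 0x5C and the register/value pair are placeholders, not taken from the patent or from a specific datasheet.

```c
/* Hedged, user-space stand-in for the FPGA-side I2C configuration write
 * described above, using the Linux i2c-dev interface. Bus number, slave
 * address and register/value pair are illustrative placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

static int decoder_write_reg(int fd, uint8_t reg, uint8_t val)
{
    uint8_t msg[2] = { reg, val };           /* register address, then data */
    return (write(fd, msg, sizeof msg) == (ssize_t)sizeof msg) ? 0 : -1;
}

int main(void)
{
    const uint8_t slave_addr = 0x5C;         /* assumed 7-bit decoder address */

    int fd = open("/dev/i2c-0", O_RDWR);     /* assumed I2C bus number */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, slave_addr) < 0) { perror("ioctl"); return 1; }

    /* Illustration only: a single input-selection register write; the real
     * register map comes from the decoder's datasheet. */
    if (decoder_write_reg(fd, 0x00, 0x00) != 0)
        fprintf(stderr, "register write failed\n");

    close(fd);
    return 0;
}
```

In the device itself this register sequence is issued by the video decoding chip configuration module in the PL, not by Linux; the sketch only illustrates the nature of the I2C traffic.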
in this embodiment, the ITU-R BT.656 standard divides a video sequence into N frames; a frame of the PAL-format BT.656 stream has 625 lines, of which the top-field valid data occupies 288 lines, the bottom-field valid data likewise occupies 288 lines, and the remaining lines are vertical blanking signals used to mark and distinguish the two fields. Interlaced scanning is used when the image is acquired, so each frame generally comprises two fields, one called the top field and the other the bottom field; because of the interlacing, the top and bottom fields can also be called the even and odd fields. The 625-line data format of the ITU-R BT.656 standard is shown in FIG. 3;
each line of the BT.656 standard consists essentially of four parts: line = end code (EAV) + horizontal blanking + start code (SAV) + valid data (active video), as shown in fig. 4.
Each line of data is encoded as 8-bit values and comprises the auxiliary signals (SAV, EAV), the line blanking signal and the active video signal. The auxiliary signals SAV and EAV respectively mark the beginning and the end of the data in a line, and each is a 4-byte code consisting of the hexadecimal sequence FF 00 00 XY. FF 00 00 is the fixed preamble of SAV and EAV, and XY is the information byte of the auxiliary signal; its coding format is shown in FIG. 5. The most significant bit (bit 7) of XY is fixed to 1; F = 0 indicates the even field and F = 1 the odd field; V = 0 indicates that the line carries active video data and V = 1 that it does not; H = 0 marks an SAV and H = 1 an EAV. P3-P0 are protection bits generated from the F, V and H signals: P3 = V XOR H; P2 = F XOR H; P1 = F XOR V; P0 = F XOR V XOR H. When V = 0 in the timing signal, the line carries video data; when V = 1, the line carries auxiliary data (when there is no auxiliary data, the blanking interval is normally filled with the alternating values 0x10 and 0x80). The different EAV and SAV values are shown in Table 1.
Table 1: line auxiliary signals (EAV/SAV) of the BT.656 625-line data
Line number F V EAV SAV
1-22 0 1 0xb6 0xab
23-310 0 0 0x9d 0x80
311-312 0 1 0xb6 0xab
313-335 1 1 0xf1 0xec
336-623 1 0 0xda 0xc7
624-625 1 1 0xf1 0xec
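The XY control word defined above can be checked directly against Table 1: a minimal C sketch of the bit layout (bit 7 fixed to 1, then F, V, H and the four XOR-generated protection bits) reproduces every EAV/SAV value in the table.

```c
/* Build the BT.656 XY control word exactly as described in the text:
 * bit7 = 1, bit6 = F, bit5 = V, bit4 = H, bits 3..0 = P3..P0 protection bits
 * with P3 = V^H, P2 = F^H, P1 = F^V, P0 = F^V^H. */
#include <stdint.h>
#include <stdio.h>

static uint8_t bt656_xy(unsigned f, unsigned v, unsigned h)
{
    unsigned p3 = v ^ h;
    unsigned p2 = f ^ h;
    unsigned p1 = f ^ v;
    unsigned p0 = f ^ v ^ h;
    return (uint8_t)(0x80u | (f << 6) | (v << 5) | (h << 4) |
                     (p3 << 3) | (p2 << 2) | (p1 << 1) | p0);
}

int main(void)
{
    /* Reproduce the EAV/SAV columns of Table 1 (H = 1 for EAV, H = 0 for SAV). */
    printf("F=0 V=1: EAV=0x%02X SAV=0x%02X\n", bt656_xy(0, 1, 1), bt656_xy(0, 1, 0)); /* 0xB6 0xAB */
    printf("F=0 V=0: EAV=0x%02X SAV=0x%02X\n", bt656_xy(0, 0, 1), bt656_xy(0, 0, 0)); /* 0x9D 0x80 */
    printf("F=1 V=1: EAV=0x%02X SAV=0x%02X\n", bt656_xy(1, 1, 1), bt656_xy(1, 1, 0)); /* 0xF1 0xEC */
    printf("F=1 V=0: EAV=0x%02X SAV=0x%02X\n", bt656_xy(1, 0, 1), bt656_xy(1, 0, 0)); /* 0xDA 0xC7 */
    return 0;
}
```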
In the image preprocessing module, the data separation module first uses the data flag information to extract, from the ITU-R BT.656-format image data, the lines containing valid data in the even field and in the odd field respectively, while stripping the invalid field-blanking signals;
the preliminarily extracted lines still comprise auxiliary signals, line blanking signals and effective image signals; the auxiliary signals comprise SAV and EAV, which respectively mark the beginning and end of a data line, and the line blanking data are further removed with the help of the auxiliary signals, so that the remaining image data comprise only the effective gray-scale values Y and the chrominance values Cr and Cb;
in this embodiment, the line-blanking data consist of the alternating values 0x80 and 0x10, 280 bytes in total, and the design removes this useless blanking data at the PL side according to the changes of the auxiliary signals, reducing both the transmission time of a frame of useful signal and the time the software spends processing useless data. The arrangement of the valid video signal is shown in fig. 6, where Y represents luminance (luma), i.e. the gray-scale value, while Cr and Cb represent chrominance: Cr reflects the difference between the red part of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue part of the RGB input signal and the luminance value of the RGB signal.
In this embodiment, the effective image data are organized so that each pixel has its own Y value while two adjacent pixels share one pair of Cr and Cb values, i.e. the ratio of Y, Cr and Cb is 4:2:2. To further reduce the amount of data to be transmitted, the later stage needs to process the Cr and Cb data separately, so the data separation module outputs the Y data and the Cr and Cb data of the valid data separately. The serial-to-parallel conversion module in the image preprocessing module expands the bandwidth of data transmission and improves the transfer capability. The Y data have a large influence on image quality, whereas the Cr and Cb components have a small influence on image formation, so the data volume is reduced by further compressing the effective data of the Cr and Cb components. The specific processing procedure is as follows: the gray-scale value Y is kept unchanged, the transmission bit width is first expanded through the serial-to-parallel conversion module, and Cr and Cb are then further compressed: the Cb and Cr values at corresponding positions of the progressive image data are added and divided by 2 to obtain new Cb and Cr; finally the processed Cb, Cr and Y are output synchronously to the image data recombination module after FIFO (first-in, first-out) buffering;
the image data recombination module is used to recombine the effective Y, Cr and Cb image data. The specific recombination is as follows: using the correspondence between the Y values and each group of Cr and Cb, the image data are recombined into the YUV 4:2:0 data format; the recombined image data are output to the AXI_DMA IP core through an interface conforming to the AXI data stream, transmitted to the network port of the acquisition device in the form of network packets, and sent to the human-computer interaction module;
in this embodiment, the external PAL tv broadcast signal is an analog video signal, and in order to implement ZYNQ data processing, a/D conversion is also required, and the analog video signal is converted into a digital video signal of ITU-R bt.656 standard after TVP5150AM1 conversion, but the video signal at this time contains a large amount of unnecessary signals such as column blanking signal, row blanking signal, and auxiliary signal, and cannot be directly transmitted to an upper computer for data processing and image display. The final display still needs to rearrange the line field data in the ITU-R bt.656 video standard, extract the gray scale and chrominance information in the effective line, i.e. Y, Cr, Cb are accurately separated out, and rearrange according to the odd and even fields, so as to be available for software playing. Therefore, video decoding work is particularly important in the system, and the current modes for realizing video decoding are divided into two types: software decoding and hardware decoding.
In an early version of the design, software decoding was used and the hardware was responsible only for data transfer. Under that architecture the software executed a large number of repeated loops, the amount of data to be processed was large, system power consumption and CPU occupancy rose, and frequent accesses to the DDR lengthened the time needed for data transfer and parsing. In the end only 3 to 5 video frames per second could be played, far below the 25 frames per second of the image acquisition front end. The BT.656-standard digital video signal is therefore parsed by the FPGA according to the image preprocessing requirements: the field synchronizing signal (F), the vertical synchronizing signal (V) and the horizontal synchronizing signal (H) are extracted from the XY control word of each line, and the progressive image is assembled by extracting the valid data of each frame.
Every two adjacent lines form a group, and all the Y, Cb and Cr data of the two lines are extracted from the original CbYCrY... sequence. Each group is arranged as 720 bytes of Y data for the first line (DMA_tvalid asserted for 180 consecutive 100 MHz clock cycles), followed by 720 bytes of Y data, 360 bytes of preprocessed Cb data and 360 bytes of preprocessed Cr data for the second line (DMA_tvalid asserted for 360 consecutive 100 MHz clock cycles for the second line's Y, Cb and Cr); 288 such groups are transmitted to the DMA after FIFO buffering.
In order to further improve data transmission efficiency and ensure the real-time performance of the system, the digital video data are compressed further by converting them from YCrCb 4:2:2 to YCrCb 4:2:0.
The ratio between the luminance and chrominance values delivered as a line is scanned is commonly used to describe the various sampling schemes. The ratio is normally referenced to the luminance values and written as 4:X:Y, where X and Y are the relative numbers of samples in the two chrominance channels. 4:2:2 means that on each horizontal scan line every four luminance values correspond to two values in each chrominance channel; 4:1:1 means that every four luminance values correspond to one chrominance value; and 4:4:4 means that the chrominance values are not subsampled. The notation is not entirely consistent, however: read literally, 4:2:0 would mean four luminance values with two samples of the first chrominance component and none of the second, which could not produce a complete color image. In practice, 4:2:0 means that each scan line carries two chrominance samples, but the chrominance is sampled only on alternate (interlaced) lines.
Although converting 4:2:2 to 4:2:0 may reduce color saturation in fine details, it generally does not reduce the color saturation of large objects. The specific processing is as follows: the Y data of the original image are retained, the Cb and Cr values at corresponding positions of the progressive image data are added and divided by 2 to obtain new Cb and Cr, the data are then grouped two lines at a time, and the 288 groups of each image frame are transmitted to the DMA through the FIFO buffer. This processing further compresses the data volume of a frame and effectively improves data transmission efficiency.
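The patent performs this step in FPGA logic; as a software model of the same arithmetic only, the C sketch below keeps all Y samples of two adjacent active lines and averages their Cb and Cr values at corresponding positions, producing one set of chroma per line pair, i.e. 4:2:0. The interleaved CbYCrY input layout and the separate output planes are assumptions about buffer layout; the 720-pixel active line width matches the figures used above.

```c
/* Software model of the 4:2:2 -> 4:2:0 preprocessing described above: keep all
 * Y samples of two adjacent active lines and average their Cb/Cr values at
 * corresponding positions ("add and divide by 2"). Input/output buffer layout
 * is an illustrative assumption. */
#include <stdint.h>
#include <stddef.h>

#define ACTIVE_PIXELS 720                     /* active pixels per BT.656 line */

/* line0/line1: two adjacent active lines in Cb Y Cr Y order (1440 bytes each).
 * y0/y1: 720 Y samples per line; cb/cr: 360 averaged chroma samples each.     */
static void bt656_pair_to_420(const uint8_t *line0, const uint8_t *line1,
                              uint8_t *y0, uint8_t *y1,
                              uint8_t *cb, uint8_t *cr)
{
    for (size_t px = 0; px < ACTIVE_PIXELS / 2; ++px) {
        const uint8_t *p0 = line0 + 4 * px;   /* Cb Y Cr Y for one pixel pair */
        const uint8_t *p1 = line1 + 4 * px;

        y0[2 * px]     = p0[1];               /* Y values pass through unchanged */
        y0[2 * px + 1] = p0[3];
        y1[2 * px]     = p1[1];
        y1[2 * px + 1] = p1[3];

        cb[px] = (uint8_t)((p0[0] + p1[0]) / 2);   /* average Cb of the two lines */
        cr[px] = (uint8_t)((p0[2] + p1[2]) / 2);   /* average Cr of the two lines */
    }
}
```

Applied to the 288 active-line pairs of a frame, this yields exactly the 288 groups that are sent to the DMA after FIFO buffering.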
Finally, for each frame of the 625-line BT.656-standard digital video signal, the line-blanking signals are removed at the PL side according to the changes of the auxiliary signals, the useful data are rearranged and transferred to the memory through DMA, and the software interleaves the odd and even lines so that the image is displayed correctly.
The ARM processor runs an embedded control program and is responsible for receiving instructions from a remote PC; after parsing the instructions, it transmits control signals to the FPGA side through the AXI-Lite bus, configures the video decoding chip configuration module, the channel selection module and the AXI_DMA IP core, and, once configuration is complete, performs function scheduling for the acquisition device, specifically: the video decoding chip decodes the electrical signal converted by the image acquisition sensor and outputs a digital video signal in the ITU-R BT.656 standard format; the ARM processor controls the FPGA to perform image data recombination on the digital video signal, transmits the recombined image data to the memory in DMA mode for storage, and at the same time sends the recombined image data to the human-computer interaction module through the network port;
the man-machine interaction module is used for configuring the IP address of the image acquisition device, realizing remote transmission of signals and displaying the acquired image in real time through the display.
Results and analysis of the experiments
The FPGA side is developed with Xilinx Vivado software and a hardware platform file is exported. An embedded Linux system matched to the hardware platform is built with the PetaLinux tools, and after the system has been ported, the driver software and application program are imported into it. The device is then powered up again; once the embedded system has booted, the driver is loaded and the application program is run, the initialization of each module of the device and the network listening are started, and the device then waits for the remote client to connect.
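The initialization mentioned above ultimately reaches the PL-side modules (channel selection, AXI_DMA and so on) through their AXI-Lite register interfaces. Purely as a hedged illustration, the sketch below shows how a Linux application could write such registers by mapping the PL block through /dev/mem; the base address 0x43C00000 and the register offsets are hypothetical placeholders, and a production design would normally go through the loaded driver rather than raw /dev/mem access.

```c
/* Hedged illustration: write PL-side configuration registers over AXI-Lite by
 * mapping the register block through /dev/mem. Base address and offsets are
 * hypothetical placeholders; real addresses come from the exported hardware
 * platform, and real systems would usually use a kernel/UIO driver instead. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PL_REGS_BASE  0x43C00000u   /* assumed AXI-Lite base of the PL block  */
#define REG_CHANNEL   0x00u         /* assumed: selects acquisition channel   */
#define REG_START     0x04u         /* assumed: write 1 to start acquisition  */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PL_REGS_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    regs[REG_CHANNEL / 4] = 0;      /* select acquisition channel 1 */
    regs[REG_START   / 4] = 1;      /* start the configured acquisition path */

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}
```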
Functional test: the remote monitoring equipment and the image acquisition equipment are connected to the same subnet for the network transmission and video display test. The test steps are as follows:
a) connect the designed image acquisition equipment to a computer with a network cable, connect a DC power supply to the acquisition equipment with a 12 V power cable, set the current of the DC power supply to 3 A and enable the output;
b) start the host-computer software on the PC, set the IP address of the image acquisition equipment, and click to open the connection;
c) connect a video source to an input channel of the image acquisition equipment and output image data; configure the acquisition channel to be viewed through the host-computer software and start the transmission of video data.
The host-computer software provides a video display window; when data transmission is off, the display window defaults to a black screen and the configuration menu bar is not configurable. The human-computer interaction interface of the image acquisition system is shown in fig. 7.
When the host-computer software has established a network connection with the image acquisition equipment and the acquisition equipment has completed its self-check and returned a self-check success signal, the other control windows of the human-computer interaction interface switch to a configurable state. The acquisition equipment is configured to open channel 1 and start remote acquisition; the experimental result is shown in fig. 8.
The display duration of the acquired data and the cumulative number of displayed frames are shown at the upper left of the display window. The whole video acquisition and display process is very smooth, with a refresh rate of 25 frames per second, fully consistent with the acquisition rate of the front-end ADC.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all subject matter that makes use of the inventive concept is protected.

Claims (3)

1. A remote image acquisition device, comprising: a dual-channel image acquisition module, a high-speed real-time image processing module and a human-computer interaction module;
the dual-channel image acquisition module comprises two image acquisition sensors, which acquire images, convert the acquired optical signals into electrical signals, and feed the converted electrical signals into the high-speed real-time image processing module through a coaxial cable;
the high-speed real-time image processing module comprises a video decoding chip, a Zynq chip, a network port and a memory; the Zynq chip also comprises an FPGA and an ARM processor;
the ARM processor runs an embedded control program and is responsible for receiving instructions from a remote PC; after parsing the instructions, it transmits control signals to the FPGA side through the AXI-Lite bus, configures the video decoding chip configuration module, the channel selection module and the AXI_DMA IP core, and, once configuration is complete, performs function scheduling for the acquisition device, specifically: the video decoding chip decodes the electrical signal converted by the image acquisition sensor and outputs a digital video signal in the ITU-R BT.656 standard format; the ARM processor controls the FPGA to perform image data recombination on the digital video signal, transmits the recombined image data to the memory in DMA mode for storage, and at the same time sends the recombined image data to the human-computer interaction module through the network port;
the man-machine interaction module is used for configuring the IP address of the image acquisition device, realizing remote transmission of signals and displaying acquired images in real time through the display.
2. The remote image capturing device as claimed in claim 1, wherein the working state of the video decoding chip is configured by the video chip configuration module through an I2C bus; after the configuration of the video decoding chip is completed, the chip continuously decodes the electrical signal converted by the image capturing sensor under a 27 MHz clock, and the converted digital video signal is fed into the FPGA side of the Zynq chip through the LVDS interface.
3. The remote image capturing device as claimed in claim 1, wherein the FPGA comprises a video chip configuration module, a channel selection module, a data analysis module, an image preprocessing module and an image data recombination module;
the FPGA selects one channel of the digital video signal through the channel selection module and outputs it to the data analysis module, and the data analysis module parses the flag information of the digital video signal, including: valid data, blanking signal flag, data error flag, head and tail signals of the image frame, line synchronizing signal, field synchronizing signal and composite synchronizing signal; it then judges from the data error flag whether the currently parsed image data is complete; if the information is incomplete, the current image frame is discarded and the data analysis module waits for the input of the next frame of image data; otherwise, the data analysis module passes the complete, error-free image data, together with the parsed flag information, to the image preprocessing module;
in the image preprocessing module, a data separation module first uses the data flag information to extract, from the ITU-R BT.656-format image data, the lines containing valid data in the even field and in the odd field respectively, while stripping the invalid field-blanking signals; the preliminarily extracted lines still comprise auxiliary signals, line blanking signals and effective image signals, the line blanking data are further removed with the help of the auxiliary signals, and the remaining image data comprise only the effective gray-scale values Y and the chrominance values Cr and Cb, where Cr reflects the difference between the red part of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue part of the RGB input signal and the luminance value of the RGB signal; the gray-scale value Y is kept unchanged, the transmission bit width is first expanded through a serial-to-parallel conversion module, Cr and Cb are then further compressed into new Cr and Cb, and finally the processed Cr, Cb and Y are output synchronously to the image data recombination module after FIFO (first-in, first-out) buffering;
the image data recombination module is used to recombine the effective Y, Cr and Cb image data, specifically: using the correspondence between the Y values and each group of Cr and Cb, the image data are recombined into the YUV 4:2:0 data format; the recombined image data are output to the AXI_DMA IP core through an interface conforming to the AXI data stream, transmitted to the network port of the acquisition device in the form of network packets, and sent to the human-computer interaction module.
CN202210180526.7A 2022-02-25 2022-02-25 Remote image acquisition device Pending CN114554155A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210180526.7A CN114554155A (en) 2022-02-25 2022-02-25 Remote image acquisition device

Publications (1)

Publication Number Publication Date
CN114554155A 2022-05-27

Family

ID=81679858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210180526.7A Pending CN114554155A 2022-02-25 2022-02-25 Remote image acquisition device

Country Status (1)

Country Link
CN (1) CN114554155A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114755961A (en) * 2022-06-15 2022-07-15 浙江杭可仪器有限公司 Control system of aging test box


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination