CN112995505B - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN112995505B
Authority
CN
China
Prior art keywords
fpga
image data
data
cache
image
Prior art date
Legal status
Active
Application number
CN202110179495.9A
Other languages
Chinese (zh)
Other versions
CN112995505A (en)
Inventor
秦明伟
王焕
李瑶
侯宝临
焦慧龙
姚远程
Current Assignee
Sichuan Ming Lin Hui Technology Co ltd
Southwest University of Science and Technology
Original Assignee
Sichuan Ming Lin Hui Technology Co ltd
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Sichuan Ming Lin Hui Technology Co ltd, Southwest University of Science and Technology filed Critical Sichuan Ming Lin Hui Technology Co ltd
Priority to CN202110179495.9A priority Critical patent/CN112995505B/en
Publication of CN112995505A publication Critical patent/CN112995505A/en
Application granted granted Critical
Publication of CN112995505B publication Critical patent/CN112995505B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method and device for improving image processing efficiency and accuracy. The method is applied to a device comprising a first FPGA and a second FPGA, where the first FPGA is connected to a client and the second FPGA is connected to a camera. The method comprises the following steps: the first FPGA receives image data transmitted from the second FPGA, performs circle-searching processing on the received image data, and outputs the circle-searching result and the image data to the client; the second FPGA receives original image data from the camera and processes the original image data to obtain a first processing result; the first processing result is used by the first FPGA, in combination with the original image data of the object to be detected, to identify the object to be detected. Because the first FPGA and the second FPGA work cooperatively, each can bear part of the image processing tasks, improving data processing efficiency.

Description

Image processing method, device and storage medium
Technical Field
The present application relates to the field of vision measurement technologies, and in particular, to an image processing method and apparatus.
Background
Visual measurement is one of the important means to identify the characteristics of an object. In the vision measurement, a camera with high resolution and high frame rate is generally required to capture an image of a target object, and the captured image is analyzed and processed to obtain a characteristic parameter related to the target object, so as to achieve the purpose of identifying or tracking the object.
However, as camera resolution increases, both the amount of image data to be processed and the data rate grow, with data rates reaching 2 GB/s or more. Under existing processing mechanisms, the scheduling of image data is inflexible, image processing efficiency is low, and image processing accuracy suffers.
Disclosure of Invention
The application provides an image processing method and device, which are used for improving the image processing efficiency and the image processing accuracy.
In a first aspect, the method is performed by a first field-programmable gate array (FPGA) and a second FPGA, where the first FPGA is connected to a client and the second FPGA is connected to a camera, and the method includes:
the first FPGA receives image data transmitted from the second FPGA, performs circle-searching processing on the received image data, and outputs the circle-searching result and the image data to the client; the second FPGA receives original image data from the camera and processes the original image data to obtain a first processing result; the first processing result is used by the first FPGA, in combination with the original image data of the object to be detected, to identify the object to be detected.
According to the embodiment of the application, the first FPGA and the second FPGA work cooperatively, so that each can bear part of the image processing tasks, improving data processing efficiency.
In one possible implementation, the receiving, by the first FPGA, the image data transmitted from the second FPGA includes:
the first FPGA sends a first instruction from the client to the second FPGA, the first instruction indicating online visual measurement; the first FPGA receives original image data sent by the second FPGA from the camera; or,
the first FPGA sends a second instruction from the client to the second FPGA, the second instruction indicating offline visual measurement; the first FPGA receives image data read by the second FPGA from an SSD storage array, where the SSD storage array stores image data obtained after the second FPGA compresses the original image data from the camera, and the offline visual measurement instructs the first FPGA to output the received image data together with the circle-searching result; or,
the first FPGA sends a third instruction from the client to the second FPGA, the third instruction indicating offline data uploading; the first FPGA receives image data read by the second FPGA from an SSD storage array, where the SSD storage array stores image data obtained after the second FPGA compresses the original image data from the camera, and the offline data uploading instructs the first FPGA to upload the received image data.
The embodiment of the application provides three application scenarios, namely online visual measurement, offline visual measurement and offline data uploading, and the second FPGA can selectively send the appropriate image data to the first FPGA according to the instruction sent by the client, making the method more flexible and widely applicable.
In a possible implementation manner, the receiving, by the second FPGA, raw image data from the camera, and processing the raw image data to obtain a first processing result includes:
the second FPGA sequentially stores the original image data into a first cache and a second cache in a data buffering module; when the original image data is stored in the second cache, the second FPGA reads the original image data from the first cache and processes the read original image data; or, when the original image data is stored in the first cache, the second FPGA reads the original image data from the second cache, and processes the read original image data.
The data buffering module in this embodiment is provided with two caches that alternate as image data is stored; that is, a ping-pong mechanism stores the original image data, so that while one cache is being written, the other is read out in time. Each cache therefore holds as few image frames as possible, saving storage space. Compared with the prior art, in which the original image data is fully stored before being read out, this saves storage space, especially in scenarios with high data rates.
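The ping-pong scheduling described above can be sketched as a behavioral software model (the class and method names below are illustrative, not from the patent; the actual design is implemented in FPGA logic around two hardware caches):

```python
class PingPongBuffer:
    """Behavioral sketch of the two-cache ping-pong scheme: while one
    cache receives incoming frames, the other is read out, so each
    frame is stored only once before being consumed."""

    def __init__(self):
        self.caches = [[], []]   # first cache, second cache
        self.write_idx = 0       # index of the cache currently being written

    def write_frame(self, frame):
        # Store the incoming frame into the cache designated for writing.
        self.caches[self.write_idx].append(frame)

    def read_frame(self):
        # Read from the cache NOT currently being written.
        read_idx = 1 - self.write_idx
        if self.caches[read_idx]:
            return self.caches[read_idx].pop(0)
        return None              # nothing buffered on the read side yet

    def swap(self):
        # Switch roles: the freshly written cache becomes readable
        # while the drained cache accepts the next frame.
        self.write_idx = 1 - self.write_idx
```

A short walkthrough: write a frame into cache 0, swap, then write the next frame into cache 1 while cache 0 is read out, mirroring the alternation the patent describes.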
In one possible implementation, the reading, by the second FPGA, the original image data from the first cache or the second cache includes:
and the second FPGA detects that a first signal is set, the second FPGA does not read the original image data from the first cache or the second cache, otherwise, the original image data is read from the first cache or the second cache, and the first signal is generated according to a data processing rate.
In the embodiment of the application, the first signal is generated according to the data processing rate; when set, it indicates that the processing rate is low. When the second FPGA detects that the first signal is set, it can conclude that the processing capacity is insufficient: if image data were still read from the cache, it could not be processed in time and would be lost, lowering the accuracy of data processing. Conversely, when the first signal is not set, the processing capacity can be considered sufficient, and the second FPGA reads image data from the cache, saving storage space while losing as little image data as possible and preserving image processing accuracy. This mechanism also effectively smooths sudden changes in data rate and improves system stability.
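The gating by the first signal can be illustrated with a small sketch (the `processor_busy` callback standing in for the first signal is an invented name; in the patent the signal is generated in hardware from the data processing rate):

```python
def try_read(cache, processor_busy):
    """Sketch of the first-signal check: the signal is asserted when the
    downstream processing rate falls behind. While it is set, no data
    is read from the cache, so pixels are not fetched only to be lost."""
    if processor_busy():         # first signal set: processing too slow
        return None              # hold the data in the cache
    if cache:
        return cache.pop(0)      # signal clear: safe to read and process
    return None                  # cache empty, nothing to read
```

Usage mirrors the text: reads proceed only when the first signal is clear, which is what prevents image data from being dropped by an overloaded processing stage.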
In a possible implementation manner, the sequentially storing, by the second FPGA, the original image data in the first cache or the second cache of the data buffering module includes:
the second FPGA judges whether a frame-loss condition is met; when the frame-loss condition is met, the second FPGA discards the current image frame.
When the image data rate far exceeds 2 GB/s and the data processing rate stays below the cache write rate for a long time, pixel data is lost and cache scheduling becomes disordered. The embodiment of the application therefore sets an appropriate frame-loss condition: when the condition is met, the current image frame is discarded, so that the dropped frame does not significantly affect the image processing result and the accuracy of image processing is preserved.
Illustratively, the frame loss condition is any one of the following conditions:
the number of image frames in both the first cache and the second cache equals a preset number of stored image frames;
the number of image frames in the first cache equals the preset number of stored image frames, and the second cache is in a reading state;
the number of image frames in the second cache equals the preset number of stored image frames, and the first cache is in a reading state.
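The three conditions above can be expressed as a single predicate (a sketch only; the parameter names are invented for illustration):

```python
def should_drop_frame(n_first, n_second, max_frames,
                      first_reading, second_reading):
    """Sketch of the frame-loss conditions: drop the current frame if
    both caches are full, or if one cache is full while the other is
    still being read out (so neither can accept the frame in time)."""
    if n_first == max_frames and n_second == max_frames:
        return True   # both caches at the preset frame count
    if n_first == max_frames and second_reading:
        return True   # first cache full, second cache busy reading
    if n_second == max_frames and first_reading:
        return True   # second cache full, first cache busy reading
    return False      # at least one cache can accept the frame
```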
In a possible implementation manner, the read addresses and write addresses of the first cache and the second cache are predefined, as are the base addresses of the two caches. When original image data is stored in the first cache or the second cache, it is written, in units of frames, starting at the cache's base address plus an offset address; when stored original image data is read from the first cache or the second cache, reading starts, in units of lines, at the cache's base address plus an offset address.
Managing the write address and the read address of the two caches independently avoids the disordered read-write scheduling that arises when a pair of caches shares the same address and the address is switched continuously.
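The base-plus-offset addressing can be illustrated as follows (the frame and line sizes are invented placeholders; the patent predefines base addresses but gives no concrete sizes):

```python
FRAME_SIZE = 1024 * 1024   # illustrative frame size in bytes
LINE_SIZE = 4096           # illustrative line size in bytes

def write_addr(base, frame_index):
    # Frames are written starting at the cache base address plus a
    # frame-unit offset, one whole frame at a time.
    return base + frame_index * FRAME_SIZE

def read_addr(base, frame_index, line_index):
    # Reads start at the base address plus an offset and proceed in
    # units of lines within the selected frame.
    return base + frame_index * FRAME_SIZE + line_index * LINE_SIZE
```

Keeping separate write and read address generators, as above, is what lets one cache be filled frame by frame while the other is drained line by line.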
In one possible implementation, the method further includes:
the first FPGA receives configuration parameters from the client and sends them to the second FPGA through a serial channel between the first FPGA and the second FPGA; the configuration parameters configure one or more of the image processing parameters used by the second FPGA to process image data and the parameters of the camera.
By exploiting the characteristics of the image frame interval and line interval, the embodiment of the application can flexibly write one frame of image data to a designated DDR address and flexibly read out one line, multiple lines, or one frame of image data from a designated DDR address.
In a second aspect, an image processing apparatus is provided; for example, the image processing apparatus includes the aforementioned first FPGA and second FPGA. The image processing apparatus has the function of realizing the behavior in the method embodiment of the first aspect. The function can be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above function. The apparatus comprises a communication interface, a processor and, optionally, a memory. The memory is configured to store a computer program or instructions, and the processor is coupled to the memory and the communication interface; when the processor executes the computer program or instructions, the image processing apparatus performs the method performed by the first FPGA and the second FPGA in the above method embodiments.
In a third aspect, the present application provides a computer readable storage medium storing a computer program which, when executed, implements the method of the above aspects performed by the first FPGA and the second FPGA.
The embodiment of the application adopts a plurality of FPGAs working cooperatively to process images, improving the data processing rate and supporting scenarios in which the data rate is greater than 2 GB/s. In addition, a ping-pong cache mechanism ensures that each cache stores as few image frames as possible, saving storage space while still meeting the requirement of data rates greater than 2 GB/s.
Drawings
Fig. 1 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 2 is a timing diagram of the CoaXPress protocol according to an embodiment of the present application;
Fig. 3 is a diagram illustrating data buffering module scheduling according to an embodiment of the present application;
Fig. 4 is a state diagram of the DDR ping-pong write state provided in an embodiment of the present application;
Fig. 5 is a state diagram of the DDR ping-pong read state provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The embodiment of the application is applicable to visual measurement, for tracking and identifying a target object. Referring to fig. 1, a schematic view of a scenario applicable to the embodiment of the present application is shown. Fig. 1 includes a camera for capturing an image of the object to be recognized, an image processing apparatus, and an upper computer (client); after the camera captures an image, the image may be transmitted to the image processing apparatus. The client may send instructions to the camera for setting camera parameters. For example, the client may send instructions to the camera carrying camera parameters (e.g., resolution, frame rate, exposure time, etc.). The client may also be configured to set parameters for processing the image (e.g., parameters related to the image decompression algorithm, etc.) and send these image processing parameters to the image processing apparatus. The image processing apparatus processes the image from the camera according to the parameters transmitted by the client and outputs the processing result to the client, finally achieving the purpose of identifying the target object.
It should be understood that the image processing process involves multiple subtasks, such as the writing, reading and scheduling of image data and image decompression, which place high demands on the computational capability of the image processing apparatus. In particular, realizing online vision measurement requires real-time processing of image data, demanding even more of the image processing apparatus. As the data rate of image processing grows, for example beyond 2 GB/s, existing image processing apparatuses cannot meet the demand. For example, to meet the latency requirement of image processing, when the data rate is high and the image processing apparatus schedules image data slowly, the apparatus may choose to discard some image data, which reduces the accuracy of image recognition.
Therefore, the application provides an image processing architecture, and in the architecture, online visual measurement or offline visual measurement is realized through the cooperative work of a plurality of FPGAs. Because the FPGAs work cooperatively, each FPGA shares a plurality of subtasks in the image processing process, and even if the data rate is high, the requirement of image processing can be met for any FPGA.
The technical solution provided by the embodiment of the application is described below with reference to the accompanying drawings, taking 2 FPGAs as an example; the 2 FPGAs are referred to as a first FPGA and a second FPGA, respectively. Of course, the number of FPGAs is not limited in the embodiment of the present application; that is, the image processing apparatus provided herein may also include more than 2 FPGAs. It should be noted that an FPGA may be an FPGA chip, or a device capable of supporting FPGA functions. Besides the first FPGA and the second FPGA, the image processing apparatus may include other necessary functional modules, for example, a transceiver interface for data interaction with an external device such as the client.
Fig. 1 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus comprises a first FPGA and a second FPGA which are connected with each other, where the first FPGA is connected with a client and the second FPGA is connected with a camera. The first FPGA and the second FPGA may each be configured to perform several of the subtasks involved in image processing. For example, in online visual measurement, the second FPGA may be configured to receive image data transmitted by the camera, send the image data to the first FPGA, execute the image compression/decompression algorithm to compress/decompress the image data, and store the processed image data for offline use. For another example, in offline visual measurement, the second FPGA may read stored image data from the storage area, decompress the read image data, and send the decompressed image data to the first FPGA; or the second FPGA receives the image data sent by the first FPGA, compresses the received image data, and writes the compressed image data into the storage area.
In the embodiment of the application, the first FPGA and the second FPGA work cooperatively and can transmit to each other image data and parameters related to the image processing process. For example, data transmission between the first FPGA and the second FPGA can be achieved through serial port modules: the first FPGA comprises serial port module 1, the second FPGA comprises serial port module 2, and serial port module 1 is connected with serial port module 2. In the embodiment of the present application, the client and the first FPGA may communicate with each other through a PCI-e interface. A user can set camera parameters at the client and send them to the first FPGA; the central control module 1 of the first FPGA controls serial port module 1 to send the camera parameters to serial port module 2, and the central control module 2 of the second FPGA transmits the camera parameters from serial port module 2 to the camera control module, which passes them on to the camera. For another example, a user may set image processing parameters (e.g., circle-searching algorithm parameters, image compression/decompression algorithm parameters, etc.) at the client and send them to the first FPGA; the central control module 1 of the first FPGA controls serial port module 1 to send the image compression/decompression algorithm parameters to serial port module 2, and the central control module 2 of the second FPGA transmits them from serial port module 2 to the image compression/decompression module of the second FPGA, so that the module processes received images according to the received parameters.
The central control module 1 of the first FPGA sends the circle-searching algorithm parameters to the circle-searching processing module.
In the following, the two application scenarios of online vision measurement and offline vision measurement are taken as examples to describe how the first FPGA and the second FPGA work cooperatively.
The first FPGA comprises serial port module 1, central control module 1, a PCIE read-write control module, data cache module 1, a data uploading module, a circle-searching processing module, image data transceiver module 1, image data splicing module 1 and an image recovery module. The second FPGA comprises a camera control module, data selector 1, data selector 2, image data transceiver module 2, image data splicing module 2, data buffer module 2, an image compression/decompression module, an SSD read-write control module, serial port module 2 and central control module 2. Serial port module 1 and serial port module 2 can be used to transmit parameters, such as camera parameters and image processing parameters, between the first FPGA and the second FPGA. In addition, serial port module 1 and serial port module 2 may be used to transmit related control commands, for example, a command instructing the camera to start capturing image data. Image data can be transmitted between image data transceiver module 1 in the first FPGA and image data transceiver module 2 in the second FPGA through a GTH transmission channel supported by the FPGAs.
For example, in the online visual measurement, a user may set camera parameters at a client and send the set camera parameters to the first FPGA. The central control module 1 of the first FPGA controls the serial port module 1 to transmit camera parameters to the serial port module 2 in the second FPGA. And the central control module 2 in the second FPGA controls the serial port module 2 to transmit the camera parameters to the camera control module, and then the camera parameters are transmitted to the camera through the camera control module, so that the camera acquires images of the target object according to the parameters set by the client.
When the target object needs to be measured, a user can set a starting instruction at the client, and the instruction is used for instructing the camera to start image acquisition on the target object. The client transmits the instruction to the camera, which is similar to the process of transmitting the camera parameters to the camera by the client, and the process is not repeated here for simplicity.
The camera receives the start instruction and acquires images of the target object according to the received camera parameters. In the embodiment of the application, the camera can send the acquired image data to the first FPGA and the second FPGA respectively, and the first FPGA and the second FPGA each process the received image data, that is, each realizes part of the tasks in the visual measurement process, and they transmit the processed results to each other. For example, in the embodiment of the present application, the second FPGA may be used to perform the image compression/decompression task while the first FPGA performs the circle-searching task; the second FPGA can inform the first FPGA of the result of image compression/decompression, so that the first FPGA combines that result with the circle-searching result to obtain the image processing result and finally realize visual measurement. Because the first FPGA and the second FPGA work cooperatively, even image data collected by a high-speed camera, for example at data rates greater than 2 GB/s, can be processed in time, preserving image data processing efficiency as much as possible.
Specifically, the camera may transmit the acquired image data to the camera control module, and the camera control module may transmit the image data to data selector 1 and data selector 2, respectively. It should be understood that the camera control module can convert the image data into a specified protocol format (e.g., CoaXPress, CameraLink, USB) according to actual needs. Data selector 1 can send the received image data to image data transceiver module 2, and image data transceiver module 2 forwards the image data to image data transceiver module 1 in the first FPGA through the GTH transmission channel. Data selector 1 may transmit the image data from the camera to image data transceiver module 2, and may also transmit the image data processed by the image compression/decompression module to image data transceiver module 2. For example, in online vision measurement, data selector 1 may select to transmit the image data from the camera to image data transceiver module 2; in offline vision measurement, data selector 1 may select to send the image data processed by the image compression/decompression module to image data transceiver module 2. It should be understood that in offline vision measurement the image data need not be acquired from the camera in real time, so the camera control module need work only during online vision measurement and can be turned off during offline vision measurement to save energy consumption.
Image data transceiver module 2 receives the image data from data selector 1 and may transmit it to image data transceiver module 1 through a transmission channel (e.g., a GTH high-speed serial transceiver) between the first FPGA and the second FPGA. Of course, image data transceiver module 1 in the first FPGA may also transmit image data from the first FPGA to image data transceiver module 2; that is, the two modules together implement the exchange of image data between the first FPGA and the second FPGA. For example, during online visual measurement, image data transceiver module 2 receives the image data sent by data selector 1 and sends it to the first FPGA through the GTH transmission channel; during offline visual measurement, image data transceiver module 2 receives the image data sent by the first FPGA via image data transceiver module 1 over the GTH transmission channel, outputs it to data selector 2, and data selector 2 outputs it to other functional modules for processing.
Data selector 2 is similar to data selector 1: it also has two input paths of image data and selects which path to output. For example, in online vision measurement, data selector 2 may select to transmit the image data from the camera to image data splicing module 2; in offline vision measurement, data selector 2 may select to output the image data from the second FPGA or the client.
After image data splicing module 2 receives the image data from data selector 2, it may output the image data to data buffer module 2. Since the bit width supported by data buffer module 2 may not match the bit width of the image data output by data selector 2, image data splicing module 2 may expand the bit width of the image data so that the output bit width matches the bit width supported by data buffer module 2. As soon as image data is input, image data splicing module 2 processes it, to shorten the image processing delay as much as possible.
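The bit-width expansion performed by the splicing module might look like the following sketch (the 8-bit input and 64-bit output widths are assumptions for illustration; the patent does not specify concrete widths):

```python
def splice_words(pixels, in_bits=8, out_bits=64):
    """Pack narrow input samples into wider words so that the output
    bit width matches what the data buffer module supports
    (widths are illustrative, little-endian packing assumed)."""
    per_word = out_bits // in_bits
    words = []
    for i in range(0, len(pixels), per_word):
        word = 0
        for j, px in enumerate(pixels[i:i + per_word]):
            # Place each sample in its lane within the wider word.
            word |= (px & ((1 << in_bits) - 1)) << (j * in_bits)
        words.append(word)
    return words
```

For instance, with 8-bit pixels and a 16-bit output, two pixels are packed into each output word, halving the number of write transactions into the buffer.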
The embodiment of the present application aims to support processing of image data at data rates greater than 2 GB/s. For this reason, data buffering module 2 may be provided with two caches, referred to as a first cache and a second cache. The first cache and the second cache may adopt a ping-pong mechanism: the first cache performs a write operation while the second cache performs a read operation, and the second cache performs a write operation while the first cache performs a read operation. Image data therefore only needs to be stored once and can be read out in time, saving storage resources. Especially in high data rate scenarios, where the data volume is large, data buffer module 2 provided by this embodiment can save a large amount of storage resources and process image data in time, avoiding the prior-art problem in which frame-loss handling, caused by untimely processing of image data, lowers the accuracy of image recognition. The working mechanism of data buffer module 2 is described later.
The image compression/decompression module may compress or decompress received image data. For example, during online vision measurement, the image compression/decompression module may generate a read request for acquiring image data from the data buffering module 2 and compress the acquired image data. Of course, a user can set image compression parameters through the client and have the first FPGA transmit them to the second FPGA through the serial port module 1; the central control module 2 in the second FPGA then passes the image compression parameters to the image compression/decompression module, so that the module compresses the acquired image data according to those parameters. The image compression/decompression module sends the compressed image data to the SSD read-write control module, which writes the image data into the SSD storage array. For another example, during offline vision measurement, the image compression/decompression module generates a read request for reading compressed image data from the SSD storage array, decompresses the acquired image data, and outputs the decompressed image data. It should be noted that "image compression parameter" is used generically in this embodiment to cover the decompression parameters as well, and may further include, for example, a resolution.
The SSD read-write control module may be configured to receive an instruction, such as a work start instruction, sent from the central control module 2 of the second FPGA, and then perform the corresponding read or write operation according to the received instruction, so as to read image data from the SSD storage array or write the image data output by the image compression/decompression module into the SSD storage array.
During visual measurement, the first FPGA is also used to perform circle searching processing on the image data transmitted from the second FPGA and to send the processing result together with the image data to the client. During offline measurement, the first FPGA may receive image data sent by the client through the PCI-e interface, restore its protocol format, and send it to the second FPGA through the GTH transmission channel so that the second FPGA can process the image data to realize offline visual measurement. Alternatively, during offline image data uploading, the first FPGA receives image data sent by the second FPGA through the GTH transmission channel, skips the circle searching processing, and directly sends the image data and the processing result of the second FPGA to the client through the PCI-e interface.
Specifically, the implementation manner of each functional module in the first FPGA is similar to that of each functional module in the second FPGA, which is not described herein again.
The difference is that, during offline vision measurement, the data uploading module in the first FPGA may generate a read request and send it to the data buffering module 1 to obtain image data, and may also send a read request to the circle searching processing module to obtain a circle searching result; finally, the obtained image data and circle searching result are combined and output. During offline data uploading, by contrast, the data uploading module does not acquire a circle searching result.
The circle searching processing module can acquire image data from the data buffer module 1 and perform circle searching processing, and finally output a circle searching result. The central control module 1 of the first FPGA may be responsible for completing scheduling of each functional module of the first FPGA, parameter configuration of each functional module, and the like, and may also issue a control instruction. For example, the central control module 1 in the first FPGA receives the control command and the configuration parameter sent by the client through the PCI-e register channel, sends the control command and the configuration parameter to the second FPGA through the serial port module 1, receives the response signal of the second FPGA after configuration, and sends the response signal to the client through the PCI-e register channel.
The image restoration module in the first FPGA may be configured to receive the image resolution parameter configured by the central control module 1, receive image data sent from the client, and restore the image data into image data in a specific protocol format for output.
The PCIE read-write control module may be configured to implement communication between the client and the first FPGA. For example, the client may send the camera parameters, the image processing parameters, and the like to the first FPGA through the PCIE read-write control module, and the first FPGA then sends them to the second FPGA. For another example, the client may also send control commands to the first FPGA through the PCIE read-write control module, and the first FPGA then sends them to the second FPGA; such control commands include, for example, commands for controlling the motion of the camera and commands for starting the camera so that it begins capturing images. Of course, the client may also receive, through the PCIE read-write control module, a response signal to a control command that the first FPGA forwards on behalf of the second FPGA, and the like.
The embodiment of the application aims to solve the problem that current FPGAs cannot meet image processing requirements in scenes with a data rate greater than 2 GB/s. For example, because the data rate is high, an FPGA may drop some image data during processing in order to keep up with the incoming stream, which lowers the accuracy of image data recognition. Moreover, although a current FPGA can store the image data transmitted by the camera, a high data rate demands a large storage space, and the limited storage space may fail to meet the storage requirement.
Therefore, the embodiment of the application provides a new caching mechanism. For example, as shown in fig. 1, the data buffering module 1 and the data buffering module 2 each include two sets of buffer controllers (for simplicity, also referred to as two buffers); hereinafter, for convenience of description, the two sets of buffer controllers are referred to as a first buffer and a second buffer.
For example, please refer to fig. 2, which is a timing diagram of the CoaXPress protocol. In the embodiment of the present application, the data protocol format output by the camera is the CoaXPress protocol format. In fig. 2, CXP_SOP is a frame start signal, CXP_SOL is a line start signal, CXP_EOL is a line end signal, VALID is a data valid signal, and DATA[127:0] is pixel data. When the resolution is 5120 × 5120, each frame of image includes 5120 lines and 5120 columns; there is a line interval between lines and a frame interval between frames.
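To make the signal ordering concrete, the per-frame control sequence can be sketched as an event generator. For illustration it assumes 8-bit pixels packed onto the 128-bit DATA bus; the real CoaXPress link layer carries more framing than this toy model.

```python
def frame_events(rows=5120, cols=5120, bus_bits=128, pixel_bits=8):
    """Yield the control-signal sequence for one CoaXPress-style frame:
    SOP, then per line an SOL, the data beats (VALID high), an EOL,
    and finally EOP. Pixel packing assumptions are illustrative only."""
    beats_per_line = cols * pixel_bits // bus_bits
    yield "CXP_SOP"
    for _ in range(rows):
        yield "CXP_SOL"
        for _ in range(beats_per_line):
            yield "DATA"          # one 128-bit beat of pixel data
        yield "CXP_EOL"
    yield "CXP_EOP"
```

At 5120 × 5120 with these assumptions, each line occupies 320 bus beats, which is why line intervals and frame intervals leave natural gaps for the buffer scheduling described below.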
In the embodiment of the present application, the data buffering module 1 and the data buffering module 2 each include a first buffer (DDRA) and a second buffer (DDRB). Please refer to fig. 3, which is a schematic diagram of how the data buffering module 1 (or the data buffering module 2) schedules data according to an embodiment of the present application. Fig. 3 takes the data buffering module 1 as an example.
The image stitching module 1 transmits the image data to the data buffering module 1, where the first frame of image is written into DDRA, and the second frame is written into DDRB while the first frame is read out of DDRA; that is, image data is written preferentially to DDRA. When CXP_SOP of the third frame is valid, whether the first frame has been completely read out is detected: if the first frame has been read out of DDRA, the third frame is written into DDRA; otherwise it is written into DDRB. In other words, a single DDR (DDRA or DDRB) is never read and written at the same time, while writing one DDR and reading the other can proceed in parallel. Specifically, the DDR start address and data length are configurable; for example, the write data length may be configured as one frame and the read data length as one line. The fourth and subsequent frames follow the same writing principle.
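The bank-selection rule above — prefer DDRA, fall back to DDRB when DDRA still holds an unread frame — reduces to a small decision function. The function and flag names are illustrative, not taken from the design:

```python
def choose_write_bank(ddra_empty, ddrb_empty):
    """Pick the DDR bank for an incoming frame, preferring DDRA.
    A bank is writable only once its previous frame has been fully
    read out (its empty flag is set)."""
    if ddra_empty:
        return "DDRA"
    if ddrb_empty:
        return "DDRB"
    return None  # neither bank is free: candidate for the frame-drop path
```

Returning `None` here corresponds to the case handled by the frame dropping mechanism described later, where neither bank satisfies the write condition.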
It should be understood that, normally, DDRA and DDRB together hold at most one frame of image: the first frame is written into DDRA and the second into DDRB; by the time the third frame arrives, the first has been read out of DDRA, and by the time the fourth frame arrives, the second has been read out of DDRB. As subsequent frames are input, the data buffering module 1 stays in a dynamic balance in which at most one frame is buffered. If the data processing rate decreases and data is temporarily not processed, the read operation may be suspended and the number of image frames buffered in DDRA or DDRB increases; if the DDRA and DDRB capacity is large, many frames may be buffered. For this reason, in the embodiment of the present application, if the data processing rate decreases, the image compression/decompression module may assert fifo_prog_full; while DDRA and DDRB are being read, the fifo_prog_full signal is checked, and if it is set, reading of DDRA and DDRB may be paused temporarily to ensure that the back-end FIFO does not overflow.
Since the number of frames already buffered in DDRA or DDRB may exceed one, when the fourth frame and subsequent frames are written into the data buffering module 1, it is necessary to detect whether all image frames in DDRA or DDRB have been read out; if not, the frame cannot be written in, so that DDRA or DDRB holds at most one frame of image. Consequently, when the image data rate is far greater than 2 GB/s, the data processing rate remains below the DDR write rate for a long time, which leads to loss of image pixel data and imbalance in cache scheduling.
Therefore, the embodiment of the present application sets a frame dropping mechanism: a maximum number of stored frames is set for the two sets of DDR, and whether to drop the current image frame is determined according to this maximum. Because a single set of DDR3 shares one group of address ports for both reading and writing, addresses must be switched continually as data is read and written, which easily causes read-write scheduling confusion, makes addresses hard to manage, and also reduces the read-write throughput of the DDR3. In the embodiment of the present application, however, ping-pong operations are performed using two sets of DDR3, which avoids read-write scheduling confusion.
The frame loss condition may be as follows: when a valid image frame is input, if the total number of frames in DDRA and DDRB equals the maximum frame number, the current frame may be discarded; if the number of frames in DDRA equals the maximum frame number and the DDRB read flag is set, the current frame may also be discarded; or, if the number of frames in DDRB equals the maximum frame number and the DDRA read flag is set, the current frame may also be discarded.
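The three branches of the frame loss condition can be captured in one predicate. Note one assumption in this sketch: the first branch is read as both banks having reached the per-bank maximum (prog_thld), which is one plausible reading of "total frame number"; the parameter and flag names are illustrative.

```python
def should_drop_frame(frames_a, frames_b, prog_thld,
                      ddra_rd_flag, ddrb_rd_flag):
    """Frame-drop test for an incoming valid frame.

    frames_a / frames_b : frames currently buffered in DDRA / DDRB
    prog_thld           : configured maximum frames per bank (e.g. 60)
    ddra_rd_flag / ddrb_rd_flag : bank currently being read out
    """
    a_full = frames_a >= prog_thld
    b_full = frames_b >= prog_thld
    if a_full and b_full:
        return True          # both banks at capacity
    if a_full and ddrb_rd_flag:
        return True          # DDRA full, DDRB busy being read
    if b_full and ddra_rd_flag:
        return True          # DDRB full, DDRA busy being read
    return False
```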
Specifically, the two sets of DDR3 memories are controlled by ddra_ui and ddrb_ui respectively, and the maximum number of frames stored by each set of DDR is set by the parameter prog_thld; for example, prog_thld may be set to 60. When an image frame is input to the DDR controller, it is numbered according to the input order, and the state of the DDR controller is detected. When DDRA or DDRB is read, the image frames are read out in the order in which they were written. Before the DDR is written, the rising edge of the frame start signal CXP_SOP is captured; when CXP_SOP is set, whether the frame loss condition is met is judged, and if neither DDRA nor DDRB meets the write condition, the image frame is dropped while the image frame number frame_num and a frame loss indication pulse frame_discard_pulse are output. If DDRA or DDRB meets the write condition, the frame is written into DDRA or DDRB under ping-pong control. After one frame of image has been written, the write start address of DDRA or DDRB is updated by the DDR address management module. In the embodiment of the application, besides the DDR addresses, the flags of DDRA and DDRB also need to be managed. For example, the following flags may be defined for DDRA and DDRB respectively: a write flag, a read flag, a full flag, and an empty flag. In the initial state, the write, read, full, and empty flags of DDRA and DDRB are all cleared. The empty flag, full flag, and write flag of DDRA or DDRB are updated by the DDR flag control module. When DDRA or DDRB holds a complete image frame, the image data is read out through the ping-pong control module; after one frame of image has been read, the read start address of DDRA or DDRB is updated by the DDR address management module, and the empty flag, full flag, and read flag of DDRA or DDRB are updated by the DDR flag control module.
Fig. 4 shows the DDR ping-pong write state diagram. After power-on and reset, the system jumps to the CXP_SOP detection state, where it waits for the rising edge of the frame start signal CXP_SOP and jumps to the next state once the edge is captured. The DDR flag detection state has three conditional branches: whether to drop the frame, whether to write DDRA, and whether to write DDRB. When the frame loss condition is met, the frame loss flag discard_flag is set and the state machine jumps to the frame loss state; when the frame loss condition is not met and DDRA is not being read, ddra_wr_flag is set and the state machine jumps to the write-DDRA state; when neither the frame loss condition nor the write-DDRA condition is met, ddrb_wr_flag is set and the state machine jumps to the write-DDRB state. The frame loss state waits for the rising edge of the frame end signal CXP_EOP and jumps to the next state once the edge is captured. In the write-DDRA state, the DDRA write start address and write data length are configured and the DDRA write is triggered; after triggering, whether the DDRA write completion signal ddra_wr_done is set is detected, and if so, the state machine jumps to the next state. In the write-DDRB state, the DDRB write start address and write data length are configured and the DDRB write is triggered; after triggering, whether the DDRB write completion signal ddrb_wr_done is set is detected, and if so, the state machine jumps to the next state. In the DDR flag clearing state, discard_flag, ddra_wr_flag, and ddrb_wr_flag are cleared, and the state machine jumps to the next state.
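The write-side state machine can be sketched as a single transition function. This is a behavioral model: write completion is treated as immediate, and the state and argument names are illustrative paraphrases of the flags described above.

```python
def ddr_write_step(state, cxp_sop, drop, ddra_writable, flags):
    """One transition of the ping-pong write FSM. `flags` is a dict the
    FSM sets and clears; `drop` is the frame-loss condition and
    `ddra_writable` means DDRA is not currently being read."""
    if state == "DETECT_SOP":
        return "CHECK_FLAGS" if cxp_sop else "DETECT_SOP"
    if state == "CHECK_FLAGS":
        if drop:
            flags["discard_flag"] = True
            return "DROP_FRAME"
        if ddra_writable:
            flags["ddra_wr_flag"] = True
            return "WRITE_DDRA"
        flags["ddrb_wr_flag"] = True
        return "WRITE_DDRB"
    if state in ("DROP_FRAME", "WRITE_DDRA", "WRITE_DDRB"):
        return "CLEAR_FLAGS"       # EOP seen / *_wr_done set, in this model
    if state == "CLEAR_FLAGS":
        flags.clear()
        return "DETECT_SOP"
    raise ValueError(f"unknown state: {state}")
```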
Fig. 5 shows the DDR ping-pong read state diagram. After power-on or reset, the DDR flag detection state is entered, which has two conditional branches: if the DDRA write flag is cleared and the DDRA empty flag is cleared, the DDRA read flag ddra_rd_flag is set and the state machine jumps to the read-DDRA state; if the DDRB write flag is cleared and the DDRB empty flag is cleared, the DDRB read flag ddrb_rd_flag is set and the state machine jumps to the read-DDRB state. In the read-DDRA state, the DDRA empty flag is detected first and then the fifo_prog_full signal; if both are cleared, the DDRA read start address and read data length are configured and the DDRA read operation is triggered. After one frame of image has been read, whether the DDRA empty flag is set is detected; if it is still cleared, the fifo_prog_full signal continues to be checked and the DDRA read operation is triggered again. When all image frames in DDRA have been read out, the state machine jumps to the next state. In the read-DDRB state, the DDRB empty flag is detected first and then the fifo_prog_full signal; if both are cleared, the DDRB read start address and read data length are configured and the DDRB read operation is triggered. After one frame of image has been read, whether the DDRB empty flag is set is detected; if it is still cleared, the fifo_prog_full signal continues to be checked and the DDRB read operation is triggered again. When all image frames in DDRB have been read out, the state machine jumps to the next state. In the DDR flag clearing state, ddra_rd_flag and ddrb_rd_flag are cleared, and the state machine jumps to the next state.
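The read-side arbitration — skip a bank that is empty or mid-write, and stall entirely on FIFO back-pressure — condenses into one selection function. A behavioral sketch; the flag names mirror the description but the function itself is illustrative.

```python
def choose_read_bank(ddra_wr_flag, ddra_empty,
                     ddrb_wr_flag, ddrb_empty, fifo_prog_full):
    """Pick which DDR bank to read next, or None if no read may start.
    A bank is readable only when it is neither being written nor empty,
    and the downstream FIFO still has room (programmable-full clear)."""
    if fifo_prog_full:
        return None                        # back-pressure: pause all reads
    if not ddra_wr_flag and not ddra_empty:
        return "DDRA"
    if not ddrb_wr_flag and not ddrb_empty:
        return "DDRB"
    return None                            # nothing ready to read
```

Gating every read on fifo_prog_full is what guarantees the back-end FIFO never overflows even when the processing rate dips.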
To improve DDR read-write scheduling efficiency, the DDR write address and the DDR read address are managed independently, and a DDR write base address, read base address, write offset address, read offset address, write termination address, and read termination address are defined. The unit of the DDR write offset address is one frame, the unit of the DDR read offset address is one line, and the DDR write termination address is calculated from the parameter prog_thld. Each time one frame is written into the DDR, the write offset address is added to the DDR write base address once, until the DDR is full; when the DDR becomes empty, the DDR write base address and read base address are cleared. In this way, by exploiting the frame intervals and line intervals of the image stream, one frame of image data can be flexibly written to a designated DDR address, and one line, several lines, or one frame of image data can be flexibly read from a designated DDR address.
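Under the stated units (frame-sized write steps, line-sized read steps, termination address derived from prog_thld), the independent address bookkeeping might look like the sketch below. The byte sizes assume one 8-bit 5120 × 5120 frame, which is an assumption of this sketch, not a figure from the patent.

```python
FRAME_BYTES = 5120 * 5120   # one frame at 8 bits/pixel (assumption)
LINE_BYTES = 5120           # one line at 8 bits/pixel (assumption)

class DdrAddressManager:
    """Independent write/read address bookkeeping for one DDR bank:
    writes advance in frame-sized steps, reads in line-sized steps,
    and both offsets clear when the bank drains empty."""

    def __init__(self, base=0, prog_thld=60):
        self.base = base
        self.wr_end = base + prog_thld * FRAME_BYTES  # write termination address
        self.wr_off = 0
        self.rd_off = 0

    def next_write_addr(self):
        addr = self.base + self.wr_off
        assert addr < self.wr_end, "bank full: frame-drop path should fire"
        self.wr_off += FRAME_BYTES   # one frame per write burst
        return addr

    def next_read_addr(self):
        addr = self.base + self.rd_off
        self.rd_off += LINE_BYTES    # one line per read burst
        return addr

    def on_empty(self):
        self.wr_off = self.rd_off = 0  # bank drained: clear both offsets
```

Keeping the read pointer in line units is what allows one line, several lines, or a whole frame to be fetched from an arbitrary point in the buffered frame.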
The following describes, with reference to the drawings, an apparatus for implementing the above method in the embodiment of the present application. The concepts introduced above apply to the subsequent embodiments, and repeated content is not described again.
Based on the same inventive concept, the present application further provides an image processing apparatus that can implement the functions or steps performed by the first FPGA and the second FPGA in the above method embodiments. The first FPGA and the second FPGA include the functional modules shown in fig. 1. These modules may perform the corresponding functions in the above method examples; for details, refer to the descriptions in the method examples, which are omitted here for brevity.
An embodiment of the present application further provides a computer-readable storage medium including instructions which, when run on a computer, cause the computer to execute the method in the foregoing method examples; for details, refer to the descriptions in the method examples, which are not repeated here.
Those of ordinary skill in the art will understand that the various numbers such as "first" and "second" mentioned in this application are used only for convenience of description; they neither limit the scope of the embodiments of this application nor indicate an order. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one" means one or more. "At least two" means two or more. "At least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software units may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (9)

1. An image processing method applied to a first field programmable gate array (FPGA) and a second FPGA, wherein the first FPGA is connected with a client and the second FPGA is connected with a camera, the method comprising:
the first FPGA receives the image data transmitted by the second FPGA, performs circle searching processing on the received image data, and outputs a circle searching processing result and the image data to the client;
the second FPGA receives original image data from the camera and processes the original image data to obtain a first processing result; the first processing result is used for the first FPGA to combine with original image data of an object to be detected, and the first processing result is used for identifying the object to be detected;
the first FPGA receives the image data transmitted from the second FPGA, and the method comprises the following steps:
the first FPGA sends a first instruction from the client to the second FPGA, and the first instruction is used for indicating online visual measurement; the first FPGA receives original image data sent by the second FPGA from the camera; or,
the first FPGA sends a second instruction from the client to the second FPGA, wherein the second instruction is used for indicating off-line visual measurement; the first FPGA receives image data read from an SSD storage array by the second FPGA, the SSD storage array is used for storing the image data of the second FPGA after the second FPGA compresses original image data from the camera, and the offline visual measurement is used for indicating the first FPGA to output the received image data and a circle searching processing result together; or,
the first FPGA sends a third instruction from the client to the second FPGA, and the third instruction is used for indicating offline data uploading; the first FPGA receives image data read from an SSD storage array from the second FPGA, the SSD storage array is used for storing the image data of the second FPGA after the second FPGA compresses original image data from the camera, and the offline data uploading is used for indicating the first FPGA to upload the received image data.
2. The method of claim 1, wherein the second FPGA receiving original image data from the camera and processing the original image data to obtain a first processing result comprises:
the second FPGA sequentially stores the original image data into a first cache and a second cache in a data buffering module;
when the original image data is stored in the second cache, the second FPGA reads the original image data from the first cache and processes the read original image data; or, when the original image data is stored in the first cache, the second FPGA reads the original image data from the second cache, and processes the read original image data.
3. The method of claim 2, wherein the second FPGA reading the raw image data from the first cache or the second cache comprises:
if the second FPGA detects that a first signal is set, the second FPGA does not read the original image data from the first cache or the second cache; otherwise, the second FPGA reads the original image data from the first cache or the second cache, wherein the first signal is generated according to a data processing rate.
4. The method of any one of claims 1-3, wherein the second FPGA sequentially storing the original image data into the first cache and the second cache in the data buffering module comprises:
the second FPGA judges whether a frame loss condition is met or not;
and when the frame loss condition is determined to be met, the second FPGA discards the current image frame.
5. The method of claim 4, wherein the frame loss condition is any one of the following conditions:
the number of image frames in the first cache and the second cache is equal to a preset number of stored image frames;
the number of image frames in the first cache is equal to the preset number of stored image frames, and the second cache is in a reading state;
the number of image frames in the second cache is equal to the preset number of stored image frames, and the first cache is in a reading state.
6. The method according to claim 5, wherein base addresses of the first cache and the second cache are predefined; when the original image data is stored into the first cache or the second cache, writing starts from the base address of the respective cache plus an offset address, in units of frames; and when the stored original image data is read from the first cache or the second cache, reading starts from the base address of the respective cache plus an offset address, in units of lines.
7. The method of claim 6, wherein the method further comprises:
the first FPGA is used for receiving configuration parameters from the client and sending the configuration parameters to the second FPGA through a serial port channel between the first FPGA and the second FPGA, and the configuration parameters are used for configuring one or more of image processing parameters of the second FPGA for processing image data and parameters of the camera.
8. An image processing apparatus, comprising a processor coupled to a memory, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory, so that the apparatus implements the method of any one of claims 1 to 7.
9. A computer-readable storage medium storing a computer program which, when executed by a computer, causes the computer to perform the method of any one of claims 1 to 7.
CN202110179495.9A 2021-02-09 2021-02-09 Image processing method, device and storage medium Active CN112995505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110179495.9A CN112995505B (en) 2021-02-09 2021-02-09 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112995505A CN112995505A (en) 2021-06-18
CN112995505B 2022-06-17

Family

ID=76393844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110179495.9A Active CN112995505B (en) 2021-02-09 2021-02-09 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112995505B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691741A (en) * 2021-07-20 2021-11-23 上海安路信息科技股份有限公司 Display method and device for video image rotation
CN115550589B (en) * 2022-08-12 2024-05-24 哈尔滨工业大学 High-speed real-time conversion device and method for CoaXPress interface data to CameraLink interface data based on FPGA

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859264A (en) * 2017-11-30 2019-06-07 北京机电工程研究所 Vision-guided aircraft capture and control tracking system
CN110544335A (en) * 2019-08-30 2019-12-06 北京市商汤科技开发有限公司 Object recognition system and method, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5364531B2 (en) * 2009-10-13 2013-12-11 株式会社ホンダエレシス In-vehicle image recognition system
CN103075999B (en) * 2013-01-23 2014-12-24 四川电力科学研究院 Real-time multi-target position detection method and system based on image
US9953432B2 (en) * 2016-08-17 2018-04-24 David M. Smith Systems and methods of detecting motion
CN106851084B (en) * 2016-11-21 2019-12-20 北京空间机电研究所 On-satellite real-time processing algorithm annotation updating platform for remote sensing camera
US10255525B1 (en) * 2017-04-25 2019-04-09 Uber Technologies, Inc. FPGA device for image classification
CN111160224B (en) * 2019-12-26 2022-04-05 浙江大学 High-speed rail contact net foreign matter detection system and method based on FPGA and horizon line segmentation

Also Published As

Publication number Publication date
CN112995505A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112995505B (en) Image processing method, device and storage medium
US20190163364A1 (en) System and method for tcp offload for nvme over tcp-ip
US7680962B2 (en) Stream processor and information processing apparatus
KR101245485B1 (en) Methods, computer program products and apparatus providing improved image capturing
US10321179B2 (en) System and monitoring of video quality adaptation
US9565350B2 (en) Storyboards for capturing images
CN108984442A Binarization-based acceleration control system, chip and robot
CN114286035B (en) Image acquisition card, image acquisition method and image acquisition system
CN108667740B (en) Flow control method, device and system
WO2014092551A1 (en) System and method for optimal memory management between cpu and fpga unit
CN116996647B (en) Video transmission method of BMC, BMC and system-level chip
CN115061959B (en) Data interaction method, device and system, electronic equipment and storage medium
CN117033275A (en) DMA method and device between acceleration cards, acceleration card, acceleration platform and medium
JP6058687B2 (en) Method and apparatus for tightly coupled low power imaging
JP6894736B2 (en) Recording device, control method, and program
CN103475871A (en) High-speed camera system with punctual data transmission function
JP2014167763A (en) Electronic device and method for controlling the same
KR101333277B1 Decompression system and method for satellite image data
JP2017055217A (en) Image processing apparatus and image processing method and imaging device
JP2014170476A (en) Data processor and method for controlling the same
KR20190011056A (en) Apparatus and method for processing data
US11756154B2 (en) Apparatuses and computer-implemented methods for middle frame image processing
KR101359004B1 Real-time image acquisition apparatus adapted to PC capability
JP2017191565A (en) Image processing apparatus, optical code reading device, image processing method, information processing program and record medium
CN111083413A (en) Image display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant