CN114095700A - Panoramic infrared vision ground vibration monitoring method - Google Patents


Info

Publication number: CN114095700A (application CN202111313354.8A; granted as CN114095700B)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 杨旭, 赵洪博, 冯文全, 王强
Applicant and assignee: Hefei Innovation Research Institute of Beihang University
Legal status: Active (granted)

Classifications

    • H04N 7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G01V 1/288 — Event detection in seismic signals, e.g. microseismics
    • G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 18/25 — Pattern recognition: fusion techniques
    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 — Denoising; smoothing
    • G08B 21/182 — Level alarms, e.g. alarms responsive to variables exceeding a threshold
    • H04N 19/172 — Adaptive video coding where the coding unit is a picture, frame or field
    • H04N 5/33 — Transforming infrared radiation
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/10048 — Infrared image


Abstract

The invention relates to a panoramic infrared vision ground vibration monitoring method comprising a servo part and an image processing part. The servo part controls the horizontal rotation of two vertically mounted uncooled vanadium-oxide infrared cameras; the video images from the two cameras are encoded, framed, and output through an optical fiber interface. The image processing part receives the video data through the optical port and decodes and deframes it according to the transmission protocol to obtain image A and image B. The two simultaneous frames undergo column fusion with their respective previous frames, which reduces image noise and smooths the stitching seams, and the processed images are stored in an off-chip DDR2 SDRAM through a DDR2 SDRAM arbitration and controller module. Control signals are generated by the image processing part and drive the binocular infrared camera and the servo part. The invention achieves uninterrupted 24-hour scanning and monitoring of the whole area within a horizontal 360-degree field of view, greatly improving monitoring performance and efficiency.

Description

Panoramic infrared vision ground vibration monitoring method
Technical Field
The invention relates to the technical field of digital image processing, and in particular to a panoramic infrared vision ground vibration monitoring method.
Background
Digital image processing is the set of methods and techniques for denoising, enhancing, restoring, segmenting, and extracting features from discretely sampled images by digital means. The equipment required includes a camera, a digital image acquisition unit (comprising a synchronization controller, an analog-to-digital converter, and a frame memory), an image processing computer, and an image display terminal, with the main processing tasks performed by image processing software. Real-time image processing demands a computing throughput that a general-purpose computer cannot provide, so dedicated image processing devices such as GPUs, FPGAs, and various special-purpose ASICs are required.
The uncooled vanadium-oxide infrared image sensor needs no cryogenic cooler and works at room temperature; it offers fast start-up, low power consumption, small size, light weight, long service life, and low cost. Unlike a visible-light sensor, it can work around the clock: even at night its imaging is clear and stable and essentially unaffected by sunlight, which makes it suitable for applications that require all-day observation. An infrared camera is a complete assembly built around an uncooled vanadium-oxide infrared image sensor together with a matched lens, signal processing circuit, power supply circuit, interface circuit, and so on, and can output analog or digital image signals.
An FPGA (Field Programmable Gate Array) is a programmable digital device whose basic structure comprises programmable input/output units, configurable logic blocks, digital clock management modules, embedded block RAM, routing resources, embedded dedicated hard cores, and low-level embedded functional units. With abundant routing resources, high reprogrammability, high integration density, and low up-front cost, FPGAs are widely used in digital image processing, software-defined radio, data centers, artificial-intelligence acceleration, and other fields. In image processing in particular, implementing an algorithm directly in hardware improves real-time performance by one to two orders of magnitude over a CPU software implementation and reduces power consumption by more than two thirds compared with a GPU.
Disclosure of Invention
The panoramic infrared vision ground vibration monitoring method provided by the invention aims to solve at least one of the technical problems described in the background.
In order to achieve the purpose, the invention adopts the following technical scheme:
a panoramic infrared vision ground vibration monitoring method is composed of a servo part and an image processing part, wherein the servo part and the image processing part share the same circuit form and structure, and different functions are distinguished by running different programs. The servo part completes the horizontal rotation control of two non-refrigeration vanadium oxide type infrared cameras (respectively called as an infrared camera A and an infrared camera B) with 640 x 512 sizes which are vertically arranged, and the video images of the two cameras are encoded and framed and then output through an optical fiber interface. The image processing part receives video data through an optical port, decodes and unframes the video data according to a transmission protocol to obtain an image A and an image B, the two frames of images are subjected to column fusion of front and rear frames at the same time and are used for image noise reduction and smooth splicing marks, and the processed images are stored into an off-chip DDR2SDRAM through a DDR2SDRAM arbitration and controller module. And meanwhile, the image A is taken out to match the front frame and the rear frame of the image, and the matching result is used for controlling a servo system and monitoring vibration. The part synchronously takes out image data stored in DDR2SDRAM, carries out line fusion and division superposition, outputs the image data to a gigabit controller, and finally outputs the image data through the gigabit controller by a UDP protocol for back-end display or further processing and analysis.
The panoramic infrared vision ground vibration monitoring method provided by the invention comprises the following implementation steps:
Step one: two uncooled vanadium-oxide infrared cameras with 640 × 512 resolution are vertically fixed on a structural member of the servo part so that their combined vertical field of view is not less than 20 degrees; the servo mechanism is placed on a platform at the center of the monitoring scene, and the pitch angle of the rocker arm is adjusted so that the monitoring range falls within the output video image. The servo part and the image processing part are connected with an optical fiber and a cable;
Step two: the servo part and the image processing part are each powered on. Infrared camera A and infrared camera B of the servo part synchronously output infrared images under the Camera Link protocol; the two frames are re-encoded and framed in the signal conversion module and then output by the optical port module;
Step three: after the image processing part receives the image data from the servo part through the optical port, it decodes and deframes the data to obtain the original synchronized infrared image A and image B. The leftmost N1 columns of image A and image B are weight-fused with the rightmost N1 columns of the respective previous frames; the weights are obtained by computing and normalizing histogram statistics over the rightmost N1 columns of the previous frame. The processed video data are stored in the DDR2 SDRAM through the arbitration module, and the rightmost N1 columns of each current frame are cached locally for the weighting of the respective next frame;
Step four: in the image matching module, the image A branch data are taken out and matched over a pixel range of 8 (columns) × 100 (rows) × 3 (blocks). The matching consists of a point-to-point XOR between the leftmost 8 × 100 × 3 pixels of the current frame and the rightmost 8 × 100 × 3 pixels of the previous frame, followed by summing the results; the minimum of this sum indicates a successful match between the two frames. The row and column coordinates of the current frame at the minimum are recorded and output, and while the current frame is being processed its rightmost 8 × 100 × 3 pixels are cached locally for processing the next frame;
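The XOR matching of step four can be modeled in software as follows. This is a minimal Python sketch, not the FPGA implementation: pixels are assumed to be 8-bit, only the most significant bit(s) survive the truncation of S41, and the function and variable names are illustrative.

```python
import numpy as np

def xor_match_cost(cur_block, prev_block, msb_bits=1):
    # Truncate to the most significant bit(s), as in S41 (8-bit pixels assumed)
    shift = 8 - msb_bits
    a = (cur_block >> shift).astype(np.uint8)
    b = (prev_block >> shift).astype(np.uint8)
    # Point-to-point XOR, then sum: the matching cost of step four
    return int(np.sum(a ^ b))

def best_match(cur_left, prev_right, rows=100, cols=8):
    # Scan candidate positions on the previous frame's right edge and
    # keep the coordinate with the minimal XOR cost (S43-S44)
    h, w = prev_right.shape
    best_pos, best_cost = None, float("inf")
    for r in range(h - rows + 1):
        for c in range(w - cols + 1):
            cost = xor_match_cost(cur_left[:rows, :cols],
                                  prev_right[r:r + rows, c:c + cols])
            if cost < best_cost:
                best_pos, best_cost = (r, c), cost
    return best_pos, best_cost
```

The hardware evaluates candidates as the pixel stream passes through the Shift FIFO rather than by an explicit double loop, but the cost being minimized is the same.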
Step five: in the data statistics and information processing module, the difference between the row/column coordinates output in the previous step and a set value is defined as delta. Delta is related to the servo rotation speed: the speed of the servo part is adjusted until the absolute value of delta approaches zero, and the speed parameter is then fixed. Delta is subsequently monitored for every frame and the servo speed is adjusted in real time. In steady state delta should fluctuate near zero; if it suddenly fluctuates strongly, the fluctuation intensity and duration are sampled, and when the mean-square value of the samples within 30 seconds exceeds a threshold, the ground is considered to be vibrating and alarm information is output;
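The vibration test in step five amounts to a rolling mean-square check on delta. The sketch below is a software model with illustrative parameters: the patent fixes only the 30-second statistics interval, so the frame rate (25 fps assumed) and the threshold value are hypothetical.

```python
from collections import deque

class VibrationMonitor:
    """Rolling mean-square test on the registration residual delta (step five)."""
    def __init__(self, window=30 * 25, threshold=4.0):
        # window: number of delta samples in the 30 s statistics interval
        # (25 fps assumed); threshold: alarm level, illustrative value
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def update(self, delta):
        # Returns True when the mean square of recent deltas exceeds the
        # threshold, i.e. the ground is considered to be vibrating
        self.samples.append(float(delta))
        mean_sq = sum(d * d for d in self.samples) / len(self.samples)
        return mean_sq > self.threshold
```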
Step six: the video data stored in the DDR2 SDRAM in the preceding steps are read out, row-fused and overlaid with the menu, then sent to the gigabit network controller and finally output over the gigabit network using UDP for back-end display or further processing and analysis.
In step one, the two infrared cameras, whose combined vertical field of view is not less than 20 degrees, are vertically fixed so that the monitoring range lies within the output video image, as follows:
A plot of ground 50 cm × 50 cm is leveled at the center of the scene to be monitored and used to build the base of the mounting platform (hereinafter, the base); the monitoring device, consisting of a bracket, a rocker arm, the housing, functional circuit boards, and so on, is installed on this base. After installation, the device is connected to a notebook computer with a network cable. The monitoring device is then started, the output binocular infrared image is observed through the host-computer software on the notebook, and the field of view of the device's binocular infrared cameras is aimed at the monitoring area by adjusting the rocker arm.
In step two, infrared camera A and infrared camera B of the servo part synchronously output infrared images under the Camera Link protocol; the two frames are re-encoded and framed in the signal conversion module and then output by the optical port module, as follows:
S21: the decoding and framing module locally decodes the Camera Link output of infrared camera A and infrared camera B to obtain the raw data of the two frames' active periods and stores them in a FIFO. When the FIFO is half full, a write request is issued to the DDR2 SDRAM arbitration module; once granted, the data are written sequentially into the designated DDR2 SDRAM regions in increasing address order, with the image of camera A written into region 1 and the image of camera B into region 2;
S22: the optical port controller module issues a read request to the DDR2 SDRAM arbitration module; once granted, video data are read sequentially from regions 1 and 2 of S21 in increasing address order, re-framed according to a custom protocol, and sent to the external optical port device;
S23: the DDR2 SDRAM arbitration and controller module responds to read and write requests initiated from outside; the granted read or write module temporarily occupies the transfer bus and writes data to, or reads data from, the DDR2 SDRAM.
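A toy software model of the arbitration in S23 may clarify the request/grant/release cycle. The grant policy is not specified in the patent, so fixed priority (lowest port number wins) is assumed here, and the class and method names are hypothetical.

```python
class DDR2Arbiter:
    """Model of the S23 DDR2 SDRAM bus arbitration (fixed priority assumed)."""
    def __init__(self, n_ports):
        self.pending = [False] * n_ports   # outstanding requests per port
        self.owner = None                  # port currently holding the bus

    def request(self, port):
        # A read/write module asks for the transfer bus
        self.pending[port] = True

    def grant(self):
        # Grant the bus to the lowest-numbered pending port, if it is free
        if self.owner is None:
            for p, wants in enumerate(self.pending):
                if wants:
                    self.owner = p
                    break
        return self.owner

    def release(self):
        # The owner finishes its burst and frees the bus
        if self.owner is not None:
            self.pending[self.owner] = False
            self.owner = None
```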
In step three, after the optical port receives the image data, the data are decoded and deframed to obtain the original synchronized infrared image A and image B, which are then weight-fused in the column direction, as follows:
S31: the input data received by the optical port are deframed according to the custom protocol of S22 to recover the raw image data of infrared camera A and infrared camera B, which are buffered in the A-path FIFO A1 and the B-path FIFO B1 respectively;
S32: under the control of the scheduling state machine, data are taken from FIFO A1 and FIFO B1 simultaneously and regional histograms are computed over the rightmost N1 columns of the image A and image B data; the statistics are normalized to obtain the per-column weighting coefficients A1, A2, ..., AN1 and B1, B2, ..., BN1;
S33: the rightmost N1 columns of the previous frame are fetched simultaneously from the A-path FIFO A2 and the B-path FIFO B2 and weight-fused with the leftmost N1 columns of the current frame, each column weighted by the corresponding coefficient A1, A2, ..., AN1 or B1, B2, ..., BN1;
S34: the column-fused data issue a write request to the DDR2 SDRAM arbitration module; once granted, they are written sequentially into the designated DDR2 SDRAM regions in increasing address order, with the image of camera A written into region 3 and the image of camera B into region 4. The image data of camera A are also output synchronously for the next-stage image registration;
S35: while the two frames are being fused, the rightmost N1 columns of the current frame are synchronously stored into the A-path FIFO A2 and the B-path FIFO B2 for the weighting of the respective next frames.
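The column fusion of S31 through S35 can be sketched in software as follows. This is a minimal Python model assuming 8-bit pixels; the patent derives the weights from normalized histogram statistics of the previous frame's rightmost N1 columns but does not fix the exact reduction, so a max-normalized column mean stands in here, and `column_fuse` is an illustrative name.

```python
import numpy as np

def column_fuse(prev_right, cur_left):
    """Blend the overlap between consecutive frames (S33).

    prev_right: (H, N1) rightmost N1 columns of the previous frame
    cur_left:   (H, N1) leftmost  N1 columns of the current frame
    """
    # Per-column statistic over the previous frame's edge (S32); the patent
    # uses histogram statistics, a column mean is a stand-in here
    stat = prev_right.mean(axis=0).astype(np.float64)
    w = stat / (stat.max() + 1e-9)                 # normalize weights to [0, 1]
    fused = w * prev_right + (1.0 - w) * cur_left  # broadcast over rows
    return np.rint(fused).astype(prev_right.dtype)
```

In hardware this runs as a streaming pipeline; the model above operates on whole column blocks only for clarity.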
In step four, "the image A branch data are taken out and matched over a pixel range of 8 (columns) × 100 (rows) × 3 (blocks)" as follows:
S41: the input image is truncated, keeping only the 1 or 2 most significant bits;
S42: the truncated data pass through a Shift FIFO structure that generates 100 taps, each feeding an 8-stage pipeline, so that a 100 × 8 pixel matching region is buffered in total;
S43: the 100 × 8 pixels buffered for the current frame are XORed element-wise with the 100 × 8 pixels buffered from the designated starting coordinate on the right side of the previous frame, and the XOR results are summed. The same operation is performed on three 100 × 8 pixel regions per frame image;
S44: the sums from S43 are compared in real time; the pixel coordinate at the minimum is stored and the result is output;
S45: the 100 × 8 pixel region starting at the designated coordinate on the right side of the current frame is buffered for the XOR operation of the next frame.
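The Shift FIFO of S42 can be modeled in software as a sliding window over a raster-order pixel stream: retaining the last (rows - 1) × width + cols samples makes a rows × cols window available on every clock. The class name is illustrative; `width` is the image width (640 for the cameras used), and the 100 × 8 default matches the patent's matching region.

```python
from collections import deque

class ShiftFIFOWindow:
    """Raster-order sliding window, a software model of the S42 Shift FIFO."""
    def __init__(self, width, rows=100, cols=8):
        # width: image width in pixels; rows x cols: the matching window
        self.width, self.rows, self.cols = width, rows, cols
        depth = (rows - 1) * width + cols
        self.buf = deque([0] * depth, maxlen=depth)

    def push(self, pixel):
        # One pixel per clock; the retained samples always contain the
        # rows x cols window whose bottom-right corner is the new pixel
        self.buf.append(pixel)
        return [[self.buf[r * self.width + c] for c in range(self.cols)]
                for r in range(self.rows)]
```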
In step five, the servo speed is adjusted and vibration monitoring and early warning are performed using delta, the difference between the row and column coordinates output in the previous step and the set value, as follows:
S51: the coordinates (x, y) of the input pixel are averaged over a continuous period of T seconds to obtain the coordinate mean (x̄, ȳ) for the period;
S52: (x̄, ȳ) is differenced against the set value (x̄0, ȳ0) to obtain the difference delta;
S53: v is computed from the relation between the rotation speed v and the difference delta, and converted into servo control parameters;
S54: the servo control parameters are formatted and output to control the servo mechanism;
S55: consecutive values of delta are monitored in real time and their residual variation is compared; when the residual exceeds the set threshold, an interrupt signal is sent out for monitoring and alarm.
In step six, the video data stored in the DDR2 SDRAM are read out, row-fused and overlaid with the menu, and output to the gigabit network controller, as follows:
S61: the data of infrared camera A and infrared camera B are read out synchronously from regions 3 and 4 of S34, and the overlapping rows of the two frames of images are weight-fused;
S62: data are read from the menu area of the DDR2 SDRAM and overlaid on the fused data;
S63: the menu-overlaid image data are sent to a bandwidth adaptation module, which keeps the read and write bandwidths consistent and frames the data; the framed data are output by the gigabit network MAC module.
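The UDP output of S63 can be sketched as splitting one frame into datagrams. The 8-byte big-endian header (width, height, byte offset) is a hypothetical layout chosen for illustration; the patent specifies only that the gigabit MAC outputs UDP datagrams for back-end display.

```python
import socket
import struct

def send_frame_udp(frame_bytes, width, height,
                   addr=("127.0.0.1", 5000), payload=1024):
    """Split one frame into UDP datagrams (software model of S63).

    Header layout (width, height, byte offset) is an assumption.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    count = 0
    try:
        for off in range(0, len(frame_bytes), payload):
            header = struct.pack(">HHI", width, height, off)
            sock.sendto(header + frame_bytes[off:off + payload], addr)
            count += 1
    finally:
        sock.close()
    return count
```

The offset field lets the receiver reassemble a frame even when datagrams are reordered or lost, which plain UDP does not guarantee.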
Through the above steps, the FPGA-based panoramic infrared vision ground vibration monitoring method and device achieve uninterrupted 24-hour scanning and monitoring of the whole area within a horizontal 360-degree field of view, greatly improving monitoring performance and efficiency.
According to the technical scheme above, the panoramic infrared vision ground vibration monitoring method provided by the invention can monitor the micro-vibrations that precede ground deformation symptoms such as settlement, wrinkling, and fracture, providing an early-warning capability. Traditional monitoring based on high-precision differential GNSS or microseismic sensors covers only single or multiple points, with limited range and precision. The invention scans and monitors the whole area within a horizontal 360-degree field of view continuously for 24 hours; the monitoring range depends on the vertical field of view of the infrared lens and the monitoring precision on the pixel size of the infrared sensor, greatly improving performance and efficiency over traditional methods.
Specifically, the advantages of the invention are as follows:
the invention realizes the panoramic infrared vision ground vibration monitoring method and device based on the FPGA, and has low complexity, accurate measurement result and high reliability.
The invention realizes the panoramic infrared vision ground vibration monitoring method and device based on the FPGA, the device has small volume, is easy to integrate with the prior GNSS monitoring equipment, effectively monitors a large-area, and obviously improves the monitoring efficiency.
The invention realizes a panoramic infrared vision ground vibration monitoring method and device based on FPGA, the device combines the infrared vision characteristic, and other functions such as community security and forest fire prevention in all-day time can be flexibly expanded on the characteristic.
Drawings
FIG. 1 is a system circuit hardware framework diagram;
FIG. 2 is a system digital logic framework diagram;
FIG. 3 is a schematic view of the operation of the monitoring device;
FIG. 4 is a schematic diagram of the system operation principle;
FIG. 5 is a block diagram of DDR2SDRAM controller and arbitration logic;
FIG. 6 is a block diagram of image column fusion logic;
FIG. 7 is a block diagram of image registration logic;
FIG. 8 is a block diagram of data statistics and information processing logic;
FIG. 9 is an image output logic block diagram;
FIG. 10 is the DDR2 SDRAM controller and arbitration module operating state machine;
FIG. 11 is a servo section interface timing; wherein 11a is the camera input timing and 11b is the optical port output timing;
FIG. 12 is an image column fusion module timing sequence;
FIG. 13 is a schematic diagram of a Shift FIFO architecture;
FIG. 14 is an image registration working principle;
FIG. 15 is an image registration module timing sequence;
FIG. 16 is an image row fusion module timing sequence;
FIG. 17 is an embedded program flowchart;
table 1 is monitoring device performance parameters;
table 2 is the servo section logical interface;
table 3 is the column fusion part logical interface;
table 4 is the row merge portion logical interface;
table 5 is a DDR2SDRAM memory plan.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the invention.
As shown in fig. 1, the hardware of the panoramic infrared visual ground vibration monitoring method of this embodiment mainly comprises a video input interface circuit, video output interface circuits, a video image processing circuit, a storage circuit, control interface circuits, and a power supply circuit. The video input interface circuit mainly converts differential serial video data in Camera Link form into local single-ended parallel video data. There are three video output interface circuits: the first converts local single-ended parallel video data back into differential serial Camera Link data; the second converts local single-ended parallel video data into optical signals output by an optical module, with a serializer circuit and a level conversion circuit in between; the third is a gigabit network circuit that outputs the video data after framing them according to the UDP protocol. The video image processing circuit is a system-level circuit built around an FPGA and handles the FPGA's power-on configuration and operation. The storage circuit is a large-capacity memory composed of several DDR2 SDRAM chips, responsible for temporary storage of programs and data while the system operates. The control interface circuits include an RS422 interface, a B2B high-speed board-to-board interconnect, and the like, and are mainly used to transfer control signals. The power supply circuit powers and initializes all parts of the system, providing sufficient current to ensure stable and reliable operation.
Fig. 2 shows the system framework. The whole system mainly comprises the binocular infrared camera, the servo part, and the image processing part. Video data are generated by the binocular infrared camera, pass through the servo part, and are transmitted to the image processing part; control signals are generated by the image processing part and drive the binocular infrared camera and the servo part. The working process is as follows. While the system operates, the binocular infrared camera outputs two channels of differential serial video data in Camera Link form; the video data first enter the servo part and, after the video input interface circuit, enter the FPGA chip in single-ended parallel form. Inside the chip the data are first decoded, redundant frame headers and inter-frame data are removed to obtain two channels of complete, valid video data, which are re-framed and sent to the DDR2 SDRAM controller and arbitration circuit. That circuit initializes and performs read/write control of the external DDR2 SDRAM chips and is responsible for video data access. The optical port controller operates the external optical module, converting parallel video data into serial optical signals for onward transmission. The servo circuit performs the turntable speed control and the synchronous control of the external cameras' shutters. The video data leave the servo part through the optical port and enter the image processing part.
In the image processing part, the serial optical signals are first converted into local parallel video data by the optical-port local decoding logic. The video data passes through the image column fusion module and the image registration module; servo control parameters are generated in the statistics and information processing module, and the turntable of the servo part and the camera synchronous shutter are controlled through the RS422 circuit. After the video data passes through the image column fusion module, it is simultaneously sent to the DDR2SDRAM controller and arbitration circuit, which complete initialization and read-write control of the DDR2SDRAM chips of the external storage circuit and are responsible for the access of video data. The image row fusion module reads the video data out of the external storage circuit through the DDR2SDRAM controller and arbitration circuit, performs row fusion, and outputs the result to the division and superposition module. The division and superposition module takes a division graph from a designated memory area of the external storage circuit, superposes it on the video data sent by the image row fusion module, and sends the composite video data to the gigabit network controller module; the division graph is written into the designated area of the DDR2SDRAM by a program running on the embedded controller NIOS II core. The video data is framed in the gigabit network controller module according to the UDP (User Datagram Protocol) and finally output to the upper computer through the gigabit network.
The specific implementation steps are as follows:
The first step: two uncooled vanadium oxide infrared cameras with 640 × 512 resolution are vertically fixed on a structural member of the servo part, the combined vertical field of view of the two cameras being not less than 20 degrees. The servo mechanism is placed on a platform at the center of the monitoring scene, and the pitch angle of the rocker arm is adjusted so that the monitoring range lies within the output video image. The servo part and the image processing part are connected through an optical fiber and a cable.
Fig. 3 is a schematic diagram showing the operation of the monitoring device. The performance parameters of the monitoring device are shown in table 1.
TABLE 1 monitoring device Performance parameters
[Table 1 is reproduced as an image in the original publication and is not recoverable here.]
Fig. 4 is a schematic diagram illustrating the operation principle of the system.
The synchronization is divided into two types. One is preset bit synchronization: this signal is output from the servo section to the image processing section as a reference origin, and the position at which the preset bit sync is generated can be set. The other is internal synchronization, which is generated internally by the servo part and supplied simultaneously to the two infrared cameras; it is invisible externally, and each infrared camera outputs an image a fixed delay ΔT after receiving the internal synchronization signal. The period T_shutter of the internal synchronizing signal and the number N_cycle generated per revolution can be set.
In the normal working state of the system, after the period T_shutter of the internal synchronizing signal, the number N_cycle generated per revolution and the rotational speed V_cycle are reasonably set, an overlap region exists between the front and rear image columns output between two adjacent internal synchronization signals. When the overlap width equals the set value N_fuse (adjustable, by default 8 or 16 columns), image matching is considered successful, and the logic performs weighted filtering according to the set column coincidence value (8 or 16 columns).
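As a quick arithmetic check of these parameters, the steady-state turntable speed implied by the sync settings can be sketched as follows. The relation assumed here — one internal sync every T_shutter seconds and N_cycle syncs per revolution — follows the definitions above; the concrete numbers are illustrative, not from the patent.

```python
def rotational_speed_deg_per_s(t_shutter, n_cycle):
    """Steady-state turntable speed in degrees per second, assuming one
    internal sync every t_shutter seconds and n_cycle syncs per revolution,
    so one revolution takes n_cycle * t_shutter seconds."""
    return 360.0 / (n_cycle * t_shutter)

# Example: 50 Hz shutter (20 ms period), 200 syncs per revolution
# -> one revolution every 4 s, i.e. 90 deg/s.
speed = rotational_speed_deg_per_s(0.02, 200)
```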
The filtering result is stored in DDR2SDRAM and finally output to form the panoramic stitching effect. During the filtering, fusion and stitching of the images, the rotational speed V_cycle is adjusted according to the matched pixel coordinate position (x̄, ȳ) (the corresponding expression appears as an image in the original), and the occurrence and degree of ground vibration can be judged by analyzing the residuals of the front and rear frames in real time.
The second step: the infrared camera A and the infrared camera B of the servo part synchronously output infrared images in the CameraLink protocol; the two frames of infrared images are re-encoded and framed in the signal conversion module and then output by the optical interface module.
The core logic of this portion is the DDR2SDRAM controller and arbitration module, as shown in FIG. 5. The module mainly comprises an 8-to-1 arbitration module, a round-robin scheduling module, a DDR2SDRAM controller module and a digital PHY. An external read-write request first undergoes arbitration for bus use through the 8-to-1 arbitration module; the request that obtains the bus initiates one read or write of DDR2SDRAM, with the read-write control completed by the DDR2SDRAM controller. The bottom layer of the controller is the digital PHY, which completes timing closure of the DDR2SDRAM chip interface.
The module inputs 8 Avalon-MM Burst channels and 1 Avalon-MM Random channel. Different channels initiate access requests to the DDR2SDRAM controller in a time-slice rotation mode: once the controller is idle, the channel initiating the access request is responded to and given control of the bus; the channel releases bus control after the access operation is completed, and the arbiter returns to the channel-rotation working state. The operating state machine of this section is shown in figure 11.
Table 2 is the input and output logic interface of this part. The interface logic mainly completes the reception of the input image data of camera A and camera B and sends the stored image data to the optical port according to a self-defined protocol. Fig. 11 is a timing chart of this part of the interface.
TABLE 2 Servo logical interface
[Table 2 is reproduced as images in the original publication and is not recoverable here.]
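The time-slice rotation (round-robin) arbitration among the access channels can be sketched in software as follows. The class name and the request interface are illustrative; the actual logic is the Avalon-MM hardware arbiter described above.

```python
from collections import deque

class RoundRobinArbiter:
    """Software sketch of 8-to-1 round-robin (time-slice rotation)
    arbitration: channels are polled in turn, the first one with a pending
    request is granted the bus, and polling resumes after that channel."""

    def __init__(self, n_channels=8):
        self.order = deque(range(n_channels))

    def grant(self, pending):
        """Return the channel granted the bus, or None if none is pending."""
        for _ in range(len(self.order)):
            ch = self.order[0]
            self.order.rotate(-1)  # next poll starts after this channel
            if ch in pending:
                return ch
        return None

arb = RoundRobinArbiter()
g1 = arb.grant({3, 5})  # channels 0..2 idle, channel 3 is polled first
g2 = arb.grant({3, 5})  # polling resumed after 3, so channel 5 wins next
```

Granting resumes after the last winner, so no single channel can starve the others — the fairness property the time-slice rotation provides in hardware.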
The third step: after the image processing part receives the image data of the servo part through the optical port, the data is decoded and deframed to obtain the original synchronous infrared image A and infrared image B. The leftmost N1_column columns of data of image A and image B are weighted-fused with the rightmost N1_column columns of the previous frame image; the weights are the normalized histogram statistics of the rightmost N1_column columns of the previous frame image. The processed video data is stored in DDR2SDRAM through the arbitration module, and at the same time the rightmost N1_column columns of each current frame are cached locally for the weighting of the respective next frame's data.
Fig. 6 shows the internal logical framework of the module. The module mainly comprises an A-path input cache A1, an A-path input cache A2, a B-path input cache B1, a B-path input cache B2, a regional image histogram statistics module, a weighted fusion operation and a scheduling state machine. The A/B input caches A1/B1 mainly prepare the data to be processed for the regional image histogram statistics, while the A/B input caches A2/B2 prepare the data for the weighted fusion operation; data caching and operation inside the module run in a coordinated, orderly manner under the control of the scheduling state machine.
The reason for fusing the N1_column columns of data between the front and rear frames of camera A and camera B is that, in the horizontal direction, the images generated during servo rotation produce obvious seams at the edges of the stitched image because of scene change and the automatic adjustment of camera contrast; if left unprocessed, this would adversely affect the subsequent data statistics.
Let D(n, m) be the result of fusing the m-th column of pixels in the current n-th frame, where m is in the range [1, N1_column], and let μ_m be the normalized local histogram statistic of the m-th column of pixels in the fusion region of the (n−1)-th frame. Then the specific value of D(n, m) is obtained as follows:

D(n, m) = μ_m × D(n−1, m) + (1 − μ_m) × D′(n, m)

where D′(n, m) is the input data of the current frame.
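The per-column weighted fusion above can be sketched directly with NumPy. Array shapes and the weight values are illustrative; the histogram-based computation of μ_m is not reproduced here.

```python
import numpy as np

def fuse_columns(prev_right, cur_left, mu):
    """Per-column weighted fusion over the overlap region:
    D(n, m) = mu_m * D(n-1, m) + (1 - mu_m) * D'(n, m).

    prev_right : (rows, N1_column) rightmost columns of the previous frame
    cur_left   : (rows, N1_column) leftmost columns of the current frame
    mu         : (N1_column,) per-column weights, assumed to come from the
                 normalized histogram statistics of prev_right
    """
    # mu broadcasts along rows, so column m is weighted by mu[m]
    return mu * prev_right + (1.0 - mu) * cur_left

prev = np.full((100, 8), 200.0)  # previous frame, overlap region
cur = np.full((100, 8), 100.0)   # current frame, overlap region
mu = np.full(8, 0.25)
fused = fuse_columns(prev, cur, mu)  # 0.25*200 + 0.75*100 = 125 everywhere
```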
Table 3 is the input and output logic interface of this part. Fig. 12 is a timing chart of this part of the interface.
Table 3 column fusion part logical interface
[Table 3 is reproduced as an image in the original publication and is not recoverable here.]
The fourth step: in the image matching module, the image A branch data is taken out and matched over an 8 (column) × 100 (row) × 3 (block) pixel range. The matching process performs a point-to-point, one-to-one exclusive-OR operation between the leftmost 8 × 100 × 3 pixel range of the current frame data and the rightmost 8 (column) × 100 (row) × 3 (block) pixel range of the previous frame, and sums the results. When the sum is minimal, the match between the previous and current frames has succeeded; the row and column coordinates of the current frame at that moment are recorded and the coordinate values output. As the current frame is processed, its rightmost 8 (column) × 100 (row) × 3 (block) pixel data is cached locally for processing the next frame's data.
Fig. 7 shows the internal logical framework of the module. The module mainly comprises a data truncation operation, a Shift-FIFO cache module, a to-be-matched cache, a frame-tail cache and a matching statistics operation. The data truncation operation truncates the input large-bit-width video data to a small bit width, saving hardware resources. The Shift-FIFO cache module, the to-be-matched cache and the frame-tail cache all prepare data for the matching statistics operation, which records the row-column coordinate position of the minimum value and outputs it.
The core of the module is a real-time image search-and-match algorithm that exploits the convenience of the Shift-FIFO structure in the FPGA. After the input data enters the module, truncation or low-pass filtering is performed to obtain a slowly-varying image. The truncated/filtered image is fed into a Shift FIFO; FIG. 13 is a structural schematic diagram of the Shift FIFO inside the FPGA. The Shift FIFO is a 100 × 8 × N-bit structure, i.e. a 100 × 8 window opened on the image as the search area. When a new frame of image is input, the 100 × 8 window is traversed successively over the whole image; on each clock, the data in the window is XORed with the data of the to-be-matched area stored from the previous frame, the results are continuously compared, and the coordinate position of the minimum result is updated. When the window sweeps past the area to be compared, the data in the window at that moment is stored as the to-be-compared area for the next frame. The working of the whole algorithm is shown in fig. 14: the blue frame represents the to-be-matched area of the previous frame, and the red frame represents the sliding window that changes in real time. The sliding window scans the whole image from the upper left corner along the arrow in the figure, different matching results are calculated and stored at different moments, and finally the coordinate of the minimum position is output as the rotation-speed adjustment parameter of the turntable. Fig. 15 is a timing chart of this part of the interface.
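The sliding-window XOR matching can be sketched in software as follows. The hardware pipelines the window through a Shift FIFO; this sketch uses a direct nested loop instead, which is equivalent in result, and the image sizes are illustrative.

```python
import numpy as np

def match_window(image, template):
    """Slide a window the size of `template` over `image`, XOR it
    point-to-point with the region cached from the previous frame, sum the
    result, and return the (row, col) of the minimum sum -- the best match."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = np.bitwise_xor(image[r:r+th, c:c+tw], template).sum()
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
img = rng.integers(0, 2, size=(120, 40), dtype=np.uint8)  # truncated to 1 bit
tmpl = img[10:110, 25:33].copy()  # cached 100x8 region from the "last frame"
pos = match_window(img, tmpl)     # exact copy -> XOR sum 0 at (10, 25)
```

The minimum XOR sum plays the role of the minimum of the matching statistics operation; its coordinate is what the module outputs as the turntable speed-adjustment parameter.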
The fifth step: in the data statistics and information processing module, the difference between the row/column coordinate values output in the previous step and a set value is defined as Δ. Δ is related to the servo rotation speed; the rotation speed of the servo part is adjusted until the absolute value of Δ is close to zero, and the rotation speed parameter is then fixed. Δ is then monitored while each frame of data is processed, and the rotation speed of the servo part is adjusted in real time. At steady state, the result Δ of the previous step should fluctuate near zero; if Δ suddenly fluctuates strongly, sample statistics are taken of the fluctuation intensity and time, and when the statistical mean-square value of the samples within 30 seconds exceeds a threshold, the ground is considered to be vibrating and alarm information is output.
Fig. 8 shows the internal logical framework of the module. The module mainly comprises statistical coordinate-mean computation, servo control parameter generation, residual monitoring and formatted output. The statistical coordinate mean is continuously compared with the set value in real time; their difference gives the residual, which is monitored to send an alarm signal and is also used to generate the servo control parameter, output in formatted form to control the servo part.
The input pixel coordinate values (x, y) undergo average-value statistics over a continuous time of T seconds to obtain the coordinate mean (x̄, ȳ) for that period, namely the following operations are performed:

x̄ = (T_clk / T) × Σ_{i=1}^{T/T_clk} x_i

ȳ = (T_clk / T) × Σ_{i=1}^{T/T_clk} y_i

where T_clk is the pixel clock period.
The mean (x̄, ȳ) and the set value (x̄₀, ȳ₀) are differenced to obtain the difference value Δ of the current observation epoch; at steady state, the rotational speed v of the servomechanism and the difference value Δ are in a linear relation, namely they satisfy:
v=f(Δ)
the definition of the specific function f is determined by the operating characteristics of the servomechanism. When Δ is not zero, v is continuously adjusted until Δ fluctuates around a zero value. Simultaneously, the values of two adjacent delta are monitored in real time, and residual error changes are compared, namely:
Δ′=Δ(n)-Δ(n-1)
When the residual Δ′ exceeds the set threshold, an interrupt signal is sent outwards for monitoring and alarming.
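The residual monitoring of this step can be sketched in software as follows. The threshold and sample values are illustrative, and the 30-second mean-square statistic described above is omitted for brevity.

```python
def monitor(deltas, threshold):
    """Sketch of the residual monitor: Delta' = Delta(n) - Delta(n-1);
    flag an alarm whenever the residual exceeds the threshold."""
    alarms = []
    for n in range(1, len(deltas)):
        residual = deltas[n] - deltas[n - 1]
        alarms.append(abs(residual) > threshold)
    return alarms

# Steady state: Delta fluctuates near zero; a sudden jump trips the alarm.
flags = monitor([0.1, -0.2, 0.0, 5.0, 4.8], threshold=1.0)
```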
The sixth step: the video data stored in the DDR2SDRAM in the first step is taken out, row fusion and menu superposition are performed, the result is output to the gigabit network controller, and finally the video data is output through the gigabit network by the UDP protocol for back-end display or further processing and analysis.
Fig. 9 shows the internal logical framework of the module. The module mainly comprises an image channel A/B read-out module, a menu area read-out module, a row weighted-fusion operation module, a gigabit network MAC module, a bandwidth adaptation cache module and a scheduling state machine. The video data read out by image channels A and B is used for the row weighted-fusion operation; the division graph read out from the menu area is superposed on the weighted-fused data and sent to the gigabit network MAC, and the data is output through the gigabit network by the UDP protocol under the management of the bandwidth adaptation cache. The whole working process proceeds in an orderly way under the management of the scheduling state machine.
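At the software level, the UDP framing and output performed by the gigabit network controller can be sketched with a standard socket. The destination address/port, chunk size and the 4-byte sequence-number header are illustrative assumptions — the actual framing protocol is self-defined in the FPGA and not specified here.

```python
import socket

def send_frame_udp(frame_bytes, addr=("127.0.0.1", 9999), chunk=1024):
    """Split one video frame into fixed-size UDP datagrams and send them;
    a 4-byte big-endian sequence number is prepended so the receiver can
    reorder and reassemble the frame."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq, off in enumerate(range(0, len(frame_bytes), chunk)):
            payload = seq.to_bytes(4, "big") + frame_bytes[off:off + chunk]
            sock.sendto(payload, addr)
    finally:
        sock.close()
```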
The reason for fusing the N1_row rows of data in the current frames of camera A and camera B is that, in the vertical direction, the overlapping vertical fields of view produce an obvious overlap region at the edge of the stitched image; if left unprocessed, this would adversely affect the subsequent image display.
Let P(n, m) be the result of fusing the m-th row of pixels of the current n-th frame. Since camera A and camera B are vertically aligned, the range of m for camera A is [512 − N1_row, 512], while for camera B the range of m is [1, N1_row]. The specific value of P(n, m) is then obtained as follows:

P(n, m) = 0.5 × P_A(n, 512 − N1_row + m) + 0.5 × P_B(n, m)

where P_A is the data of the current frame of camera A, P_B is the data of the current frame of camera B, and the fusion weight is 0.5.
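The fixed-weight row fusion of the overlap band can be sketched directly from the formula. The overlap width N1_row and the 512-row image height follow the text; the pixel values are illustrative.

```python
import numpy as np

def fuse_rows(frame_a, frame_b, n1_row):
    """P(n, m) = 0.5 * P_A(n, 512 - N1_row + m) + 0.5 * P_B(n, m):
    average camera A's bottom n1_row rows with camera B's top n1_row rows,
    then stack the non-overlapping parts around the fused band."""
    overlap = 0.5 * frame_a[-n1_row:, :] + 0.5 * frame_b[:n1_row, :]
    return np.vstack([frame_a[:-n1_row, :], overlap, frame_b[n1_row:, :]])

a = np.full((512, 640), 80.0)   # camera A (upper field of view)
b = np.full((512, 640), 120.0)  # camera B (lower field of view)
merged = fuse_rows(a, b, n1_row=16)
# merged height: 512 + 512 - 16 = 1008; fused band value: 100.0
```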
The row-fused data is superposed with the menu layer data and finally output through gigabit Ethernet. Table 4 shows the input and output logic interface of this part, Table 5 shows the DDR2SDRAM address assignment, and Fig. 12 shows the timing diagram of this part of the interface.
TABLE 4 Row Convergence part logic interface
[Table 4 is reproduced as images in the original publication and is not recoverable here.]
TABLE 5 DDR2SDRAM memory address assignment

Memory content                           Start address (decimal)   Start address (hex)
NIOS II program and data storage area    0                         0x000000
Camera A buffer areas 1-75               8388608                   0x800000
Camera B buffer areas 1-75               57540608                  0x36E0000
Menu layer cache areas 1-75              106692608                 0x65C0000
DDR2SDRAM self-test area                 155844608                 0x94A0000
The image processing part also integrates an embedded program, and the working flow chart of the program is shown in figure 17.
In conclusion, the invention can continuously scan and monitor the whole area within a horizontal 360-degree field of view, 24 hours a day; the monitoring range depends on the size of the vertical field of view of the infrared lens, and the monitoring precision depends on the pixel size of the infrared sensor. Compared with the traditional method, the invention greatly improves performance and efficiency.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. A panoramic infrared vision ground vibration monitoring method is characterized by comprising the following steps: the method comprises the steps of setting a binocular infrared camera, a servo part and an image processing part; the video data are generated by the binocular infrared camera, transmitted to the image processing part through the servo part, and control signals are generated by the image processing part and control the binocular infrared camera and the servo part;
the method comprises the following steps:
the method comprises the following steps: placing a servo mechanism on a platform at the center of a monitoring scene, and adjusting the pitch angle of a rocker arm to enable the monitoring range to be in an output video image;
step two: an infrared camera A and an infrared camera B of the servo part synchronously output infrared images of a Cameralink protocol, and the two frames of infrared images are recoded and framed in a signal conversion module and then output by an optical port module;
step three: after the image processing part receives the image data of the servo part through the optical port, the data is decoded and deframed to obtain an original synchronous infrared image A and an original synchronous infrared image B; then the leftmost N1_column columns of data of image A and image B are weighted-fused with the rightmost N1_column columns of the previous frame image, the weights being the normalized histogram statistics of the rightmost N1_column columns of the previous frame image; the processed video data is stored in DDR4 SDRAM through an arbitration module, and at the same time the rightmost N1_column columns of each current frame are cached locally for the weighting of the respective next frame's data;
step four: after the previous step of processing, one branch of data is taken out and matched over an 8-column × 100-row × 3-block pixel range; the matching process performs a point-to-point, one-to-one exclusive-OR operation between the leftmost 8-column × 100-row × 3-block pixel range of the current frame data and the rightmost 8-column × 100-row × 3-block pixel range of the previous frame and sums the results; when the sum is minimal, the match between the previous and current frames has succeeded, the row and column coordinates of the current frame at that moment are recorded, and the coordinate values are output; as the current frame is processed, its rightmost 8-column × 100-row × 3-block pixel data is cached locally for processing the next frame's data;
step five: the difference value between the coordinate values of the row and the column output in the previous step and a set value is defined as delta, the rotating speed of the servo system is adjusted until the absolute value of the delta is close to zero, the rotating speed parameter is fixed, and then the delta is monitored while each frame of data is processed and is used for adjusting the rotating speed of the servo system in real time and outputting alarm information;
step six: and (3) taking out the video data stored in the DDR4 SDRAM in the first step, performing row fusion and menu superposition, outputting the video data to a gigabit network controller, and finally outputting the video data through the gigabit network by a UDP protocol for back-end display or further processing and analysis.
2. The panoramic infrared vision ground vibration monitoring method of claim 1, characterized in that:
the specific process of the step one is as follows:
leveling a 50 cm × 50 cm land at the center of a scene to be monitored for establishing a base of a placement platform, mounting a monitoring device on the base, and connecting the device with a notebook computer through a network cable after the monitoring device is mounted; and then starting the monitoring device, observing the output binocular infrared image through upper computer software on the notebook computer, and aligning the view field of a binocular infrared camera of the monitoring device to a monitoring area through adjusting the rocker arm.
3. The panoramic infrared vision ground vibration monitoring method of claim 2, characterized in that: the second step comprises the following specific processes:
s21, the decoding and framing module locally decodes the CameraLink-protocol output data of the infrared camera A and the infrared camera B to obtain the original data of the valid periods of the 2 frames of images and stores it into a FIFO cache; when the FIFO is half full, a write request is sent to the DDR2SDRAM arbitration module, and after a response is obtained the data is written sequentially into the DDR2SDRAM designated areas in address-ascending order, the camera A image into area 1 and the camera B image into area 2;
s22, the optical interface controller module initiates a read request to the DDR2SDRAM arbitration module, after a response is obtained, video data are sequentially taken out from the address region 1 and the region 2 specified in S21 according to the address increasing sequence, and are sent to an external optical interface device after being framed again according to a self-defined protocol;
the S23 and DDR2SDRAM arbitration and controller module responds to read-write requests initiated from the outside, the read-write module with the response temporarily occupies the transmission bus, and writes or reads data into or from the DDR2SDRAM arbitration and controller module.
4. The panoramic infrared vision ground vibration monitoring method of claim 3, characterized in that: the third step comprises the following specific processes:
s31, deframing the input data received by the optical interface according to the self-defined protocol of S22 to respectively obtain the original image data of the infrared camera A and the infrared camera B, and respectively temporarily storing the original image data into an A path FIFO A1 and a B path FIFO B1;
s32, under the control of the scheduling state machine, data is taken out from FIFO A1 and FIFO B1 simultaneously and regional histogram statistics are performed; the statistical range is the rightmost N1_column columns of the image A and image B data, and the respective statistical results are normalized to obtain the per-column weighting coefficients A_1, A_2, ..., A_N1 and B_1, B_2, ..., B_N1;
s33, the rightmost N1_column columns of the previous frame are fetched from the A-way FIFO A2 and the B-way FIFO B2 simultaneously and weighted-fused with the leftmost N1_column columns of the current frame data, the weighting coefficient of each column being A_1, A_2, ..., A_N1 and B_1, B_2, ..., B_N1;
s34, the weighted-column-fused data issues a write request to the DDR2SDRAM arbitration module; after a response is obtained, the data is written sequentially into the DDR2SDRAM designated areas in address-ascending order, the camera A image into area 3 and the camera B image into area 4; the camera A image data is synchronously output for the image registration of the next stage;
s35, while the two frames are being fused, the rightmost N1_column columns of the current frame are synchronously stored into the A-way FIFO A2 and the B-way FIFO B2 for the weighting of the respective next frame's data.
5. The panoramic infrared vision ground vibration monitoring method of claim 4, characterized in that: the specific process of the step four is as follows:
s41, the input image is truncated, keeping only the 1 or 2 most significant bits;
s42, passing the truncated data through a Shift FIFO structure, wherein the Shift FIFO structure generates 100 Tap taps, each Tap is connected with an 8-level flow structure, and a matching area of 100 x 8 pixels is cached together;
and S43, a one-to-one exclusive-OR operation is performed between the 100 × 8 pixels buffered in the current frame and the 100 × 8 pixels buffered at the designated coordinate starting position on the right side of the previous frame, and the XOR results are summed; the same operation is performed on 3 regions of 100 × 8 pixel size per frame image;
s44, comparing the addition result of S43 in real time, storing the pixel coordinate position at the minimum value, and outputting the result;
s45, buffer the 100 × 8 pixel region data from the designated coordinate on the right side of the current frame for the next frame xor operation.
6. The panoramic infrared vision ground vibration monitoring method of claim 5, characterized in that:
the concrete process of the step five is as follows:
s51, average-value statistics over a continuous time of T seconds are performed on the input pixel coordinate values (x, y) to obtain the coordinate mean (x̄, ȳ) for the period;
s52, the mean (x̄, ȳ) is differenced with the set value (x̄₀, ȳ₀) to obtain the difference value Δ;
s53, calculating v according to the relation between the rotating speed v and the difference value delta, and converting the v into a servo control parameter;
s54, formatting and outputting the servo control parameters for controlling the servo mechanism;
and S55, monitoring the values of delta of two adjacent times in real time, comparing residual variation, and sending an interrupt signal outwards for monitoring and alarming when the residual exceeds a set threshold.
7. The panoramic infrared vision ground vibration monitoring method of claim 6, characterized in that: the concrete process of the step six is as follows:
s61, synchronously extracting the data of the infrared camera A and the data of the infrared camera B from the area 3 and the area 4 in the S34, and carrying out weighted fusion on the line data overlapped by the two frames of images;
s62, taking out data from the menu area of the DDR2SDRAM, and overlapping the data with the fused data;
and S63, sending the image data after the menu is overlapped into a bandwidth adaptation module, ensuring that the read-write bandwidth is consistent and framing, and outputting the framed data by the gigabit network MAC module.
CN202111313354.8A 2021-11-08 2021-11-08 Panoramic infrared vision ground vibration monitoring method Active CN114095700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111313354.8A CN114095700B (en) 2021-11-08 2021-11-08 Panoramic infrared vision ground vibration monitoring method

Publications (2)

Publication Number Publication Date
CN114095700A true CN114095700A (en) 2022-02-25
CN114095700B CN114095700B (en) 2022-09-16

Family

ID=80299254



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012131701A2 (en) * 2011-03-11 2012-10-04 The Tata Power Company Ltd. Fpga system for processing radar based signals for aerial view display
CN104952060A (en) * 2015-03-19 2015-09-30 杭州电子科技大学 Adaptive segmentation extraction method for infrared pedestrian region of interests
CN107169950A (en) * 2017-06-02 2017-09-15 江苏北方湖光光电有限公司 A kind of high-definition picture fusion treatment circuit
CN111193877A (en) * 2019-08-29 2020-05-22 桂林电子科技大学 ARM-FPGA (advanced RISC machine-field programmable gate array) cooperative wide area video real-time fusion method and embedded equipment
CN112598567A (en) * 2020-12-29 2021-04-02 北京环境特性研究所 Method for realizing image enhancement in infrared real-time jigsaw through FPGA


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117350926A (en) * 2023-12-04 2024-01-05 北京航空航天大学合肥创新研究院 Multi-mode data enhancement method based on target weight
CN117350926B (en) * 2023-12-04 2024-02-13 北京航空航天大学合肥创新研究院 Multi-mode data enhancement method based on target weight

Also Published As

Publication number Publication date
CN114095700B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
US11064110B2 (en) Warp processing for image capture
US8570334B2 (en) Image processing device capable of efficiently correcting image data and imaging apparatus capable of performing the same
CN105872354B (en) Image processing system and method, camera system, video generation device and method
WO2019238114A1 (en) Three-dimensional dynamic model reconstruction method, apparatus and device, and storage medium
CN108492322B (en) Method for predicting user view field based on deep learning
US10681313B1 (en) Home monitoring camera featuring intelligent personal audio assistant, smart zoom and face recognition features
CN108848354B (en) VR content camera system and working method thereof
US11514371B2 (en) Low latency image processing using byproduct decompressed images
US20120075409A1 (en) Image segmentation system and method thereof
US20170178395A1 (en) Light field rendering of an image using variable computational complexity
CN112651903B (en) Thermal infrared imager image preprocessing system and preprocessing method thereof
CN114095700B (en) Panoramic infrared vision ground vibration monitoring method
US10861243B1 (en) Context-sensitive augmented reality
JP2007036748A (en) Monitoring system, monitoring apparatus, monitoring method, and program
CN116486250A (en) Embedded multi-channel image acquisition and processing method and system
CN109688328A (en) Method and apparatus for video stitching, fusion and segmentation based on cameras at different viewpoints
CN116824080A (en) Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion
CN112752086B (en) Image signal processor, method and system for environment mapping
US20100053326A1 (en) Image sensor, the operating method and usage thereof
CN115598744A (en) High-dimensional light field event camera based on micro-lens array and extraction method
US11978177B2 (en) Method and system of image processing of omnidirectional images with a viewpoint shift
US20220318962A1 (en) Video systems with real-time dynamic range enhancement
Simic et al. Real-time video fusion implemented in the GStreamer framework
JP2001197479A (en) Method and device for processing differential image
CN113962842B (en) Dynamic stepless derotation system and method based on VLSI high-level synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant