WO2015161490A1 - Target motion detection method for water surface polarization imaging based on compound eyes simulation - Google Patents

Target motion detection method for water surface polarization imaging based on compound eyes simulation

Info

Publication number
WO2015161490A1
WO2015161490A1 (PCT/CN2014/076146, CN2014076146W)
Authority
WO
WIPO (PCT)
Prior art keywords
polarization
image
water surface
target
scene
Prior art date
Application number
PCT/CN2014/076146
Other languages
French (fr)
Chinese (zh)
Inventor
陈哲
徐立中
王鑫
石爱业
王慧斌
严锡君
范超
孔成
Original Assignee
陈哲
徐立中
王鑫
石爱业
王慧斌
严锡君
范超
孔成
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 陈哲, 徐立中, 王鑫, 石爱业, 王慧斌, 严锡君, 范超, 孔成 filed Critical 陈哲
Priority to PCT/CN2014/076146 priority Critical patent/WO2015161490A1/en
Publication of WO2015161490A1 publication Critical patent/WO2015161490A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A target motion detection method for water surface polarization imaging based on compound eye simulation comprises three stages: polarization image acquisition and computation, bionic target detection, and bionic moving-target matching. A bionic approach that simulates the compound-eye vision mechanism is adopted to detect and track water surface targets; during computation, scene and target information is converted into a "compressive sensing" feature represented as a pulse sequence. Subsequent target detection, matching and tracking are all performed on this pulse sequence, and the response timing of the pulses and the pulse-sequence pattern are used to detect, match and track multiple targets in the scene, finally realizing detection of target motion and estimation of the corresponding motion vectors. The method can be applied stably and reliably to water surface target motion detection in complex water surface optical environments, with relatively high computational efficiency.

Description

Target Motion Detection Method for Water Surface Polarization Imaging Based on Compound Eye Simulation

Technical Field

The invention relates to a target motion detection method, and in particular to a target motion detection method for water surface polarization imaging based on compound eye simulation.

Background Art

Optical imaging detection and motion-parameter estimation for water surface targets on oceans, lakes and rivers are widely used in many fields. Optical imaging research for water surface target detection currently concentrates on light intensity or spectral information. However, because of changing natural and climatic conditions and the variable optical properties of targets, water surface target detection based on these optical imaging methods cannot be regarded as universal or optimal. Some studies have therefore turned to other optical properties, the most typical of which is optical polarization. Since researchers first introduced optical polarization into oil-film detection in 1992 and demonstrated that polarization helps improve the accuracy of water surface target detection, differences in optical polarization have gradually come to be regarded as a new basis for water surface target detection.

Because of the extremely complex optical environment at the water surface, the acquired information remains highly unstable and contains a large amount of noise even with a relatively sophisticated front-end optical information acquisition unit. To complete the detection task, the system must therefore also rely on the support of a back-end detection algorithm. The algorithms currently used for water surface target detection are mainly based on background modeling or non-background modeling. Target detection methods based on background modeling have a higher detection rate and stronger noise immunity, and are therefore more robust; however, the high computational complexity of the background model and the variability of natural backgrounds seriously limit how widely the approach can be applied.

By comparison, non-background-modeling detection algorithms do not require a cumbersome modeling process and perform target detection only from the information reflected by the water surface image itself, which is sometimes more successful. Without the support of a background model, however, distinguishing background information from target information becomes the key to this approach. The present invention addresses this key point through a bionic technical solution modeled on a water-loving insect, the dragonfly.
Summary of the Invention

The object of the present invention is to provide a target motion detection method for water surface polarization imaging based on compound eye simulation, which solves the technical problem of detecting and tracking targets in a complex water surface optical environment.

To solve the above technical problem, the present invention provides a target motion detection method for water surface polarization imaging based on compound eye simulation, comprising the following steps.

Step 1: obtain three-channel polarization images of the water surface scene through three sets of polarized water surface imaging systems, register the three-channel polarization images, and compute on them with the Stokes model to obtain a degree-of-polarization image in which the optical information of the water surface target and the water surface background is contrasted and fused.

Step 2: perform overlapping sampling of the degree-of-polarization image by computer simulation of dragonfly compound-eye vision to obtain the degree-of-polarization information of the image, and then construct large-scene and small-scene channels with the simulated dragonfly vision; the large-scene and small-scene channels compress the water surface background and the water surface moving target in the degree-of-polarization image, respectively, according to the degree-of-polarization information, to obtain sensitivity features of the water surface moving target relative to the water surface background.

Here, the method of overlapping sampling of the degree-of-polarization image by computer simulation of dragonfly compound-eye vision comprises: constructing a virtual dragonfly ommatidium group from several local image windows; the virtual ommatidium group emulates, in a sliding-scan fashion, how several ommatidia of the dragonfly compound-eye structure sample the degree-of-polarization image with overlap, so as to obtain the degree-of-polarization information of the image. The sliding scan is performed in one of two ways: the ommatidia distributed around the periphery of the virtual group each slide toward the ommatidium at the center, realizing a convergent sliding scan, or the ommatidia gathered at the center each slide outward, realizing a divergent sliding scan.

Step 3: use the bio-inspired pool-cell model to retrieve the compressed images of the large-scene and small-scene channels, convert the compressed images through a threshold filter into a continuous pulse sequence carrying the sensitivity features of the water surface moving target, and detect target motion in the water surface scene according to these sensitivity features.

Further, in Step 1, the method of computing the polarization images with the polarization image registration technique and the Stokes model comprises: using a feature point matching algorithm based on SURF corner points to register, at the pixel level, the three-channel polarization images of the water surface scene captured at the same instant, and then using the Stokes equations to compute the degree-of-polarization information corresponding to each pixel in the image, obtaining one frame of degree-of-polarization image.

Further, in Step 3, the method of detecting target motion in the water surface scene from the sensitivity features of the continuous pulse sequence comprises: analyzing the responses and response timing of the continuous pulse sequence through its sensitivity features; and matching the target pulse sequences of multiple frames of degree-of-polarization images, i.e., merging the pulse sequence produced by the current frame with the pulse sequence of the next frame, so that, among the pulse-sequence features of different frames, responses with the same timing correspond to the same target; matching the sensitivity features of the water surface moving target across the continuous pulse sequence in this way completes its motion detection.

Compared with the prior art, the above technical solution of the present invention has the following advantages.

(1) Strong anti-interference capability. With polarization imaging in specific spectral bands and polarization image fusion, the method can effectively suppress the complex water surface, underwater and atmospheric optical noise in the scene and enhance the brightness contrast between target and background, without background modeling and without any prior knowledge, thereby improving the accuracy of motion vector estimation for water flow tracers. In addition, the polarization information helps extract invariant optical features of the water surface scene and suppresses dynamic changes in the scene, such as illumination changes. For residual noise, its particular impulse-response characteristics are exploited in the image processing stage to distinguish noise from useful information, providing further noise filtering.

(2) High target motion detection accuracy. The target detection method adopts a bionic technique of dragonfly compound-eye vision; the extracted pulse information highlights features such as texture and edges and is highly sensitive to targets. Compared with ordinary target detection methods, it is better suited to water surface scenes with multiple targets unevenly distributed in space and time and with stronger environmental disturbance. In motion detection, the multi-frame pulse sequence matches targets on the basis of pulse response timing; compared with approaches based on statistical decisions, the matching is more accurate and faster.

(3) Significantly lower algorithmic complexity. In the computation of the bionic water surface target motion detection method, complex, high-dimensional scene and target information is compressed into one-dimensional pulse sequences without losing key information. Detection and matching only require analysis of the patterns of the one-dimensional pulse sequence, such as amplitude, frequency and timing. Compared with general detection-and-tracking methods that process high-dimensional, complex information, the complexity is significantly reduced and the computational efficiency is markedly improved, satisfying the system's real-time requirements.
Brief Description of the Drawings

To make the content of the present invention easier to understand, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings, in which:

FIG. 1 is a flow chart of the target motion detection method according to an embodiment of the present invention;

FIG. 2 is a flow chart of the dragonfly-compound-eye-inspired visual target motion detection algorithm in an embodiment of the present invention;

FIG. 3 is a diagram of the SURF corner feature descriptor generation process in an embodiment of the present invention;

FIG. 4 is a schematic diagram of the principle of the nearest neighbor algorithm in an embodiment of the present invention;

FIG. 5 is a schematic diagram of the sliding of the virtual dragonfly ommatidium group of the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and embodiments.

Embodiment
As shown in FIG. 1 and FIG. 2, in a target motion detection method for water surface polarization imaging based on compound eye simulation, three sets of polarized water surface imaging systems realize polarization optical imaging of three channels in specific spectral bands, with a DSP chip dedicated to image processing as the processor; the central processing unit (host computer) is a general-purpose PC or a ruggedized terminal device. Following the information acquisition and processing flow, the workflow of the polarization imaging system is divided into polarization image acquisition, polarization image registration and polarization image fusion. For polarization image acquisition in a specific spectral band, a filter can be mounted in front of the lens or in front of the photosensitive element (as determined by the imaging-equipment requirements and the manufacturing process). The three polarized water surface imaging systems use three CMOS image sensors fitted with 0°, 45° and 90° polarizers and spectral filters, enabling infrared polarization imaging and enhancing the optical contrast between the water flow tracer and the water surface background.

After the polarization images are acquired, the three simultaneous images are fed into the DSP chip through three information channels. An image registration program is preloaded on the DSP chip; the present invention uses an improved SURF algorithm for image registration.
Applying the SURF algorithm to image registration can be divided into the following steps. First, compute the gradients fx and fy of each pixel in the horizontal and vertical directions of the image, as well as their products, to obtain the Hessian matrix H:

H = | fx²     fx·fy |
    | fx·fy   fy²   |

(1) Apply Gaussian filtering to the image to obtain the filtered H; the discrete two-dimensional zero-mean Gaussian function is

H_gauss = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)) * H

where (x, y) are the coordinates of an image pixel and σ is the variance coefficient of the Gaussian function.
(2) Compute the interest value R(i, j) of each pixel in the image; the interest value determines the corner points, which may also be called interest points:

R = [fx²·fy² − (fx·fy)²] − k·[fx² + fy²]²

where the modulation parameter k (describing the difference of the corner point in the x and y directions, i.e., how the point differs from its neighboring pixels) typically takes values in the range 0.0 to 0.06.

(3) Select local extremum points as feature points. In the SURF algorithm, a feature point is the pixel corresponding to a local maximum of the interest value.
(4) Select an appropriate threshold T (a typical choice in the present invention is T = 400 to 700) and retain a certain number of corner points (typically 150 to 200). Thresholding means that a point is kept when its interest value R exceeds the threshold and is otherwise discarded.
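As a concrete illustration of steps (1) through (4) above, the following sketch computes the gradient-product matrix, Gaussian-filters its entries, forms the interest value R and keeps thresholded local maxima. It assumes NumPy and SciPy are available; the function names and the values sigma = 1.5 and k = 0.04 are illustrative assumptions chosen within the ranges quoted above, and the sketch is not the patent's DSP implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def interest_values(img, sigma=1.5, k=0.04):
    """Corner interest value R for every pixel (steps (1)-(2) above)."""
    img = img.astype(np.float64)
    fy, fx = np.gradient(img)                      # vertical / horizontal gradients
    fx2 = gaussian_filter(fx * fx, sigma)          # Gaussian-filtered entries of H
    fy2 = gaussian_filter(fy * fy, sigma)
    fxy = gaussian_filter(fx * fy, sigma)
    return (fx2 * fy2 - fxy ** 2) - k * (fx2 + fy2) ** 2

def select_corners(R, threshold=400.0, max_corners=200):
    """Steps (3)-(4): keep local maxima of R above the threshold, strongest first."""
    local_max = (R == maximum_filter(R, size=3)) & (R > threshold)
    ys, xs = np.nonzero(local_max)
    order = np.argsort(R[ys, xs])[::-1][:max_corners]
    return list(zip(ys[order].tolist(), xs[order].tolist()))
```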
The specific steps include the following.

First, a direction is assigned to each extracted corner point: the direction of the pixel with the largest gradient within the neighborhood of the feature point is taken as the direction of the SURF corner point; here the neighborhood size is 3×3.

A coordinate system is then established for the feature point according to the constructed SURF feature-point direction.

(1) The image is filtered to generate a scale space, yielding images at different scales. A 16×16-pixel neighborhood of the SURF feature point is taken and divided into 4 identical sub-regions; the gradient direction is computed in each sub-region and quantized uniformly into 8 directions. (The choice of 16×16 is empirical; dividing into 4 regions of 8×8 pixels each is consistent with the discussion below.)

(2) These directional gradients are ordered to obtain a 128-dimensional feature vector, which is the SURF feature description vector.
Taking an 8×8-pixel feature-point neighborhood as an example, FIG. 3 illustrates the generation process of the SURF feature descriptor over the neighborhood of the feature-point pixel.
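The descriptor construction can be sketched as a gradient-orientation histogram. Because the figures quoted above (4 sub-regions versus a 128-dimensional vector) leave the exact layout ambiguous, the sketch assumes the common arrangement of a 16×16 neighborhood split into 4×4 sub-regions with 8 orientation bins each, which does yield 128 dimensions; it is offered only as an illustration, not as the patent's exact descriptor.

```python
import numpy as np

def orientation_descriptor(img, y, x, patch=16, cells=4, bins=8):
    """Gradient-orientation-histogram descriptor around (y, x).

    With patch=16, cells=4, bins=8 the result has cells*cells*bins = 128
    dimensions (an assumed layout; see the note above).
    """
    half = patch // 2
    window = img[y - half:y + half, x - half:x + half].astype(np.float64)
    gy, gx = np.gradient(window)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)        # orientation in [0, 2*pi)
    step = patch // cells
    hists = []
    for cy in range(cells):
        for cx in range(cells):
            m = mag[cy*step:(cy+1)*step, cx*step:(cx+1)*step].ravel()
            a = ang[cy*step:(cy+1)*step, cx*step:(cx+1)*step].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 2*np.pi), weights=m)
            hists.append(hist)
    desc = np.concatenate(hists)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```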
Through the above steps, the descriptors of the SURF corner points are obtained. Whether two points match is judged by computing the similarity between the feature-point description vectors of the left and right images. A nearest-neighbor search algorithm performs the matching of SURF corner points by exhaustive search. Here the nearest neighbor is the feature point with the shortest Euclidean distance to the sample feature point, and the second nearest neighbor is the feature point whose Euclidean distance is slightly longer than the nearest-neighbor distance. The ratio d of the Euclidean distances of the two feature points is used as the similarity measure; this ratio is also called the NNDR (Nearest Neighbor Distance Ratio), as shown in FIG. 4.

Point p is any point in space, q is its nearest neighbor, and r is its second nearest neighbor; the ratio of the nearest-neighbor and second-nearest-neighbor Euclidean distances is

d = D_nearest / D_next_nearest

where D_nearest is the nearest-neighbor Euclidean distance and D_next_nearest is the second-nearest-neighbor Euclidean distance.

The matching steps for the SURF feature points are:

(1) "Standard image" and "image to be matched" are free designations of the two images in the matching process: matching is an operation on two images, the standard image is whichever of the two is chosen, and SURF feature points are extracted from both the standard image and the image to be matched.

(2) The feature points (corner points) of the standard image are taken in turn; for each, its nearest neighbor and second nearest neighbor among the feature points of the other image are found, and the ratio of the two distances is computed.
(3) The ratio is compared with a preset threshold. If the ratio is smaller than the threshold, the feature point and the corresponding feature point in the image to be matched are regarded as the same (homologous) point and the match succeeds; otherwise the search continues.
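A minimal sketch of the nearest-neighbor distance-ratio matching just described is given below. The descriptor arrays and the ratio threshold of 0.7 are assumptions; the patent does not fix a numerical threshold.

```python
import numpy as np

def nndr_match(desc_std, desc_other, ratio_threshold=0.7):
    """Match standard-image descriptors against the image to be matched.

    For each descriptor, find its nearest and second-nearest neighbours by
    Euclidean distance (exhaustive search) and accept the pair only when
    d = D_nearest / D_second_nearest falls below the threshold.
    """
    desc_std = np.asarray(desc_std, dtype=np.float64)
    desc_other = np.asarray(desc_other, dtype=np.float64)
    matches = []
    for i, d in enumerate(desc_std):
        dists = np.linalg.norm(desc_other - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[second] > 0 and dists[nearest] / dists[second] < ratio_threshold:
            matches.append((i, int(nearest)))
    return matches
```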
The accurately matched polarization images are fed into the degree-of-polarization computation and fusion module, where the degree-of-polarization information is computed through the Stokes equations. According to the Stokes-based intensity model I(α) = a·sin[2(α − θ) + π/2] + b, the target grayscale images in the directions α = 0°, 45° and 90° allow the corresponding pixel parameters a and b to be solved, from which the degree of polarization P = a/b is obtained. Finally, the degree-of-polarization information is normalized and represented as a grayscale feature map, i.e., the degree-of-polarization image.
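The per-pixel degree of polarization just described can be illustrated with the standard Stokes relations for measurements at 0°, 45° and 90°. This is a commonly used equivalent of the sinusoidal fit above and is offered as a sketch rather than the patent's exact procedure; the small eps term is an assumption added to avoid division by zero.

```python
import numpy as np

def degree_of_polarization(i0, i45, i90, eps=1e-6):
    """Per-pixel degree of linear polarization from 0/45/90-degree images.

    Uses the standard Stokes parameters S0, S1, S2 and normalizes the result
    to [0, 1], matching the grayscale representation described in the text.
    """
    i0, i45, i90 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90))
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    return np.clip(dop, 0.0, 1.0)
```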
A virtual ommatidium group is constructed from five overlapping, bundled local windows (3×3 or 5×5). The group slides and scans to emulate how several ommatidia of the compound-eye structure sample the degree-of-polarization image with overlap, and reads out the degree-of-polarization information. In the degree-of-polarization image, the polarization strength is represented by the normalized pixel intensity. According to electrophysiological studies, the system function between the input and output of the ommatidium group can be described by a Gaussian function; the present invention disregards the dynamic variation of the response within the field of view and simplifies it to a step function, i.e.,

r(x) = E(I(x) − thre)

where x is a pixel position in the image, I(x) is the intensity at that point, thre is the response threshold, and E(·) is the step function. With this description, the input, output and system response function of the ommatidium system can be expressed.

At this point, the degree-of-polarization image information has been converted into single pulse-discharge features. Next, the bio-inspired "large scene (LF)" and "small scene (SF)" systems are used for compressive sensing and representation of the scene, and the optimized "scheduling" model of the bio-inspired "pool cells" is used to highlight features such as texture and edges, forming pulse-sequence features that are sensitive to moving targets. Two visual mechanisms, namely dragonfly visual shunting with nonlinear adaptive suppression and the pool-cell scheduling process, are engineered and simulated in turn.

The specific method includes the following.
FIG. 5 shows the divergent sliding scan; the movement of the convergent sliding scan is opposite to that shown in FIG. 5. Concretely, a virtual dragonfly ommatidium group is constructed from five overlapping, bundled local image windows (window size 3×3 or 5×5), corresponding to changes of the window indices I and J; through this virtual ommatidium group, sliding scanning emulates several ommatidia of the compound-eye structure sampling the fused degree-of-polarization image with overlap and reading out the degree-of-polarization information.
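A minimal sketch of the overlapping-window sampling follows: one 3×3 window sits at the scan center and four sit above, below, left and right of it; increasing the offset between scan steps mimics the divergent sliding scan, decreasing it the convergent one. The offset schedule is an assumption, since the patent does not specify the window trajectory numerically.

```python
import numpy as np

def ommatidium_samples(dop, center, offset, win=3):
    """Mean degree of polarization read by five overlapping local windows.

    One window is centred at `center`; the other four are displaced by
    `offset` pixels up/down/left/right, so neighbouring windows overlap for
    small offsets.
    """
    h = win // 2
    cy, cx = center
    positions = [(cy, cx), (cy - offset, cx), (cy + offset, cx),
                 (cy, cx - offset), (cy, cx + offset)]
    samples = []
    for y, x in positions:
        y0, y1 = max(y - h, 0), min(y + h + 1, dop.shape[0])
        x0, x1 = max(x - h, 0), min(x + h + 1, dop.shape[1])
        samples.append(float(dop[y0:y1, x0:x1].mean()))
    return samples

# Divergent scan: sample with offset = 1, 2, 3, ...; the convergent scan reverses the order.
```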
Next, the large-scene (LF) and small-scene (SF) visual perception systems of the bio-inspired dragonfly vision are used for compressive sensing and representation of the water surface scene, forming the large-scene (LF) and small-scene (SF) channels.

The insect compound-eye system contains two parallel information integration systems, the large-scene integration model and the small-scene integration model, which integrate the primary motion-perception signals obtained by the retinal cells in different ways. The large-scene integration model responds mainly to slowly varying background features in the scene, whereas the small-scene integration model responds to fast-moving targets in the scene and estimates their direction of motion. In the present invention, the large scene suppresses complex background features, while the small scene is extremely sensitive to target features.
First, the correlation between the (i, j)-th pixel and the pixels at neighboring positions is computed, i.e., the gray-level correlation coefficients V_u(i, j), V_d(i, j), V_l(i, j) and V_r(i, j) between the (i, j)-th pixel and its four neighbors in the up, down, left and right directions:

V_u(i, j) = I(i, j) · I(i+1, j)
V_d(i, j) = I(i, j) · I(i−1, j)
V_l(i, j) = I(i, j) · I(i, j−1)
V_r(i, j) = I(i, j) · I(i, j+1)          (1)

The correlation coefficients of the other directions are obtained in the same way. The correlation outputs of corresponding pairs of directions are then integrated according to the large-scene integration formulas (2) and (3), which are given in the original document only as images. Formula (1) computes, for a local region of size M×N, the integration result of the features in the horizontal direction; formula (2) corresponds to the integration result in the vertical direction, with a shunting-inhibition coefficient. The denominators of formulas (2) and (3) are the sum of the features within the local region, where n = 3 simulates the saturating nonlinearity of the cells, enhancing weakly correlated signals and suppressing strongly correlated ones, and q = 5 simulates the nonlinear filtering function of the cells. The large-scene integration is a regularization of the features within the local region, removing the suppression of features by the background. Since n = 3 and −1 < V_D < 1 (D = l, r, u, d), |V_D|ⁿ < |V_D|, so the third-order nonlinearity suppresses the small-amplitude background features caused by noise; when V_D(i, j) > 1, |V_D|ⁿ > V_D, meaning that signals whose gray-level features have abnormally large amplitude are enhanced by the large-scene integration (enhanced because the gray-level variation is large).
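The directional correlations of formula (1) can be sketched directly. Because the integration formulas (2) and (3) survive only as images, the normalization below (raising each correlation to the power n = 3 and dividing by the sum of correlations over an M×N block) is an assumption that merely illustrates the described behaviour of suppressing small-amplitude correlations and enhancing large ones; it is not the patent's exact expression.

```python
import numpy as np

def directional_correlations(I):
    """Gray-level correlations of each pixel with its four neighbours (formula (1))."""
    I = I.astype(np.float64)
    V_u = I * np.roll(I, -1, axis=0)   # I(i, j) * I(i+1, j)
    V_d = I * np.roll(I,  1, axis=0)   # I(i, j) * I(i-1, j)
    V_l = I * np.roll(I,  1, axis=1)   # I(i, j) * I(i, j-1)
    V_r = I * np.roll(I, -1, axis=1)   # I(i, j) * I(i, j+1)
    return V_u, V_d, V_l, V_r

def large_scene_integration(V_a, V_b, block=(8, 8), n=3, eps=1e-6):
    """Assumed stand-in for formulas (2)/(3): blockwise shunting normalization.

    The paired directional correlations are summed, raised to the power n = 3
    (the saturating nonlinearity) and divided by the sum of correlations in
    each M x N local block.
    """
    V = V_a + V_b
    M, N = block
    H, W = V.shape
    out = np.zeros_like(V)
    for r in range(0, H, M):
        for c in range(0, W, N):
            blk = V[r:r + M, c:c + N]
            out[r:r + M, c:c + N] = blk ** n / (np.abs(blk).sum() + eps)
    return out
```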
Similarly, the small-scene integration model formula is

Σ [ A·V⁺(i, j) − B·V⁻(i, j) ]          (4)

(the complete expression appears in the original document only as an image). The integration model of the small scene realizes the enhancement of the target features. Here A and B are simulated nerve-fiber integration parameters, and V⁺(t) and V⁻(t) are the positive and negative channels of the bionic target detector, which distinguish brightness-increase signals of different polarities. Taking the horizontal direction as an example, V⁺(t) represents a brightness increase from left to right and V⁻(t) a brightness increase from right to left. (i, j′) and (i, j) denote the simulated responses of the left and right halves of the brain, respectively.
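Only the fragment Σ[A·V⁺(i, j) − B·V⁻(i, j)] of formula (4) is legible, so the sketch below simply integrates the weighted difference of the two channel responses over a region; the channel arrays are taken as given and the gains A and B are placeholders.

```python
import numpy as np

def small_scene_integration(V_plus, V_minus, A=1.0, B=1.0):
    """Integrate A*V+ - B*V- over a local region (legible fragment of formula (4)).

    V_plus and V_minus are the positive and negative (brightness-increase)
    channel responses of the bionic detector; A and B are placeholder
    nerve-fibre integration gains.
    """
    V_plus = np.asarray(V_plus, dtype=np.float64)
    V_minus = np.asarray(V_minus, dtype=np.float64)
    return float(np.sum(A * V_plus - B * V_minus))
```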
The bio-inspired pool-cell model schedules the LF and SF channels: the degree-of-polarization information obtained by the virtual ommatidium group is converted, through simulated threshold filtering, into a pulse sequence, forming the sensitivity features of the water surface moving targets, which serve as the basis for detecting the motion of the different targets in the water surface scene.

The information acquired by the dragonfly ommatidia is shunted, according to the topology of the compound eye, into the LF and SF neurons. The topology of the neurons is identical to that of the ommatidia, so the spatial information corresponding to the ommatidium inputs is fully preserved. The LF neurons are sensitive to targets over a large range, whereas the SF neurons are sensitive to feature changes within a small local range; a local center-surround inhibition mechanism makes the detectable target size dynamically adjustable, and an adaptive mechanism of fast polarization and slow depolarization effectively suppresses the background texture features of local regions. Abrupt signals that occur rarely and vary strongly are enhanced, whereas texture signals that occur frequently and vary little are adaptively suppressed. Taking the horizontal direction as an example (the vertical direction is analogous), for an input signal Qn(i, j) the nonlinear adaptive mechanism is modeled as

d/dt { on(i, j) } = ( ōn(i, j) − on(i, j) ) / ζ

where on(i, j) is the ON-channel feature strength of the (i, j)-th pixel in the center-surround inhibition mechanism, the surround spacing is the Euclidean distance between the on(i, j) and ōn(i, j) pixels, and ζ is the response decay (enhancement) coefficient: if the signal strength at position (i, j) is lower than that of its surrounding region, ζ = 10, giving a slow decay of contrast; otherwise ζ = 1, giving a fast increase of contrast.
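A sketch of the adaptive suppression just described follows: the ON-channel strength relaxes toward the strength of its surroundings, with the slow coefficient ζ = 10 when the pixel is weaker than its surround and the fast coefficient ζ = 1 otherwise. The 3×3 neighbourhood mean as the surround estimate and the explicit Euler update are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adapt_on_channel(on, steps=5, slow=10.0, fast=1.0, size=3):
    """Nonlinear adaptive suppression of the ON channel.

    Integrates d/dt on(i, j) = (on_bar(i, j) - on(i, j)) / zeta with a simple
    explicit Euler step, where on_bar is the local surround strength and
    zeta = 10 when on < on_bar (slow change), zeta = 1 otherwise (fast change).
    """
    on = on.astype(np.float64).copy()
    for _ in range(steps):
        on_bar = uniform_filter(on, size=size)
        zeta = np.where(on < on_bar, slow, fast)
        on += (on_bar - on) / zeta
    return on
```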
The response information of the LF and SF channels is fed into the bionic pool-cell model, and their interaction is realized under its scheduling. For the LF input and feedback mechanism:

U_ij[n] = e^(−α) · U_ij[n−1] + V_F · Σ_(k,l) w_(i,j,k,l) · Y_(k,l)[n−1] + S_ij

and for the SF input and feedback mechanism:

L_ij[n] = e^(−α) · L_ij[n−1] + V_L · Σ_(k,l) w_(i,j,k,l) · Y_(k,l)[n−1]

(the complete expressions, together with the final integrated pulse output, appear in the original document only as images). Here U_ij[n] is the input of LF neuron (i, j) at time step n, corresponding to the response excited by feeding the ommatidium output Qn(i, j) to the LF channel at that instant; L_ij[n] is the corresponding input of SF neuron (i, j); Y_(k,l)[n−1] denotes the pulse outputs of the neighboring neurons at the previous time step (the feedback term); S_ij is the ommatidium input; α is the decay coefficient; w is the integration weight; and V_F and V_L are the gains of the inputs. The final integrated pulse output combines the LF and SF responses, where τ is the connection strength between the LF neurons and the SF neurons under pool-cell scheduling.
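Because the complete feeding/linking and output expressions survive only as images, the following sketch uses a generic pulse-coupled-neuron update of the same general form as the legible fragments: exponential decay e^(−α), weighted neighbour feedback, gains V_F and V_L, a coupling strength τ, and a threshold filter that converts activity into binary pulses. It is an assumed, simplified stand-in rather than the patent's exact pool-cell model, and all numerical parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pool_cell_pulses(S, steps=10, alpha=0.5, V_F=0.3, V_L=0.3,
                     tau=0.2, theta0=1.0, V_T=5.0, alpha_T=0.2):
    """Pulse sequence generated from the ommatidium input S (2-D array).

    Generic PCNN-style update assumed from the legible fragments:
      F <- exp(-alpha)*F + V_F*W(Y) + S     (LF-like feeding input)
      L <- exp(-alpha)*L + V_L*W(Y)         (SF-like linking input)
      U  = F * (1 + tau*L)                  (pool-cell coupling, strength tau)
      Y  = (U > theta)                      (threshold filter -> pulse)
    with a threshold theta that rises after each firing. Returns one binary
    pulse map per time step.
    """
    S = S.astype(np.float64)
    F = np.zeros_like(S)
    L = np.zeros_like(S)
    Y = np.zeros_like(S)
    theta = np.full_like(S, theta0)
    pulses = []
    for _ in range(steps):
        W = uniform_filter(Y, size=3)                # weighted neighbour feedback
        F = np.exp(-alpha) * F + V_F * W + S
        L = np.exp(-alpha) * L + V_L * W
        U = F * (1.0 + tau * L)
        Y = (U > theta).astype(np.float64)
        theta = np.exp(-alpha_T) * theta + V_T * Y   # refractory threshold rise
        pulses.append(Y.copy())
    return pulses
```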
At this point, the degree-of-polarization image information has been converted into pulse-sequence features, in which the pulse-firing region represents the target region:

O = arg( Y_ij[n] = 1 )

where O is the target region and arg is the inverse operator that returns the indices for which the condition holds. The timing of the pulses distinguishes the categories of the targets, enabling detection and classification of multiple targets:

C = arg_n( Y_ij[n] = 1 )

where C is the target category, arg is the inverse operator, and n is the timing of the pulse response.
Finally, based on this pulse sequence and imitating the information-processing mode of the central medulla neurons, detection targets corresponding to the same timing in the pulse sequences of different frames are regarded as the same target, so that target matching is achieved in a simple way. The motion vector is then estimated from the pixel difference between the matched targets in consecutive frames t and t−1, realizing detection of the target motion:

MV = X_t − X_(t−1)   (for each target category C)

where MV is the motion vector of the target, X_t is the position of the target at time t, X_(t−1) is the position of the target at time t−1, and C is the target category, i.e., the kind of target.
This completes one round of motion detection for a water surface target.

Obviously, the above embodiment is merely an example given to illustrate the present invention clearly and is not a limitation of its implementations. On the basis of the above description, those of ordinary skill in the art can make changes or modifications of other different forms; it is neither necessary nor possible to enumerate all implementations here. Obvious changes or modifications derived from the spirit of the present invention remain within the scope of protection of the present invention.

Claims

1、 一种基于复眼仿真的水面偏振成像的目标运动检测方法, 其特征在于包 括- 步骤一: 通过三组偏振水面成像系统获得水面场景的三通路偏振图像, 对该 三通路偏振图像进行配准, 并通过 Stokes模型对该三通路偏振图像进行计算, 以得到水面目标与水面背景之间的光学信息对比融合的偏振度图像; 1. A target motion detection method for water surface polarization imaging based on compound eye simulation, which is characterized by including - Step 1: Obtain three-channel polarization images of the water surface scene through three sets of polarization water surface imaging systems, and register the three-channel polarization images , and calculate the three-channel polarization image through the Stokes model to obtain a polarization image that contrasts and fuses the optical information between the water surface target and the water surface background;
步骤二: 通过计算机模拟蜻蜓复眼视觉对偏振度图像进行重叠采样, 以获取 所述偏振度图像的偏振度信息, 再利用仿蜻蜓视觉构建大场景、 小场景信道, 该 大场景、 小场景信道分别根据所述偏振度信息对偏振度图像中的水面背景和水面 运动目标进行图像压缩, 以获得该水面运动目标相对于水面背景的敏感性特征; 其中, 通过计算机模拟蜻蜓复眼视觉对偏振度图像进行重叠采样的方法包 括: 通过若干个局部图像窗口构建虚拟蜻蜓复眼小眼群, 所述虚拟蜻蜓复眼小眼 群适于通过滑动扫描方式模拟蜻蜓复眼结构中利用若干只小眼对偏振度图像进 行滑动扫描采样, 以获取图像的偏振度信息; 其中, 所述滑动扫描方式包括: 所 述虚拟蜻蜓复眼小眼群中分布于四周的各小眼分别向位于中心位置的小眼滑动 以实现汇聚式滑动扫描, 或已汇聚在中心位置的各小眼分别向四周滑动以实现散 开式滑动扫描; Step 2: Perform overlapping sampling of the polarization degree image through computer simulation of dragonfly compound eye vision to obtain the polarization degree information of the polarization degree image, and then use simulated dragonfly vision to construct large scene and small scene channels. The large scene and small scene channels are respectively Image compression is performed on the water surface background and the water surface moving target in the polarization degree image according to the polarization degree information to obtain the sensitivity characteristics of the water surface moving target relative to the water surface background; wherein, the polarization degree image is processed by computer simulation of dragonfly compound eye vision. The method of overlapping sampling includes: constructing a virtual dragonfly compound eye ommatidium group through several local image windows. The virtual dragonfly compound eye ommatidium group is suitable for simulating the dragonfly compound eye structure through sliding scanning, using several ommatidia to slide the polarization image. Scan and sample to obtain the polarization information of the image; wherein, the sliding scanning method includes: each ommatidium distributed in the virtual dragonfly compound eye ommatidium group slides toward the ommatidium at the center to achieve convergent sliding Scan, or each ommatidium that has been gathered at the center slides around to achieve diffuse sliding scanning;
Step 3: using a simulated pool-cell model to retrieve the compressed images of the large-scene and small-scene channels, converting the compressed images through a threshold filter into a continuous pulse sequence carrying the sensitivity features of the water surface moving target, and detecting the motion of the target in the water surface scene according to these sensitivity features (an illustrative sketch of this step follows the claims).
2. The target motion detection method according to claim 1, wherein the method of computing the polarization images in Step 1 using polarization-image registration and the Stokes model comprises: using a SURF-corner-based feature point matching algorithm to register, at pixel level, the three-channel polarization images of the water surface scene captured at the same moment, and then using the Stokes equations to compute the degree-of-polarization information corresponding to each pixel in the image, thereby obtaining one frame of degree-of-polarization image.
3. The target motion detection method according to claim 2, wherein the method of detecting target motion in the water surface scene from the sensitivity features of the continuous pulse sequence in Step 3 comprises: analysing the response and the response timing of the continuous pulse sequence through its sensitivity features; and matching the target pulse sequences of successive frames of degree-of-polarization images, i.e. merging the pulse sequence produced by the current frame with the pulse sequence of the next frame, where, among the pulse-sequence features of different frames, the response timing of the same pulse corresponds to the same target; this matches the sensitivity features of the water surface moving target across the continuous pulse sequences and thereby completes the motion detection of the water surface moving target (the pulse generation and matching are also sketched after the claims).
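The following is a minimal sketch of the per-pixel Stokes / degree-of-polarization computation referred to in Step 1 and claim 2. It assumes the three channels are already registered (e.g. by the SURF-based matching of claim 2) and that the three imaging systems carry linear analyzers at 0°, 60° and 120°; the analyzer angles, the function name and the `eps` guard are assumptions, since they are not fixed here.

```python
import numpy as np

def degree_of_polarization(i0, i60, i120, eps=1e-6):
    """Per-pixel degree of linear polarization from three registered channels.

    i0, i60, i120 : float arrays of identical shape, intensities measured behind
    linear analyzers at 0, 60 and 120 degrees (assumed angles).
    """
    i0, i60, i120 = (np.asarray(a, dtype=np.float64) for a in (i0, i60, i120))
    # Stokes parameters from I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)
    s1 = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s2 = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, eps)  # guard against dark pixels
    return np.clip(dop, 0.0, 1.0)
```

One frame of the degree-of-polarization image is then obtained by applying this function to the three registered channel images captured at the same instant.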
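Next, a sketch of the convergent sliding scan of Step 2: a ring of local image windows (virtual ommatidia) slides toward the image centre while repeatedly sampling the degree-of-polarization image. The window size, the number of ommatidia, the ring radius and the scan schedule are illustrative assumptions only.

```python
import numpy as np

def convergent_scan(dop_image, win=16, steps=4):
    """Overlapping sampling by a ring of ommatidium windows sliding to the centre.

    Returns, for each scan step, the mean degree of polarization seen by every
    window -- a crude stand-in for the per-ommatidium polarization response.
    """
    h, w = dop_image.shape
    centre = np.array([h / 2.0, w / 2.0])
    # Eight peripheral ommatidia placed on a ring around the image centre.
    angles = np.deg2rad(np.arange(0, 360, 45))
    radius = min(h, w) / 3.0
    starts = centre + radius * np.stack([np.sin(angles), np.cos(angles)], axis=1)

    responses = []
    for s in range(steps + 1):
        frac = s / steps                      # 0 -> periphery, 1 -> fully converged
        positions = starts + frac * (centre - starts)
        step_resp = []
        for cy, cx in positions:
            y0 = int(np.clip(cy - win // 2, 0, h - win))
            x0 = int(np.clip(cx - win // 2, 0, w - win))
            step_resp.append(float(dop_image[y0:y0 + win, x0:x0 + win].mean()))
        responses.append(step_resp)
    return responses
```

The divergent scan described in the same step would run the identical schedule with `frac` going from 1 back to 0, so that the windows spread outward from the centre.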
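Finally, a sketch of the threshold filtering and cross-frame pulse matching of Step 3 and claim 3: the compressed channel response is turned into a binary pulse train, and pulses of consecutive frames are paired by response timing. The threshold value, the timing tolerance and the greedy matching rule are assumptions, not parameters taken from the claims.

```python
import numpy as np

def to_pulse_train(channel_response, threshold=0.3):
    """Threshold filter: emit a pulse (1) wherever the compressed channel
    response exceeds the threshold, otherwise stay silent (0)."""
    return (np.asarray(channel_response, dtype=float) > threshold).astype(np.uint8)

def match_pulses(train_t, train_t1, max_shift=2):
    """Pair pulses of consecutive frames whose response timing differs by at
    most max_shift samples; matched pairs are treated as the same moving target."""
    idx_t, idx_t1 = np.flatnonzero(train_t), np.flatnonzero(train_t1)
    matches, used = [], set()
    for i in idx_t:
        candidates = [j for j in idx_t1
                      if abs(int(j) - int(i)) <= max_shift and int(j) not in used]
        if candidates:
            j = min(candidates, key=lambda j: abs(int(j) - int(i)))
            used.add(int(j))
            matches.append((int(i), int(j)))
    return matches

# Hypothetical responses of the small-scene channel in two consecutive frames.
r_t, r_t1 = [0.1, 0.6, 0.2, 0.1, 0.7], [0.1, 0.1, 0.65, 0.1, 0.8]
print(match_pulses(to_pulse_train(r_t), to_pulse_train(r_t1)))
# [(1, 2), (4, 4)] -- each pair is read as one target seen in frames t and t+1.
```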
PCT/CN2014/076146 2014-04-24 2014-04-24 Target motion detection method for water surface polarization imaging based on compound eyes simulation WO2015161490A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/076146 WO2015161490A1 (en) 2014-04-24 2014-04-24 Target motion detection method for water surface polarization imaging based on compound eyes simulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/076146 WO2015161490A1 (en) 2014-04-24 2014-04-24 Target motion detection method for water surface polarization imaging based on compound eyes simulation

Publications (1)

Publication Number Publication Date
WO2015161490A1 true WO2015161490A1 (en) 2015-10-29

Family

ID=54331625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/076146 WO2015161490A1 (en) 2014-04-24 2014-04-24 Target motion detection method for water surface polarization imaging based on compound eyes simulation

Country Status (1)

Country Link
WO (1) WO2015161490A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07325924A (en) * 1994-06-02 1995-12-12 Canon Inc Compound eye image pickup device
JP2002010294A (en) * 2000-06-23 2002-01-11 Nippon Hoso Kyokai <Nhk> Stereoscopic image generating apparatus
US6987258B2 (en) * 2001-12-19 2006-01-17 Intel Corporation Integrated circuit-based compound eye image sensor using a light pipe bundle
WO2013012335A1 (en) * 2011-07-21 2013-01-24 Ziv Attar Imaging device for motion detection of objects in a scene, and method for motion detection of objects in a scene
CN103295221A (en) * 2013-01-31 2013-09-11 河海大学 Water surface target motion detecting method simulating compound eye visual mechanism and polarization imaging
CN103293523A (en) * 2013-06-17 2013-09-11 河海大学常州校区 Hyperspectral remote sensing small target detection method based on multiple aperture information processing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109557944A (en) * 2018-11-30 2019-04-02 南通大学 A kind of moving target position detection system and method
CN113834487A (en) * 2021-11-23 2021-12-24 北京航空航天大学 Light intensity harmonic interference estimation and compensation method for polarization sensor


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14890403

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14890403

Country of ref document: EP

Kind code of ref document: A1