CN116681732B - Target motion recognition method and system based on compound eye morphological vision - Google Patents

Target motion recognition method and system based on compound eye morphological vision Download PDF

Info

Publication number
CN116681732B
CN116681732B CN202310967035.1A CN202310967035A CN116681732B CN 116681732 B CN116681732 B CN 116681732B CN 202310967035 A CN202310967035 A CN 202310967035A CN 116681732 B CN116681732 B CN 116681732B
Authority
CN
China
Prior art keywords
sub
eye
target
image
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310967035.1A
Other languages
Chinese (zh)
Other versions
CN116681732A (en
Inventor
徐梦溪
樊棠怀
樊飞燕
施建强
吕莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Nanchang Institute of Technology
Original Assignee
Nanjing Institute of Technology
Nanchang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology, Nanchang Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202310967035.1A priority Critical patent/CN116681732B/en
Publication of CN116681732A publication Critical patent/CN116681732A/en
Application granted granted Critical
Publication of CN116681732B publication Critical patent/CN116681732B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a target motion recognition method and a target motion recognition system based on compound eye morphological vision, wherein the method comprises the steps of obtaining a sub-eye image; performing internal parameter calibration and external parameter calibration on the first sub-eye and the second sub-eye to obtain an internal parameter and an external parameter; calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image; calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image; the method and the device can efficiently and reliably detect the moving direction and the corresponding movement quantity of the target under the complex background condition without disorder and overlapping of the targets at the mutually overlapped part in the splicing process.

Description

Target motion recognition method and system based on compound eye morphological vision
Technical Field
The invention belongs to the technical field of bionic compound eye vision, and particularly relates to a target motion recognition method and system based on compound eye morphological vision.
Background
Inspired by fly insect compound eye structures such as natural fruit fly, dragonfly, bee and the like, a conventional CCD (Charge-coupled device) or CMOS (Complementary Metal-Oxide-Semiconductor) camera designed by a single-eye imaging principle different from a single-aperture form of a mammal is adopted, the existing artificial compound eye camera is an imitation compound eye imaging system designed by simulating the natural insect compound eye multi-aperture form imaging principle, and the artificial compound eye camera (hereinafter referred to as compound eye camera) is provided with a plurality of sub-eyes which are arranged in an array, so that the sub-eyes have large-angle vision, different depths of objects can be focused simultaneously, the human eyes and all cameras can focus light onto a photosensitive tissue or material by using a single lens, the arrangement can manufacture high-resolution images, but the compound eyes can provide different advantages, can generate panoramic viewing angles, and present remarkable depth sense organs.
Each sub-eye of the compound eye camera has a certain visual range, and a certain overlapping range exists between images shot by adjacent sub-eyes due to the arrangement mode of the sub-eyes, for example, a direct splicing mode is adopted, so that the condition that the finally formed integral image is disordered and the target is overlapped is easily caused, the imaging effect of the final integral image is influenced, and in the prior art, the movement of the target is identified by a common template matching method, but the problems of poor reliability and low efficiency exist.
Disclosure of Invention
In order to solve the technical problems, the invention provides a target motion recognition method and system based on compound eye morphological vision, which are used for solving the technical problems in the prior art.
In one aspect, the invention provides the following technical scheme, namely a target motion recognition method based on compound eye morphological vision, which comprises the following steps:
acquiring sub-eye images of continuous frames shot by sub-eyes of a compound eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye;
performing internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and performing external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye;
calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image;
Calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image;
and carrying out differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the inner parameter and the outer parameter.
Compared with the prior art, the application has the beneficial effects that: firstly, acquiring sub-eye images of continuous frames shot by sub-eyes of a compound eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye; performing internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and performing external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye; then calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image; then calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image; and finally, carrying out differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the inner parameter and the outer parameter.
Preferably, the step of calibrating the internal parameters of the first sub-eye and the second sub-eye to obtain the internal parameters of the first sub-eye and the second sub-eye includes:
setting a calibration point in a calibration coordinate systemThe projection coordinates of the standard point projected in the image pixel coordinate system are +.>Based on the followingEstablishing a projection equation with the projection coordinates by the calibration points:
in the method, in the process of the invention,for a first scale factor, +.>Is an internal parameter matrix->Is an outer parameter matrix;
determining an internal parameter matrix based on the projection equation
;/>
In the method, in the process of the invention,、/>edge +.>、/>Equivalent focal length of axis, +.>For the second scale factor, +.>Is the intersection of the optical axis and the pixel plane, +.>For focal length replacement amount, +.>Is the longitudinal projection variation;
based on the internal parameter matrixTo obtain internal parameters of the first sub-eye and the second sub-eye:
;/>;/>
;/>
in the method, in the process of the invention,is an internal reference matrix->Elements of the first row and the first column of +.>Is an internal reference matrix->Elements of the second column of the first row, +.>Is an internal reference matrix->Elements of the third column of the first row, +.>Is an internal reference matrix->Elements of the second row and the second column of the second row, < >>Is an internal reference matrix->Elements of the second row and the third column, < >>Is an internal reference matrix- >Elements of a third row and a third column.
Preferably, the step of setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and calibrating the external parameters of the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain the external parameters of the first sub-eye and the second sub-eye includes:
setting a first target in the visual range of the first sub-eye and acquiring a first target image, and setting a second target in the visual range of the second sub-eye and acquiring a second target image;
determining a target conversion matrix between the second target to the first target based on the first target image and the second target image
In the method, in the process of the invention,rotate matrix for spatial coordinate system to first target image,>rotating the matrix for the spatial coordinate system to the second target image,>translation matrix for spatial coordinate system to second target image, < > for>Translating the matrix to a first target image for a spatial coordinate system;
based on the target transformation matrixDetermining an extrinsic parameter matrix of the second sub-eye to the first sub-eyeBased on the extrinsic parameter matrix->To obtain the external parameters of the first sub-eye and the second sub-eye, wherein the external parameter matrix +. >The method comprises the following steps:
in the method, in the process of the invention,for the first target to first sub-eye conversion matrix,>is the second oneAnd (3) a conversion matrix of the target to the second sub-eye.
Preferably, the step of calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image includes:
determining a point in the sub-eye imageDetermining a point based on the inner parameter and the outer parameterCorresponding theoretical coordinate point->Spatial coordinate point->Based on dot->Theoretical coordinate Point->Spatial coordinate point->Establishing a correction equation:
in the method, in the process of the invention,is the intersection of the optical axis and the pixel plane, +.>For the first correction factor, +.>Is the second correction coefficient;
solving a first correction coefficient in the correction equationSecond correction coefficient->And based on said first correction factor +.>Said second correction factor->And correcting the pixel point coordinates in the sub-eye image to obtain a corrected image.
Preferably, the step of calculating the overlapping area between the corrected images and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image includes:
Calculating an overlap region between the corrected images based on the internal parameters
In the method, in the process of the invention,for the distance of the image sensor to the target, < >>For the distance between adjacent sub-eyes +.>Distance from compound eye camera to target, +.>To correct the length of the image +.>For the sub-eye diameter, < >>Is the sub-eye focal length;
carrying out Gaussian blur processing on the corrected image to obtain an image space, carrying out sampling processing on the image space for multiple times to obtain Gaussian pyramids, and carrying out differential processing on adjacent coatings of each group of Gaussian pyramids to obtain differential pyramids;
selecting first pixel points in the overlapping area based on the differential pyramid, selecting a preset number of second pixel points around the first pixel points, storing the first pixel points and the second pixel points into a feature point set to be selected, and taking extreme points in the feature point set to be selected as feature points;
and determining a feature code segment and a matching feature point based on the feature point, and performing splicing processing on the corrected image according to the feature code segment and the matching feature point to obtain an overall image.
Preferably, the step of determining a feature code segment and a matching feature point based on the feature point, and performing a stitching process on the corrected image according to the feature code segment and the matching feature point to obtain an overall image includes:
Selecting a neighborhood range of a preset shape by taking the characteristic point as the center, selecting a plurality of pixel point pairs in the neighborhood range, and performing assignment processing based on the pixel value between each pixel point pair to obtain a characteristic code segment;
calculating the Hamming distance between the pixel points in the feature point set to be selected and the pixel point pair, selecting two corresponding pixel points with the minimum Hamming distance as a first matched pixel point and a second matched pixel point, calculating the distance ratio of the Hamming distance between the first matched pixel point and the pixel point pair to the Hamming distance between the second matched pixel point and the pixel point pair, and taking the pixel point with the distance ratio smaller than the threshold value of the preset ratio as the matched feature point;
and splicing the corrected image based on the feature code segment and the matched feature points to obtain an overall image.
Preferably, the step of calculating the difference between the whole images of two adjacent frames to obtain a moving object, calculating an offset analog value of the moving object in each direction, and determining the moving direction of the moving object based on the offset analog value includes:
performing differential calculation on the integral images of two adjacent frames to obtain differential images, and determining a moving target based on the differential images;
Performing image decomposition on the differential image by adopting two-dimensional wavelet transformation to obtain a first frequency subgraph and a second frequency subgraph;
performing mutual suppression processing and smoothing filtering processing on the second frequency subgraph to obtain a first signal and a second signal, and performing half-wave rectification processing on the first signal and the second signal to obtain a first processing signal and a second processing signal;
outputting the first processing signal and the second processing signal into a switching value channel to output an offset analog value group, wherein the offset analog value group comprises a first offset analog value, a second offset analog value and a third offset analog value, and judging whether the first offset analog value, the second offset analog value and the third offset analog value are smaller than a preset value or not;
if the first deviation analog value is smaller than a preset value, the moving direction of the moving object moves leftwards, if the first deviation analog value is larger than the preset value, the moving direction of the moving object moves rightwards, if the second deviation analog value is smaller than the preset value, the moving direction of the moving object moves downwards, if the second deviation analog value is larger than the preset value, the moving direction of the moving object moves upwards, if the third deviation analog value is smaller than the preset value, the moving direction of the moving object moves backwards, and if the third deviation analog value is larger than the preset value, the moving direction of the moving object moves forwards.
In a second aspect, the present invention provides a target motion recognition system based on compound eye morphological vision, the system comprising:
the device comprises an acquisition module, a first sub-eye detection module and a second sub-eye detection module, wherein the acquisition module is used for acquiring sub-eye images of continuous frames shot by sub-eyes of a compound-eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye;
the calibration module is used for calibrating internal parameters of the first sub-eye and the second sub-eye to obtain the internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual range of the first sub-eye and the second sub-eye and obtaining corresponding target images, and calibrating external parameters of the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain the external parameters of the first sub-eye and the second sub-eye;
the correction module is used for calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image;
the splicing module is used for calculating an overlapping area between the corrected images, and carrying out splicing processing on the corrected images based on the overlapping area and a preset splicing algorithm so as to obtain an overall image;
The motion recognition module is used for carrying out differential calculation on the whole images of two adjacent frames to obtain a moving object, calculating an offset simulation value of the moving object in each direction, determining the moving direction of the moving object based on the offset simulation value, calculating the offset of the moving object, and determining the moving distance of the moving object in the corresponding moving direction based on the offset, the inner parameter and the outer parameter.
In a third aspect, the present invention provides a computer, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for identifying a target motion based on compound eye morphological vision as described above when executing the computer program.
In a fourth aspect, the present invention provides a storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the method for identifying a target motion based on compound eye morphological vision as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a target motion recognition method based on compound eye morphological vision according to a first embodiment of the present invention;
fig. 2 is a detailed flowchart of step S21 in the method for identifying target movement based on compound eye morphology vision according to the first embodiment of the present invention;
fig. 3 is a detailed flowchart of step S22 in the method for identifying target movement based on compound eye morphology vision according to the first embodiment of the present invention;
fig. 4 is a detailed flowchart of step S3 in the method for identifying target movement based on compound eye morphology vision according to the first embodiment of the present invention;
fig. 5 is a detailed flowchart of step S4 in the method for identifying target movement based on compound eye morphology vision according to the first embodiment of the present invention;
fig. 6 is a detailed flowchart of step S44 in the method for recognizing target movement based on compound eye morphology vision according to the first embodiment of the present invention;
fig. 7 is a detailed flowchart of step S5 in the method for identifying target movement based on compound eye morphology vision according to the first embodiment of the present invention;
FIG. 8 is a block diagram of a target motion recognition system based on compound eye morphology vision according to a second embodiment of the present invention;
Fig. 9 is a schematic hardware structure of a computer according to another embodiment of the invention.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended to illustrate embodiments of the invention and should not be construed as limiting the invention.
In the description of the embodiments of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate description of the embodiments of the present invention and simplify description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present invention, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
In the embodiments of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured" and the like are to be construed broadly and include, for example, either permanently connected, removably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the embodiments of the present invention will be understood by those of ordinary skill in the art according to specific circumstances.
Example 1
In a first embodiment of the present invention, as shown in fig. 1, a method for identifying target movement based on compound eye morphological vision, includes:
S1, acquiring sub-eye images of continuous frames shot by sub-eyes of a compound eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye;
specifically, in this step, the compound eye camera is composed of a plurality of sub eye cameras arranged in a whole row in a plane, so that the sub eye image is specifically an image shot by each sub eye camera in a corresponding shooting range, and for a subsequent internal and external parameter calibration process, in this step, one sub eye is selected as a first sub eye, and a sub eye at the periphery of the first sub eye is selected as a second sub eye, and it is noted that the second sub eye is specifically a circle of sub eyes around the periphery of the first sub eye, so that in the compound eye camera, except for the sub eye cameras at the edge, the rest sub eyes can be used as the first sub eyes.
S2, performing internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, setting targets in visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and performing external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye;
Specifically, in the conventional calibration process, the internal and external parameters of each two sub-eyes in the compound eye camera are usually required to be calibrated, but in the step, the internal and external parameters of each first sub-eye and each second sub-eye are calibrated, so that the workload of calibration is greatly reduced, the calibration efficiency is improved, meanwhile, the internal parameters specifically refer to the characteristics of the first sub-eye itself, such as the focal length, and the external parameters specifically refer to the relative position relationship, such as the rotation angle, the deviation distance and the like, between the calibration target and each sub-eye.
The step S2 includes: s21, calibrating internal parameters of the first sub-eye and the second sub-eye to obtain the internal parameters of the first sub-eye and the second sub-eye; s22, setting targets in the visual ranges of the first sub-eyes and the second sub-eyes and obtaining corresponding target images, and calibrating external parameters of the first sub-eyes and the second sub-eyes based on the target images and the internal parameters to obtain the external parameters of the first sub-eyes and the second sub-eyes.
As shown in fig. 2, the step S21 includes:
s211, setting a calibration point in the calibration coordinate systemThe projection coordinates of the standard point projected in the image pixel coordinate system are +. >Establishing a projection equation based on the calibration point and the projection coordinates:
in the method, in the process of the invention,for a first scale factor, +.>Is an internal parameter matrix->Is an outer parameter matrix;
specifically, the calibration process of the internal parameters can be determined through the projection relation between points on the sub-eye images projected onto the calibration coordinate system, wherein the first scale factor can be determined according to the size and position relation between the calibration plane on the calibration coordinate system and the sub-eye images.
S212, determining an internal parameter matrix based on the projection equation
;/>
In the method, in the process of the invention,、/>edge +.>、/>Equivalent focal length of axis, +.>For the second scale factor, +.>Is the intersection of the optical axis and the pixel plane, +.>For focal length replacement amount, +.>Is the longitudinal projection variation.
S213, based on the internal parameter matrixTo obtain internal parameters of the first sub-eye and the second sub-eye:
;/>;/>
;/>
in the method, in the process of the invention,is an internal reference matrix->Elements of the first row and the first column of +.>Is an internal reference matrix->Elements of the second column of the first row, +.>Is an internal reference matrix->Elements of the third column of the first row, +.>Is an internal reference matrix->Elements of the second row and the second column of the second row, < >>Is an internal reference matrix->Elements of the second row and the third column, < >>Is an internal reference matrix- >Elements of a third row and a third column;
specifically, as can be seen from the equations of the internal parameters in step S213, the above process has five position descriptions, so that in the actual calculation process, when there are more than three calibration coordinate systems and corresponding sub-eye images, the above equation set can be solved to obtain the internal parameters.
As shown in fig. 3, the step S22 includes:
s221, setting a first target in the visual range of the first sub-eye and acquiring a first target image, and setting a second target in the visual range of the second sub-eye and acquiring a second target image;
specifically, the first target and the second target are respectively arranged in the visual ranges of the first sub-eye and the second sub-eye, namely, the first target can be completely reflected in the first target image, the corresponding second target can be completely reflected in the second target image, the determined first target image and the determined second target image are utilized, and the spatial position relationship between the targets and the sub-eyes is established according to the position information of the corresponding target angular points, so that the position conversion relationship among the first sub-eye, the second sub-eye, the first target and the second target is determined, and the corresponding target conversion matrix is obtained.
S222, determining a target conversion matrix between the second target and the first target based on the first target image and the second target image
In the method, in the process of the invention,rotate matrix for spatial coordinate system to first target image,>rotating the matrix for the spatial coordinate system to the second target image,>translation matrix for spatial coordinate system to second target image, < > for>Is a spatial coordinate system to the first target image translation matrix.
S223, based on the target conversion matrixDetermining an extrinsic parameter matrix +_for the second sub-eye to the first sub-eye>Based on the extrinsic parameter matrix->To obtain the external parameters of the first sub-eye and the second sub-eye, wherein the external parameter matrix +.>The method comprises the following steps:
in the method, in the process of the invention,for the first target to first sub-eye conversion matrix,>is the conversion matrix of the second target to the second sub-eye.
S3, calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image;
specifically, in the shooting process of the actual compound eye camera, because a certain error exists between the actually shot image and the theoretically shot image due to the installation and manufacturing errors among the sub-eyes, in step S3, the sub-eye image is corrected according to the corresponding correction coefficient by calculating the corresponding correction coefficient, so that the error is reduced as much as possible, and the corrected image is ensured to be the same as the theoretically shot image as much as possible.
As shown in fig. 4, the step S3 includes:
s31, determining a point in the sub-eye imageDetermining a point +_based on the inner parameter and the outer parameter>Corresponding theoretical coordinate point->Spatial coordinate point->Based on dot->Theoretical coordinate Point->Spatial coordinate point->Establishing a correction equation:
in the method, in the process of the invention,is the intersection of the optical axis and the pixel plane, +.>For the first correction factor, +.>Is the second correction coefficient;
specifically, in this step, the first correction coefficient and the second correction coefficient represent errors in the X, Y axes of the sub-eye images, respectively, and errors about the Z axis are negligible compared with those in the X, Y axes, so that only errors in the X, Y axes are corrected in this step.
S32, solving a first correction coefficient in the correction equationSecond correction coefficient->And based on said first correction factor +.>Said second correction factor->Correcting pixel point coordinates in the sub-eye images to obtain corrected images;
specifically, in the actual photographing process, the dotIs>Can be respectively from sub-eyesImage acquisition and acquisition from the calculated theoretical image, thus by solving the corresponding first correction coefficient +. >Second correction coefficient->And based on said first correction factor +.>Said second correction factor->And correcting the pixel point coordinates in the sub-eye image to obtain a corrected image.
S4, calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image;
specifically, in the actual shooting process, since the visual range of each sub-eye is circular, and each sub-eye is densely arranged in the compound eye camera, a certain overlapping area exists between two adjacent sub-eyes, for example, if the images are directly spliced, the situation that the final formed integral image is disordered and the target is overlapped is easily caused, and the imaging effect of the final integral image is affected, and therefore, in the step, the corrected image is spliced based on the overlapping area and a preset splicing algorithm, so that the integral image is obtained.
As shown in fig. 5, the step S4 includes:
s41, calculating the overlapping area between the corrected images based on the internal parameters
In the method, in the process of the invention,for the distance of the image sensor to the target, < >>For the distance between adjacent sub-eyes +. >Distance from compound eye camera to target, +.>To correct the length of the image +.>For the sub-eye diameter, < >>Is the sub-eye focal length;
for compound eye cameras, the distance between the sub-eyes is determined by the mechanical structure of the compound eye camera, that is, the internal parameters of the camera, and the distance is a constant value, so that the overlapping rate between two adjacent photographed sub-eye images is also constant, and therefore, by calculating the overlapping rate between two adjacent sub-eye images, the product of the overlapping rate and the length of the corrected image is taken as the corresponding overlapping region.
S42, carrying out Gaussian blur processing on the corrected image to obtain an image space, carrying out multiple sampling processing on the image space to obtain Gaussian pyramids, and carrying out differential processing on adjacent coatings of each group of Gaussian pyramids to obtain differential pyramids;
specifically, the purpose of obtaining the differential pyramid is to find out characteristic points meeting preset conditions in the obtained images in different scales, namely, in each layer of the differential pyramid, and finish the image splicing according to the matching relation between the characteristic points.
S43, selecting first pixel points in the overlapping area based on the differential pyramid, selecting a preset number of second pixel points around the first pixel points, storing the first pixel points and the second pixel points into a feature point set to be selected, and taking extreme points in the feature point set to be selected as feature points;
Specifically, the selected pixel points are only found in the overlapping area, and in the actually shot sub-eye images, in order to improve the utilization efficiency of the sub-eye camera, the overlapping area between two adjacent sub-eye images is usually not too large, and the overlapping area occupies only a smaller part compared with one sub-eye image, so that in the step, the selection range of the pixel points is reduced from the whole sub-eye image to the overlapping area, thereby greatly reducing the calculation difficulty and improving the efficiency of feature point matching and splicing.
S44, determining a feature code segment and a matched feature point based on the feature point, and performing splicing processing on the corrected image according to the feature code segment and the matched feature point to obtain an overall image.
As shown in fig. 6, the step S44 includes:
s441, selecting a neighborhood range of a preset shape by taking the characteristic point as the center, selecting a plurality of pixel point pairs in the neighborhood range, and performing assignment processing based on the pixel value between each pixel point pair to obtain a characteristic code segment;
specifically, the selected pixel point pair includes two pixel points, when the pixel point pair is respectively marked as a pixel point L and a pixel point V, when the pixel value of the pixel point L is larger than the pixel value of the pixel point V, the pixel point pair is subjected to assignment processing and is assigned to 0, when the pixel value of the pixel point L is not larger than the pixel value of the pixel point V, the pixel point pair is subjected to assignment processing and is assigned to 1, and after the pixel point pair in a preset area is extracted, a series of feature code segments reflecting the pixel features of the neighborhood range can be obtained, and in the step, the preset range is square, rectangle, diamond or the like.
S442, calculating the Hamming distance between the pixel points in the feature point set to be selected and the pixel point pair, selecting two corresponding pixel points with the minimum Hamming distance as a first matched pixel point and a second matched pixel point, calculating the distance ratio of the Hamming distance between the first matched pixel point and the pixel point pair to the Hamming distance between the second matched pixel point and the pixel point pair, and taking the pixel point with the distance ratio smaller than the threshold value of the preset ratio as the matched feature point;
specifically, the hamming distance between the extracted pixel points in the feature point set to be selected and the pixel point pair is calculated first, and the matching of the descriptors and the matching process of the feature points can be completed by judging the size relation between the distance ratio and the preset ratio threshold, and the process is rapid and has high matching efficiency.
S443, based on the feature code segments and the matched feature points, the corrected image is spliced to obtain an integral image.
S5, carrying out differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the inner parameter and the outer parameter;
Specifically, after obtaining a panoramic image, determining a moving object in the image by using images of two adjacent frames, wherein the purpose of the step is to determine the moving direction of the moving object and the movement amount in the corresponding moving direction, and in the whole image, it can be seen that the moving object only has a moving component in the direction X, Y in the image, when the movement amount in the X direction is 0 and the movement amount in the Y direction is not 0, the moving object moves along the Y axis, and similarly, the corresponding moving direction can be obtained according to the movement amount in the X, Y axis;
however, in the actual environment coordinate system, the moving object can also move along the Z axis, where the Z axis is the axis perpendicular to the whole image, so that the moving object may have motion components on the X, Y, Z axis, and thus the corresponding motion direction can be determined by shifting the analog value, and in this embodiment, the range of the analog offset value is-1 to 1.
As shown in fig. 7, the step S5 includes:
s51, carrying out differential calculation on the whole images of two adjacent frames to obtain differential images, and determining a moving target based on the differential images;
specifically, the corresponding moving object can be determined according to the pixel points with the gray value change of the obtained differential image.
S52, performing image decomposition on the differential image by adopting two-dimensional wavelet transformation to obtain a first frequency subgraph and a second frequency subgraph;
specifically, the high-frequency signal and the low-frequency signal in the differential image can be extracted through two-dimensional wavelet transformation, wherein the high-frequency signal is a corresponding first frequency subgraph, and the low-frequency signal is a corresponding second frequency subgraph.
S53, performing mutual suppression processing and smoothing filtering processing on the second frequency subgraph to obtain a first signal and a second signal, and performing half-wave rectification processing on the first signal and the second signal to obtain a first processing signal and a second processing signal;
specifically, after two-dimensional wavelet transformation, motion information of a moving object can be identified through a low-frequency signal, meanwhile, after mutual suppression processing, an excitation field and a suppression field in a Gaussian space can be determined, motion information is contained in excitation field data, after smoothing filter processing, corresponding isolated noise can be removed, further useful motion signals are extracted from complex background images to the greatest extent, and then half-wave rectification processing is carried out, so that the operation signals are converted into corresponding signal variation so as to be conveniently input into corresponding switching value channels.
S54, outputting the first processing signal and the second processing signal into a switching value channel to output an offset analog value group, wherein the offset analog value group comprises a first offset analog value, a second offset analog value and a third offset analog value, and judging whether the first offset analog value, the second offset analog value and the third offset analog value are smaller than preset values or not;
specifically, the switching value channel is an open channel and a closed channel, when corresponding motion information is input to the switching value channel, an offset analog value is correspondingly output, for actual motion information, motion in the X, Y axis direction can be determined according to the change of gray values, motion in the Z axis can be determined according to the size relation of moving targets of two adjacent frames, when the moving target of the next frame is greater than that of the previous frame, the moving target vertical image is considered to move outwards, otherwise, the moving target vertical image moves inwards, and the value ranges of the output first offset analog value, the second offset analog value and the third offset analog value are all-1.
S55, if the first deviation simulation value is smaller than a preset value, the moving direction of the moving object moves leftwards, if the first deviation simulation value is larger than the preset value, the moving direction of the moving object moves rightwards, if the second deviation simulation value is smaller than the preset value, the moving direction of the moving object moves downwards, if the second deviation simulation value is larger than the preset value, the moving direction of the moving object moves upwards, if the third deviation simulation value is smaller than the preset value, the moving direction of the moving object moves backwards, and if the third deviation simulation value is larger than the preset value, the moving direction of the moving object moves forwards;
Specifically, the preset value is 0, the first offset analog value, the second offset analog value and the third offset analog value respectively correspond to the motion information of the X, Y, Z directions, when the first offset value is greater than 0, the moving object moves rightwards, when the first offset value is smaller than 0, the moving object moves leftwards, when the first offset value is equal to 0, the moving object does not move in the X-axis direction, and the corresponding motion direction can be determined according to the specific values of the second offset analog value and the third offset analog value.
It should be noted that, after a specific differential image is obtained, the offset corresponding to the moving object on the image can be obtained, and the motion quantity in the corresponding direction can be calculated by combining the conversion relationship between the image and the sub-eye camera and the parameters of the sub-eye camera, namely the inner parameter and the outer parameter.
According to the target motion recognition method based on compound eye morphological vision, which is provided by the first embodiment of the invention, firstly, sub-eye images of continuous frames shot by sub-eyes of a compound eye camera in a preset period are obtained, one of the sub-eyes is selected as a first sub-eye, and the sub-eye adjacent to the first sub-eye is selected as a second sub-eye; performing internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and performing external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye; then calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image; then calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image; and finally, carrying out differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the inner parameter and the outer parameter.
Example two
As shown in fig. 8, in a second embodiment of the present invention, there is provided a target motion recognition system based on compound eye morphological vision, the system comprising:
an acquisition module 1, configured to acquire sub-eye images of consecutive frames captured by sub-eyes of a compound-eye camera in a preset period, select one of the sub-eyes as a first sub-eye, and use a sub-eye adjacent to the first sub-eye as a second sub-eye;
the calibration module 2 is configured to perform internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, set targets in visual ranges of the first sub-eye and the second sub-eye, and obtain corresponding target images, and perform external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye;
a correction module 3, configured to calculate a correction coefficient based on the internal parameter and the external parameter, and correct the sub-eye image based on the correction coefficient to obtain a corrected image;
the stitching module 4 is used for calculating an overlapping area between the corrected images, and stitching the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image;
The motion recognition module 5 is configured to perform differential computation on the overall images of two adjacent frames to obtain a moving target, calculate an offset simulation value of the moving target in each direction, determine a moving direction of the moving target based on the offset simulation value, calculate an offset of the moving target, and determine a moving distance of the moving target in a corresponding moving direction based on the offset, the inner parameter and the outer parameter.
Wherein, the calibration module 2 comprises:
a projection sub-module for setting a calibration point in the calibration coordinate systemThe projection coordinates of the standard point projected in the image pixel coordinate system are +.>Based on the calibration point and theThe projection coordinates establish a projection equation:
in the method, in the process of the invention,for a first scale factor, +.>Is an internal parameter matrix->Is an outer parameter matrix;
an internal parameter matrix determination sub-module for determining an internal parameter matrix based on the projection equation
;/>
In the method, in the process of the invention,、/>edge +.>、/>Equivalent focal length of axis, +.>Is the firstTwo scale factors->Is the intersection of the optical axis and the pixel plane, +.>For focal length replacement amount, +.>Is the longitudinal projection variation;
an internal parameter determination sub-module for determining an internal parameter matrix based on the internal parameter matrixTo obtain internal parameters of the first sub-eye and the second sub-eye:
;/>;/>
;/>
In the method, in the process of the invention,is an internal reference matrix->Elements of the first row and the first column of +.>Is an internal reference matrix->Elements of the second column of the first row, +.>Is an internal reference matrix->Elements of the third column of the first row, +.>Is an internal reference matrix->Elements of the second row and the second column of the second row, < >>Is an internal reference matrix->Elements of the second row and the third column, < >>Is an internal reference matrix->Elements of a third row and a third column.
The calibration module 2 further comprises:
the target sub-module is used for setting a first target in the visual range of the first sub-eye and acquiring a first target image, and setting a second target in the visual range of the second sub-eye and acquiring a second target image;
a target conversion sub-module for determining a target conversion matrix between the second target and the first target based on the first target image and the second target image
In the method, in the process of the invention,rotate matrix for spatial coordinate system to first target image,>rotating the matrix for the spatial coordinate system to the second target image,>translation matrix for spatial coordinate system to second target image, < > for>Translating the matrix to a first target image for a spatial coordinate system;
an extrinsic parameter determination sub-module for determining a target transformation matrix based on the target transformation matrix Determining an extrinsic parameter matrix +_for the second sub-eye to the first sub-eye>Based on the extrinsic parameter matrix->To obtain the external parameters of the first sub-eye and the second sub-eye, wherein the external parameter matrix +.>The method comprises the following steps:
in the method, in the process of the invention,for the first target to first sub-eye conversion matrix,>is the conversion matrix of the second target to the second sub-eye.
The correction module 3 includes:
a correction equation establishment sub-module for determining a point in the sub-eye imageDetermining a point +_based on the inner parameter and the outer parameter>Corresponding theoretical coordinate point->Spatial coordinate point->Based on dot->Theoretical coordinate Point->Spatial coordinate point->Establishing a correction equation:
in the method, in the process of the invention,is the intersection of the optical axis and the pixel plane, +.>For the first correction factor, +.>Is the second correction coefficient;
a correction sub-module for solving a first of the correction equationsCorrection coefficientSecond correction coefficient->And based on said first correction factor +.>Said second correction factor->And correcting the pixel point coordinates in the sub-eye image to obtain a corrected image.
The splicing module 4 includes:
a region calculation sub-module for calculating an overlap region between the corrected images based on the internal parameters
In the method, in the process of the invention,for the distance of the image sensor to the target, < >>For the distance between adjacent sub-eyes +.>Distance from compound eye camera to target, +.>To correct the length of the image +.>For the sub-eye diameter, < >>Is the sub-eye focal length;
the difference sub-module is used for carrying out Gaussian blur processing on the corrected image to obtain an image space, carrying out multiple sampling processing on the image space to obtain Gaussian pyramids, and carrying out difference processing on adjacent coatings of each group of Gaussian pyramids to obtain difference pyramids;
the characteristic point determining submodule is used for selecting first pixel points in the overlapping area based on the differential pyramid, selecting a preset number of second pixel points around the first pixel points, storing the first pixel points and the second pixel points into a to-be-selected characteristic point set, and taking extreme points in the to-be-selected characteristic point set as characteristic points;
and the splicing sub-module is used for determining a feature code segment and a matching feature point based on the feature point, and carrying out splicing processing on the corrected image according to the feature code segment and the matching feature point so as to obtain an integral image.
The splicing submodule comprises:
the assignment unit is used for selecting a neighborhood range of a preset shape by taking the characteristic points as the center, selecting a plurality of pixel point pairs in the neighborhood range, and carrying out assignment processing based on the pixel value between each pixel point pair so as to obtain a characteristic code segment;
The matching unit is used for calculating the Hamming distance between the pixel points in the feature point set to be selected and the pixel point pair, selecting two corresponding pixel points with the minimum Hamming distance as a first matching pixel point and a second matching pixel point, calculating the distance ratio of the Hamming distance between the first matching pixel point and the pixel point pair to the Hamming distance between the second matching pixel point and the pixel point pair, and taking the pixel point with the distance ratio smaller than a preset ratio threshold value as a matching feature point;
and the splicing unit is used for carrying out splicing processing on the corrected image based on the characteristic code segment and the matched characteristic points so as to obtain an integral image.
The motion recognition module 5 comprises:
the target determining submodule is used for carrying out differential calculation on the whole images of two adjacent frames to obtain differential images, and determining a moving target based on the differential images;
the decomposition sub-module is used for carrying out image decomposition on the differential image by adopting two-dimensional wavelet transformation so as to obtain a first frequency subgraph and a second frequency subgraph;
the processing sub-module is used for carrying out mutual suppression processing and smoothing filtering processing on the second frequency subgraph to obtain a first signal and a second signal, and carrying out half-wave rectification processing on the first signal and the second signal to obtain a first processing signal and a second processing signal;
The offset analog value calculation sub-module is used for outputting an offset analog value group in the first processing signal and the second processing signal output value switching value channel, wherein the offset analog value group comprises a first offset analog value, a second offset analog value and a third offset analog value, and judging whether the first offset analog value, the second offset analog value and the third offset analog value are smaller than preset values or not;
the identification sub-module is used for enabling the movement direction of the moving object to move leftwards if the first deviation simulation value is smaller than a preset value, enabling the movement direction of the moving object to move rightwards if the first deviation simulation value is larger than the preset value, enabling the movement direction of the moving object to move downwards if the second deviation simulation value is smaller than the preset value, enabling the movement direction of the moving object to move upwards if the second deviation simulation value is larger than the preset value, enabling the movement direction of the moving object to move backwards if the third deviation simulation value is smaller than the preset value, and enabling the movement direction of the moving object to move forwards if the third deviation simulation value is larger than the preset value.
In other embodiments of the present invention, a computer is provided in the embodiments of the present invention, including a memory 102, a processor 101, and a computer program stored in the memory 102 and executable on the processor 101, where the processor 101 implements the method for identifying a target motion based on compound eye morphology vision as described above when executing the computer program.
In particular, the processor 101 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 102 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 102 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, solid state Drive (Solid State Drive, SSD), flash memory, optical Disk, magneto-optical Disk, tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. Memory 102 may include removable or non-removable (or fixed) media, where appropriate. The memory 102 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 102 is a Non-Volatile (Non-Volatile) memory. In a particular embodiment, the Memory 102 includes Read-Only Memory (ROM) and random access Memory (Random Access Memory, RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (Programmable Read-Only Memory, abbreviated PROM), an erasable PROM (Erasable Programmable Read-Only Memory, abbreviated EPROM), an electrically erasable PROM (Electrically Erasable Programmable Read-Only Memory, abbreviated EEPROM), an electrically rewritable ROM (Electrically Alterable Read-Only Memory, abbreviated EAROM), or a FLASH Memory (FLASH), or a combination of two or more of these. The RAM may be Static Random-Access Memory (SRAM) or dynamic Random-Access Memory (Dynamic Random Access Memory DRAM), where the DRAM may be a fast page mode dynamic Random-Access Memory (Fast Page Mode Dynamic Random Access Memory FPMDRAM), extended data output dynamic Random-Access Memory (Extended Date Out Dynamic Random Access Memory EDODRAM), synchronous dynamic Random-Access Memory (Synchronous Dynamic Random-Access Memory SDRAM), or the like, as appropriate.
Memory 102 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 101.
The processor 101 reads and executes the computer program instructions stored in the memory 102 to implement the above-described target motion recognition method based on compound eye morphology vision.
In some of these embodiments, the computer may also include a communication interface 103 and a bus 100. As shown in fig. 9, the processor 101, the memory 102, and the communication interface 103 are connected to each other via the bus 100 and perform communication with each other.
The communication interface 103 is used to implement communications between modules, devices, units, and/or units in embodiments of the application. The communication interface 103 may also enable communication with other components such as: and the external equipment, the image/data acquisition equipment, the database, the external storage, the image/data processing workstation and the like are used for data communication.
Bus 100 includes hardware, software, or both, coupling components of a computer device to each other. Bus 100 includes, but is not limited to, at least one of: data Bus (Data Bus), address Bus (Address Bus), control Bus (Control Bus), expansion Bus (Expansion Bus), local Bus (Local Bus). By way of example, and not limitation, bus 100 may include a graphics acceleration interface (Accelerated Graphics Port), abbreviated AGP, or other graphics Bus, an enhanced industry standard architecture (Extended Industry Standard Architecture, abbreviated EISA) Bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an industry standard architecture (Industry Standard Architecture, ISA) Bus, a wireless bandwidth (InfiniBand) interconnect, a Low Pin Count (LPC) Bus, a memory Bus, a micro channel architecture (Micro Channel Architecture, abbreviated MCa) Bus, a peripheral component interconnect (Peripheral Component Interconnect, abbreviated PCI) Bus, a PCI-Express (PCI-X) Bus, a serial advanced technology attachment (Serial Advanced Technology Attachment, abbreviated SATA) Bus, a video electronics standards association local (Video Electronics Standards Association Local Bus, abbreviated VLB) Bus, or other suitable Bus, or a combination of two or more of the foregoing. Bus 100 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The computer can execute the target motion recognition method based on the compound eye morphological vision based on the target motion recognition system based on the compound eye morphological vision, thereby realizing the motion recognition of the target.
In still other embodiments of the present application, in combination with the above-described method for identifying a target motion based on compound eye morphology vision, the embodiments of the present application provide a technical solution, a storage medium storing a computer program, where the computer program when executed by a processor implements the above-described method for identifying a target motion based on compound eye morphology vision.
Those of skill in the art will appreciate that the logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (6)

1. The target motion recognition method based on compound eye morphological vision is characterized by comprising the following steps of:
acquiring sub-eye images of continuous frames shot by sub-eyes of a compound eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye;
performing internal parameter calibration on the first sub-eye and the second sub-eye to obtain internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and performing external parameter calibration on the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain external parameters of the first sub-eye and the second sub-eye;
Calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image;
calculating an overlapping area between the corrected images, and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image;
performing differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the inner parameter and the outer parameter;
the step of calibrating the internal parameters of the first sub-eye and the second sub-eye to obtain the internal parameters of the first sub-eye and the second sub-eye comprises the following steps:
setting a calibration point in a calibration coordinate systemThe projection coordinates of the standard point projected in the image pixel coordinate system are +.>Establishing a projection equation based on the calibration point and the projection coordinates:
in the method, in the process of the invention,for a first scale factor, +. >Is an internal parameter matrix->Is an outer parameter matrix;
determining an internal parameter matrix based on the projection equation
;/>
In the method, in the process of the invention,、/>edge +.>、/>Equivalent focal length of axis, +.>For the second scale factor, +.>Is the intersection of the optical axis and the pixel plane, +.>For focal length replacement amount, +.>Is the longitudinal projection variation;
based on the internal parameter matrixTo obtain internal parameters of the first sub-eye and the second sub-eye:
;/>;/>
;/>
in the method, in the process of the invention,is an internal reference matrix->Elements of the first row and the first column of +.>Is an internal reference matrix->Elements of the second column of the first row, +.>Is an internal reference matrix->Elements of the third column of the first row, +.>Is an internal reference matrix->Elements of the second row and the second column of the second row, < >>Is an internal reference matrix->Elements of the second row and the third column, < >>Is an internal reference matrix->Elements of a third row and a third column;
the steps of setting targets in the visual ranges of the first sub-eye and the second sub-eye and obtaining corresponding target images, and calibrating the external parameters of the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain the external parameters of the first sub-eye and the second sub-eye comprise:
setting a first target in the visual range of the first sub-eye and acquiring a first target image, and setting a second target in the visual range of the second sub-eye and acquiring a second target image;
Determining a target conversion matrix between the second target to the first target based on the first target image and the second target image
In the method, in the process of the invention,rotate matrix for spatial coordinate system to first target image,>rotating the matrix for the spatial coordinate system to the second target image,>translation matrix for spatial coordinate system to second target image, < > for>Translating the matrix to a first target image for a spatial coordinate system;
based on the target transformation matrixDetermining an extrinsic parameter matrix +_for the second sub-eye to the first sub-eye>Based on the extrinsic parameter matrix->To obtain the firstExternal parameters of sub-eye and said second sub-eye, wherein said external parameter matrix +.>The method comprises the following steps:
in the method, in the process of the invention,for the first target to first sub-eye conversion matrix,>a conversion matrix for the second target to the second sub-eye;
the step of calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image includes:
determining a point in the sub-eye imageDetermining a point based on the inner parameter and the outer parameterCorresponding theoretical coordinate point->Spatial coordinate point->Based on dot->Theoretical coordinate Point- >Spatial coordinate point->Establishing a correction equation:
in the method, in the process of the invention,is the intersection of the optical axis and the pixel plane, +.>For the first correction factor, +.>Is the second correction coefficient;
solving a first correction coefficient in the correction equationSecond correction coefficient->And based on the first correction coefficientSaid second correction factor->Correcting pixel point coordinates in the sub-eye images to obtain corrected images;
the step of calculating the overlapping area between the corrected images and performing stitching processing on the corrected images based on the overlapping area and a preset stitching algorithm to obtain an overall image comprises the following steps:
based on the followingInternal parameters calculate the overlap region between the corrected images
In the method, in the process of the invention,for the distance of the image sensor to the target, < >>For the distance between adjacent sub-eyes +.>Distance from compound eye camera to target, +.>To correct the length of the image +.>For the sub-eye diameter, < >>Is the sub-eye focal length;
carrying out Gaussian blur processing on the corrected image to obtain an image space, carrying out sampling processing on the image space for multiple times to obtain Gaussian pyramids, and carrying out differential processing on adjacent coatings of each group of Gaussian pyramids to obtain differential pyramids;
Selecting first pixel points in the overlapping area based on the differential pyramid, selecting a preset number of second pixel points around the first pixel points, storing the first pixel points and the second pixel points into a feature point set to be selected, and taking extreme points in the feature point set to be selected as feature points;
and determining a feature code segment and a matching feature point based on the feature point, and performing splicing processing on the corrected image according to the feature code segment and the matching feature point to obtain an overall image.
2. The method for identifying target motion based on compound eye morphological vision according to claim 1, wherein the step of determining a feature code segment and a matching feature point based on the feature point, and performing a stitching process on the corrected image according to the feature code segment and the matching feature point to obtain an overall image comprises:
selecting a neighborhood range of a preset shape by taking the characteristic point as the center, selecting a plurality of pixel point pairs in the neighborhood range, and performing assignment processing based on the pixel value between each pixel point pair to obtain a characteristic code segment;
calculating the Hamming distance between the pixel points in the feature point set to be selected and the pixel point pair, selecting two corresponding pixel points with the minimum Hamming distance as a first matched pixel point and a second matched pixel point, calculating the distance ratio of the Hamming distance between the first matched pixel point and the pixel point pair to the Hamming distance between the second matched pixel point and the pixel point pair, and taking the pixel point with the distance ratio smaller than the threshold value of the preset ratio as the matched feature point;
And splicing the corrected image based on the feature code segment and the matched feature points to obtain an overall image.
3. The method for identifying a target motion based on compound eye morphological vision according to claim 1, wherein the step of performing differential calculation on the whole images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, and determining the moving direction of the moving target based on the offset simulation value comprises:
performing differential calculation on the integral images of two adjacent frames to obtain differential images, and determining a moving target based on the differential images;
performing image decomposition on the differential image by adopting two-dimensional wavelet transformation to obtain a first frequency subgraph and a second frequency subgraph;
performing mutual suppression processing and smoothing filtering processing on the second frequency subgraph to obtain a first signal and a second signal, and performing half-wave rectification processing on the first signal and the second signal to obtain a first processing signal and a second processing signal;
outputting the first processing signal and the second processing signal into a switching value channel to output an offset analog value group, wherein the offset analog value group comprises a first offset analog value, a second offset analog value and a third offset analog value, and judging whether the first offset analog value, the second offset analog value and the third offset analog value are smaller than a preset value or not;
If the first deviation analog value is smaller than a preset value, the moving direction of the moving object moves leftwards, if the first deviation analog value is larger than the preset value, the moving direction of the moving object moves rightwards, if the second deviation analog value is smaller than the preset value, the moving direction of the moving object moves downwards, if the second deviation analog value is larger than the preset value, the moving direction of the moving object moves upwards, if the third deviation analog value is smaller than the preset value, the moving direction of the moving object moves backwards, and if the third deviation analog value is larger than the preset value, the moving direction of the moving object moves forwards.
4. A compound eye morphology vision-based target motion recognition system, the system comprising:
the device comprises an acquisition module, a first sub-eye detection module and a second sub-eye detection module, wherein the acquisition module is used for acquiring sub-eye images of continuous frames shot by sub-eyes of a compound-eye camera in a preset period, selecting one of the sub-eyes as a first sub-eye, and taking a sub-eye adjacent to the first sub-eye as a second sub-eye;
the calibration module is used for calibrating internal parameters of the first sub-eye and the second sub-eye to obtain the internal parameters of the first sub-eye and the second sub-eye, setting targets in the visual range of the first sub-eye and the second sub-eye and obtaining corresponding target images, and calibrating external parameters of the first sub-eye and the second sub-eye based on the target images and the internal parameters to obtain the external parameters of the first sub-eye and the second sub-eye;
The correction module is used for calculating a correction coefficient based on the internal parameter and the external parameter, and correcting the sub-eye image based on the correction coefficient to obtain a corrected image;
the splicing module is used for calculating an overlapping area between the corrected images, and carrying out splicing processing on the corrected images based on the overlapping area and a preset splicing algorithm so as to obtain an overall image;
the motion recognition module is used for carrying out differential calculation on the integral images of two adjacent frames to obtain a moving target, calculating an offset simulation value of the moving target in each direction, determining the moving direction of the moving target based on the offset simulation value, calculating the offset of the moving target, and determining the moving distance of the moving target in the corresponding moving direction based on the offset, the internal parameter and the external parameter;
wherein, the calibration module includes:
a projection sub-module for setting a calibration point in the calibration coordinate systemThe projection coordinates of the standard point projected in the image pixel coordinate system are +.>Establishing a projection equation based on the calibration point and the projection coordinates:
in the method, in the process of the invention,for a first scale factor, +.>Is an internal parameter matrix- >Is an outer parameter matrix;
an internal parameter matrix determination sub-module for determining an internal parameter matrix based on the projection equation
;/>
In the method, in the process of the invention,、/>edge +.>、/>Equivalent focal length of axis, +.>For the second scale factor, +.>Is the intersection of the optical axis and the pixel plane, +.>For focal length replacement amount, +.>Is the longitudinal projection variation;
an internal parameter determination sub-module for determining an internal parameter matrix based on the internal parameter matrixTo obtain internal parameters of the first sub-eye and the second sub-eye:
;/>;/>
;/>
in the method, in the process of the invention,is an internal reference matrix->Elements of the first row and the first column of +.>Is an internal reference matrix->Elements of the second column of the first row, +.>Is an internal reference matrix->Elements of the third column of the first row, +.>Is an internal reference matrix->Elements of the second row and the second column of the second row, < >>Is an internal reference matrix->Elements of the second row and the third column, < >>Is an internal reference matrix->Elements of a third row and a third column;
the calibration module further comprises:
the target sub-module is used for setting a first target in the visual range of the first sub-eye and acquiring a first target image, and setting a second target in the visual range of the second sub-eye and acquiring a second target image;
a target conversion sub-module for determining a target conversion matrix between the second target and the first target based on the first target image and the second target image
In the method, in the process of the invention,rotate matrix for spatial coordinate system to first target image,>rotating the matrix for the spatial coordinate system to the second target image,>translation matrix for spatial coordinate system to second target image, < > for>Translating the matrix to a first target image for a spatial coordinate system;
an extrinsic parameter determination sub-module for determining a target transformation matrix based on the target transformation matrixDetermining an extrinsic parameter matrix +_for the second sub-eye to the first sub-eye>Based on the extrinsic parameter matrix->To obtain the external parameters of the first sub-eye and the second sub-eye, wherein the external parameter matrix +.>The method comprises the following steps:
in the method, in the process of the invention,for the first target to first sub-eye conversion matrix,>a conversion matrix for the second target to the second sub-eye;
the correction module includes:
a correction equation establishment sub-module for determining a point in the sub-eye imageDetermining a point +_based on the inner parameter and the outer parameter>Corresponding theoretical coordinate point->Spatial coordinate point->Based on dot->Theoretical coordinate Point->Spatial coordinate point->Establishing a correction equation:
in the method, in the process of the invention,is the intersection of the optical axis and the pixel plane, +.>For the first correction factor, +.>Is the second correction coefficient;
a correction submodule for solving a first correction coefficient in the correction equation Second correction coefficient->And based on said first correction factor +.>Said second correction factor->Correcting pixel point coordinates in the sub-eye images to obtain corrected images;
the splice module includes:
a region calculation sub-module for calculating an overlap region between the corrected images based on the internal parameters
In the method, in the process of the invention,for the distance of the image sensor to the target, < >>For the distance between adjacent sub-eyes +.>Distance from compound eye camera to target, +.>To correct the length of the image +.>For the sub-eye diameter, < >>Is the sub-eye focal length;
the difference sub-module is used for carrying out Gaussian blur processing on the corrected image to obtain an image space, carrying out multiple sampling processing on the image space to obtain Gaussian pyramids, and carrying out difference processing on adjacent coatings of each group of Gaussian pyramids to obtain difference pyramids;
the characteristic point determining submodule is used for selecting first pixel points in the overlapping area based on the differential pyramid, selecting a preset number of second pixel points around the first pixel points, storing the first pixel points and the second pixel points into a to-be-selected characteristic point set, and taking extreme points in the to-be-selected characteristic point set as characteristic points;
The splicing sub-module is used for determining a feature code segment and a matching feature point based on the feature point, and carrying out splicing processing on the corrected image according to the feature code segment and the matching feature point so as to obtain an overall image;
the splicing submodule comprises:
the assignment unit is used for selecting a neighborhood range of a preset shape by taking the characteristic points as the center, selecting a plurality of pixel point pairs in the neighborhood range, and carrying out assignment processing based on the pixel value between each pixel point pair so as to obtain a characteristic code segment;
the matching unit is used for calculating the Hamming distance between the pixel points in the feature point set to be selected and the pixel point pair, selecting two corresponding pixel points with the minimum Hamming distance as a first matching pixel point and a second matching pixel point, calculating the distance ratio of the Hamming distance between the first matching pixel point and the pixel point pair to the Hamming distance between the second matching pixel point and the pixel point pair, and taking the pixel point with the distance ratio smaller than a preset ratio threshold value as a matching feature point;
and the splicing unit is used for carrying out splicing processing on the corrected image based on the characteristic code segment and the matched characteristic points so as to obtain an integral image.
5. A computer comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for object motion recognition based on compound eye morphology vision as claimed in any one of claims 1 to 3 when executing the computer program.
6. A storage medium having stored thereon a computer program which, when executed by a processor, implements the compound eye morphological vision-based object motion recognition method of any one of claims 1 to 3.
CN202310967035.1A 2023-08-03 2023-08-03 Target motion recognition method and system based on compound eye morphological vision Active CN116681732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310967035.1A CN116681732B (en) 2023-08-03 2023-08-03 Target motion recognition method and system based on compound eye morphological vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310967035.1A CN116681732B (en) 2023-08-03 2023-08-03 Target motion recognition method and system based on compound eye morphological vision

Publications (2)

Publication Number Publication Date
CN116681732A CN116681732A (en) 2023-09-01
CN116681732B true CN116681732B (en) 2023-10-20

Family

ID=87784079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310967035.1A Active CN116681732B (en) 2023-08-03 2023-08-03 Target motion recognition method and system based on compound eye morphological vision

Country Status (1)

Country Link
CN (1) CN116681732B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010130628A (en) * 2008-12-01 2010-06-10 Sharp Corp Imaging apparatus, image compositing device and image compositing method
CN102572220A (en) * 2012-02-28 2012-07-11 北京大学 Bionic compound eye moving object detection method adopting new 3-2-2 spatial information conversion model
CN106767913A (en) * 2016-12-02 2017-05-31 中国科学技术大学 A kind of compound eye system caliberating device and scaling method based on single LED luminous points and dimensional turntable
CN108776980A (en) * 2018-05-14 2018-11-09 南京工程学院 A kind of scaling method towards lenticule light-field camera
CN111583117A (en) * 2020-05-09 2020-08-25 上海航天测控通信研究所 Rapid panoramic stitching method and device suitable for space complex environment
CN112102401A (en) * 2020-09-21 2020-12-18 中国科学院上海微系统与信息技术研究所 Target positioning method, device, system, equipment and storage medium
CN112362034A (en) * 2020-11-11 2021-02-12 上海电器科学研究所(集团)有限公司 Solid engine multi-cylinder section butt joint guiding measurement algorithm based on binocular vision
CN113191954A (en) * 2021-06-11 2021-07-30 南京工程学院 Panoramic image splicing method based on binocular camera
WO2021254110A1 (en) * 2020-06-19 2021-12-23 京东方科技集团股份有限公司 Image processing method, apparatus and device, and storage medium
CN114485648A (en) * 2022-02-08 2022-05-13 北京理工大学 Navigation positioning method based on bionic compound eye inertial system
CN115348364A (en) * 2022-08-10 2022-11-15 长春工业大学 Curved surface bionic compound eye large-field-of-view imaging device and imaging method
WO2023082306A1 (en) * 2021-11-12 2023-05-19 苏州瑞派宁科技有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
CN116205991A (en) * 2023-02-03 2023-06-02 深圳纷来智能有限公司 Construction method of multi-information source event sensor based on multi-view camera array

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010130628A (en) * 2008-12-01 2010-06-10 Sharp Corp Imaging apparatus, image compositing device and image compositing method
CN102572220A (en) * 2012-02-28 2012-07-11 北京大学 Bionic compound eye moving object detection method adopting new 3-2-2 spatial information conversion model
CN106767913A (en) * 2016-12-02 2017-05-31 中国科学技术大学 A kind of compound eye system caliberating device and scaling method based on single LED luminous points and dimensional turntable
CN108776980A (en) * 2018-05-14 2018-11-09 南京工程学院 A kind of scaling method towards lenticule light-field camera
CN111583117A (en) * 2020-05-09 2020-08-25 上海航天测控通信研究所 Rapid panoramic stitching method and device suitable for space complex environment
WO2021254110A1 (en) * 2020-06-19 2021-12-23 京东方科技集团股份有限公司 Image processing method, apparatus and device, and storage medium
CN112102401A (en) * 2020-09-21 2020-12-18 中国科学院上海微系统与信息技术研究所 Target positioning method, device, system, equipment and storage medium
CN112362034A (en) * 2020-11-11 2021-02-12 上海电器科学研究所(集团)有限公司 Solid engine multi-cylinder section butt joint guiding measurement algorithm based on binocular vision
CN113191954A (en) * 2021-06-11 2021-07-30 南京工程学院 Panoramic image splicing method based on binocular camera
WO2023082306A1 (en) * 2021-11-12 2023-05-19 苏州瑞派宁科技有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
CN114485648A (en) * 2022-02-08 2022-05-13 北京理工大学 Navigation positioning method based on bionic compound eye inertial system
CN115348364A (en) * 2022-08-10 2022-11-15 长春工业大学 Curved surface bionic compound eye large-field-of-view imaging device and imaging method
CN116205991A (en) * 2023-02-03 2023-06-02 深圳纷来智能有限公司 Construction method of multi-information source event sensor based on multi-view camera array

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
High precision target tracking with a compound-eye image sensor;R. Krishnasamy 等;《 Canadian Conference on Electrical and Computer Engineering 2004》;全文 *
球面仿生复眼的标定与定位研究;何建争;《硕士论文电子期刊》;全文 *
用于大视场三维探测的人工复眼系统几何标定;简慧杰;何建争;王克逸;;光学学报(02);全文 *
目标跟踪与定位中的视觉标定算法研究;李良福;陈卫东;冯祖仁;郑宝忠;;应用光学(04);全文 *

Also Published As

Publication number Publication date
CN116681732A (en) 2023-09-01

Similar Documents

Publication Publication Date Title
US10997696B2 (en) Image processing method, apparatus and device
CN111052176B (en) Seamless image stitching
US20220076391A1 (en) Image Distortion Correction Method and Apparatus
JP6545997B2 (en) Image processing device
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
US9712755B2 (en) Information processing method, apparatus, and program for correcting light field data
CN109003311B (en) Calibration method of fisheye lens
US10733705B2 (en) Information processing device, learning processing method, learning device, and object recognition device
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN111028205B (en) Eye pupil positioning method and device based on binocular distance measurement
WO2019105261A1 (en) Background blurring method and apparatus, and device
CN106447602A (en) Image mosaic method and device
CN107113376A (en) A kind of image processing method, device and video camera
US9070189B2 (en) Image blurring correction device
CN104954663B (en) Image processing apparatus and image processing method
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN113822942B (en) Method for measuring object size by monocular camera based on two-dimensional code
CN109951640A (en) Camera anti-fluttering method and system, electronic equipment, computer readable storage medium
CN113875219B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN106886976B (en) Image generation method for correcting fisheye camera based on internal parameters
JP2020067748A (en) Image processing apparatus, image processing method, and program
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN112470192A (en) Dual-camera calibration method, electronic device and computer-readable storage medium
WO2023236508A1 (en) Image stitching method and system based on billion-pixel array camera
CN105513074B (en) A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant