CN115713545A - Bionic binocular vision tracking device and method driven by optical wedge set - Google Patents


Info

Publication number
CN115713545A
Authority
CN
China
Prior art keywords
camera
optical wedge
imaging
virtual
optical
Prior art date
Legal status
Pending
Application number
CN202211338408.0A
Other languages
Chinese (zh)
Inventor
Li Anhu (李安虎)
Meng Tianchen (孟天晨)
Liu Yelin (刘也琳)
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202211338408.0A
Publication of CN115713545A

Abstract

The invention relates to a bionic binocular vision tracking device and method driven by an optical wedge set. The device comprises an upper computer, a left camera, a right camera, rotating optical wedge devices and a calibration assembly. A left and a right rotating optical wedge device are arranged directly in front of the left and right cameras respectively, the imaging visual axes of the left and right cameras are kept coaxial with the optical axes of the corresponding rotating optical wedge devices, and the upper computer is connected with the rotating optical wedge devices, the cameras and the tracking target. Because the rotating optical wedge devices sit in front of the cameras, the rotating wedges expand the cameras' fields of view while the cameras themselves remain fixed. Compared with the prior art, the invention introduces an optical wedge driving scheme into a bionic binocular vision tracking model and establishes an adaptive binocular imaging configuration with rotating optical wedge devices, so that targets can be rapidly tracked, captured and flexibly switched with a large field of view, high resolution and low aberration sensitivity; at the same time the device is compact in structure, simple to calibrate and high in tracking measurement accuracy.

Description

Bionic binocular vision tracking device and method driven by optical wedge set
Technical Field
The invention relates to the technical field of optical imaging and visual tracking, in particular to a bionic binocular visual tracking device and method driven by an optical wedge set.
Background
With the rapid development of technologies such as bionic vision and image processing, machine vision has emerged as a new field and is now widely applied in intelligent manufacturing, intelligent monitoring, medical imaging, aerospace and defense. Within bionic vision research, binocular vision imitating the human eye has long been a research hotspot of machine vision, because the human eyeball possesses excellent dynamic characteristics and comprehensive perception capability. At present, the left and right cameras of most binocular stereoscopic vision systems are fixedly mounted and provide functions such as depth perception and three-dimensional scene reconstruction; however, the fixed camera visual axes leave the common field of view of a traditional binocular vision system small and lacking in flexibility and adaptability.
The prior art includes several typical bionic binocular stereoscopic vision devices:
In the prior art, Chinese application No. CN200910045961.3 discloses a bionic binocular stereoscopic vision device in the field of bionics, in which the base and each second connecting rod carrying a camera are connected through two branches: in the first branch, one end of a driving swing rod is connected to a motor through a revolute pair, the other end is connected to one end of a first connecting rod through a revolute pair, and the other end of the first connecting rod is connected through a revolute pair to the second connecting rod carrying the camera; in the second branch, one end of the driving swing rod is connected to the motor through a revolute pair, the other end is connected to one end of a third connecting rod, and the other end of the third connecting rod is connected through a revolute pair to one end of a U-shaped piece carrying the camera; the base and another, sixth, link carrying a camera are likewise connected by two branches. The two cameras are moved synchronously by the two motors, the structure is compact, the relative positions of the two cameras are fixed, and the difficulty of system calibration is reduced. However, the left and right cameras of this device have few degrees of freedom and poor flexibility, and it is difficult for the device to simulate the various functions of two eyes.
Chinese application No. CN201611055676.6 discloses a bionic eye device comprising a nine-degree-of-freedom binocular bionic eye: a binocular bionic eye and neck mechanism consisting of a left eyeball mechanism and a right eyeball mechanism. The left eyeball mechanism includes a camera mounted in the eyeball and able to rotate with it, a first motor for controlling the left-right movement of the eyeball and a second motor for controlling its up-down movement; the left eyeball mechanism is mounted on a bracket. The right eyeball mechanism has the same structure as the left and is mounted on the bracket in mirror symmetry. The neck mechanism includes a first neck motor for driving the binocular bionic eye in vertical pitching motion and a second neck motor for driving it in left-right swinging motion. The device can acquire visual information over the whole scene, but the number of motors leads to a large mechanical structure and a complex control method.
In summary, prior-art research on bionic binocular vision suffers from fixed camera visual axes, a lack of flexibility and adaptability, bulky mechanical structures and complex control methods.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a bionic binocular vision tracking device and method driven by an optical wedge set. With the left and right cameras fixed, rotating optical wedge devices capable of adjusting the camera imaging visual axes in real time are added in front of the left and right lenses respectively. Drawing on the motion mechanism of the human eyeball, a control algorithm adjusts the rotation angles and rotation speeds of the left and right rotating optical wedge devices, so that various dynamic vision modes of the human eye, such as conjugate motion, anisotropic (vergence) motion and steady-state tracking, can be realized with good imaging flexibility, dynamic responsiveness and environmental adaptability.
The purpose of the invention can be realized by the following technical scheme:
a bionic binocular vision tracking device driven by an optical wedge group comprises an upper computer, a camera, a rotary optical wedge device and a calibration component;
the cameras comprise a left camera and a right camera, and optical axes of the left camera and the right camera are parallel; the camera is used for acquiring a target image;
the rotary optical wedge device comprises a left rotary optical wedge device and a right rotary optical wedge device, the left rotary optical wedge device and the right rotary optical wedge device are respectively arranged right in front of the left camera and the right camera, and the rotary optical wedge device is used for adjusting an imaging visual axis of the cameras;
the imaging visual axes of the left camera and the right camera are respectively coaxial with the optical axes of the left rotating optical wedge device and the right rotating optical wedge device;
the calibration assembly comprises a plane target and a calibration plate, a cross line of the plane target is overlapped with a center line of the camera image, and the calibration assembly is arranged in a common field of view of the left camera and the right camera; the calibration component is used for calibrating internal and external parameters between the camera and the rotary optical wedge device;
and the upper computer is respectively connected with the camera and the rotary optical wedge device.
Preferably, the rotary optical wedge device comprises an optical wedge assembly and a driving assembly;
the optical wedge assembly comprises a plurality of optical wedge elements;
the driving component is used for driving the rotary optical wedge, so that the field of view of the camera is enlarged.
A bionic binocular vision tracking method driven by an optical wedge set comprises the following steps:
S1, establishing an equivalent relation between the human eyes and the left and right optical wedge assemblies; based on the object-image mapping and imaging feedback mechanism for a target of interest in the binocular overlap region, and following the principle of continuous visual-axis adjustment, jointly controlling the left and right optical wedge assemblies to flexibly adjust the visual axes of the imaging sensing units and thereby simulate binocular motion imaging;
s2, establishing an equivalent dynamic virtual binocular camera model based on a light path reversible principle according to the deflection characteristics of the left optical wedge component and the right optical wedge component to the camera imaging visual axis;
s3, establishing a left camera and a right camera coordinate system, obtaining internal parameters of the left camera and the right camera and radial distortion coefficients of lenses of the left camera and the right camera by adopting a Zhang Zhengyou calibration method, and determining relative pose parameters of the left camera and the right camera;
s4, determining the zero position of the main section of each group of optical wedges by adopting an auto-collimation mutual calibration method, calibrating the alignment relation between a camera and the optical wedge groups, and establishing a coordinate system of a left optical wedge assembly and a right optical wedge assembly;
s5, calibrating internal parameters of the virtual camera, pose parameters between the virtual binocular cameras and imaging distortion of the dynamic virtual camera according to the dynamic virtual camera model;
s6, placing a tracking target in a common visual field of the left camera and the right camera, and respectively adjusting the imaging visual axes of the left optical wedge component and the right optical wedge component and the cameras matched with the left optical wedge component and the right optical wedge component by the upper computer through a visual axis adjusting algorithm to enable the tracking target to be located at the center position of the common visual field of the binocular cameras;
S7, using the distributed perception principle and cooperative control strategy that imitate human-eye physiology, the upper computer adjusts, through a visual tracking algorithm, the imaging visual axes of the left and right optical wedge assemblies and of the cameras matched with them, realizing closed-loop feedback automatic image stabilization and rapid dynamic target tracking for the bionic binocular vision system.
Preferably, in S1, the left optical wedge assembly and the right optical wedge assembly jointly control the flexible adjustment of the visual axis of the imaging sensing unit, and the process of simulating binocular motion imaging includes:
s11, sequentially acquiring imaging visual angles generated by the optical wedge components in different corner azimuth combinations according to the deflection effect of the optical wedge components on the imaging visual axis of the camera, and changing the visual axis direction of the imaging unit by utilizing the independent rotation motion of each optical wedge around the optical axis;
S12, eyeball movement can be mainly divided into conjugate movement, anisotropic movement and steady-state fixation; according to how the imaging target moves within the camera field of view, the optical wedge elements in each assembly are given appropriate rotation angles and rotation speeds and rotated synchronously, so that the imaging visual axes of the left and right cameras change in sequence and the tracked target always remains within the common field of view of the two cameras.
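By way of illustration only, the deflection in S11-S12 can be modeled with the vector form of Snell's law. The Python sketch below propagates the camera boresight through two rotating wedges using the embodiment's nominal parameters (wedge angle 20.05°, refractive index 1.517); it assumes the plane faces point toward each other and ignores lateral ray offsets, so it is an approximation rather than the claimed method:

```python
import numpy as np

def refract(d, nv, n1, n2):
    """Vector Snell refraction of unit direction d at a surface whose unit
    normal nv points toward the incident side (d . nv < 0); assumes no
    total internal reflection."""
    mu = n1 / n2
    cos_i = -np.dot(d, nv)
    cos_t = np.sqrt(1.0 - mu * mu * (1.0 - cos_i * cos_i))
    return mu * d + (mu * cos_i - cos_t) * nv

def boresight(theta1, theta2, alpha=np.deg2rad(20.05), n=1.517):
    """Steer the camera boresight (+z) through two rotating wedges whose
    principal sections sit at azimuths theta1 and theta2 about the optical
    axis; plane faces are assumed to face each other."""
    def tilted(theta):  # outward normal of the inclined face at azimuth theta
        return np.array([np.sin(alpha) * np.cos(theta),
                         np.sin(alpha) * np.sin(theta),
                         np.cos(alpha)])
    flat = np.array([0.0, 0.0, 1.0])
    d = np.array([0.0, 0.0, 1.0])
    d = refract(d, -tilted(theta1), 1.0, n)  # wedge 1: inclined face, air->glass
    d = refract(d, -flat, n, 1.0)            # wedge 1: plane face, glass->air
    d = refract(d, -flat, 1.0, n)            # wedge 2: plane face, air->glass
    d = refract(d, -tilted(theta2), n, 1.0)  # wedge 2: inclined face, glass->air
    return d / np.linalg.norm(d)

# Aligned principal sections give the largest deflection (conjugate limit,
# roughly 2*(n-1)*alpha in the thin-prism estimate); opposed sections cancel.
print(np.rad2deg(np.arccos(boresight(0.0, 0.0)[2])))
print(np.rad2deg(np.arccos(boresight(0.0, np.pi)[2])))
```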
Preferably, the process of establishing the equivalent dynamic virtual binocular camera model in S2 includes:
s21, the visual axis direction of the virtual camera is consistent with the visual axis deflection direction of the actual camera, and the optical center position of the virtual camera is located at the intersection point of the visual axis of the virtual camera and the reverse extension line of the reverse tracking light;
s22, constructing a pose transformation matrix of the virtual camera and the actual camera under different visual axis directions according to the actual camera position and the optical wedge element parameters;
and S23, combining the optical center position and the space pose parameters of the virtual camera, and enabling the visual axis pointing sequences of the two fixed cameras to be equivalent to an infinite group of virtual binocular vision arrays, namely an equivalent dynamic virtual binocular camera model.
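A minimal numerical sketch of S21-S23, assuming the actual boresight lies along +z and approximating the virtual optical center by the closest approach of the back-extended exit ray to the actual boresight line (the exact intersection construction of S21 depends on the traced ray):

```python
import numpy as np

def rot_between(a, b):
    """Rodrigues rotation matrix taking unit vector a onto unit vector b
    (antiparallel case not handled in this sketch)."""
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, 1.0):
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def virtual_center(h_exit, s_v, o_actual):
    """Back-extend the exit ray (through h_exit, direction s_v) to its
    closest approach to the actual boresight line through o_actual along +z."""
    z = np.array([0.0, 0.0, 1.0])
    d = h_exit - o_actual
    a, b = np.dot(s_v, s_v), np.dot(s_v, z)
    t = (np.dot(d, s_v) - b * np.dot(d, z)) / max(a - b * b, 1e-12)
    return h_exit - t * s_v

s_r = np.array([0.0, 0.0, 1.0])                  # actual camera boresight
s_v = np.array([np.sin(0.2), 0.0, np.cos(0.2)])  # deflected virtual boresight
R_v = rot_between(s_r, s_v)                      # virtual-vs-actual rotation
o_v = virtual_center(np.array([0.01, 0.0, 0.1]), s_v, np.zeros(3))
t_v = o_v                                        # translation (actual center at origin)
print(R_v, o_v)
```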
Preferably, the process of determining the relative pose parameters of the left camera and the right camera in S3 includes:
s31, placing a calibration plate in a public view field of the left camera and the right camera, and changing the pose of the calibration plate to enable the left camera and the right camera to respectively shoot to obtain a plurality of groups of left and right images;
and S32, calculating internal parameters of the left camera and the right camera, a radial distortion coefficient of a lens and relative poses of the left camera and the right camera according to the left image and the right image by using a left camera coordinate system as a world coordinate system.
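Steps S31-S32 match the standard checkerboard calibration pipeline, so a hedged OpenCV sketch can illustrate them. Here left_images and right_images are assumed lists of photo paths, and a 9 × 6 inner-corner board is an assumption (only the 12.5 mm square size appears later in the embodiment):

```python
import cv2
import numpy as np

pattern, square = (9, 6), 12.5                      # assumed corner grid, mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, l_pts, r_pts = [], [], []
for fl, fr in zip(left_images, right_images):       # assumed path lists
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp); l_pts.append(cl); r_pts.append(cr)

size = gl.shape[::-1]
# Zhang-style intrinsics and radial distortion for each camera ...
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
# ... then the left-to-right extrinsics R, t (left camera frame as world)
_, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
    obj_pts, l_pts, r_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```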
Preferably, the step of determining the zero position of the main cross section of each group of optical wedges in S4 and calibrating the alignment relationship between the camera and the optical wedge group includes:
s41, determining the zero positions of the main sections of the left optical wedge assembly and the right optical wedge assembly by adopting an auto-collimation mutual calibration method, carrying out reference positioning, adjusting the dividing plates of the two parallel light pipes to enable the horizontal dividing lines of the dividing plates to fall into the main section of the reference optical wedge, and recording the positions;
s42, replacing the reference optical wedge with an optical wedge group to be calibrated for assembly and calibration, and calibrating to obtain the initial position of each optical wedge main section in the optical wedge group by taking a collimator reticle as a reference;
s43, coaxially installing the left optical wedge assembly and the right optical wedge assembly with the calibrated main section with the left camera and the right camera, removing the optical wedge set device in front of the left camera, randomly placing a plane target in a common view field of the left camera and the right camera, and respectively controlling the left camera to directly image and the right camera to refract and image through the double optical wedges;
s44, based on the pose relationship between the left camera and the right camera, extracting the reference coordinates of the calibration point from the direct-view image of the left camera and transmitting the reference coordinates to a coordinate system of the right camera, and predicting the three-dimensional coordinates of the calibration point from the refraction imaging result of the right camera;
s45, determining the alignment relation between the right camera and the optical wedge set thereof by minimizing the deviation between the reference datum and the model prediction, and calibrating the alignment relation between the left camera and the optical wedge set thereof by the method in the same way.
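The deviation minimization in S44-S45 amounts to nonlinear least squares. The sketch below keeps itself runnable with a toy forward model and synthetic data; in a real system project_through_wedges (an illustrative name, not from the source) would ray-trace the transferred reference points through the double wedge into the right camera:

```python
import numpy as np
from scipy.optimize import least_squares

def project_through_wedges(pts3d, params):
    """Toy stand-in for refraction imaging: pinhole projection plus an
    image-plane offset parameterizing the camera-wedge misalignment."""
    return pts3d[:, :2] / pts3d[:, 2:3] + params[:2]

def residuals(params, pts3d, observed):
    # deviation between the model prediction and the refraction-imaged references
    return (project_through_wedges(pts3d, params) - observed).ravel()

# pts3d: calibration points transferred from the left camera's direct view;
# observed: their measured positions in the right camera's refracted image.
pts3d = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 1.2], [0.2, -0.1, 0.9]])
observed = pts3d[:, :2] / pts3d[:, 2:3] + np.array([0.01, -0.02])  # synthetic
fit = least_squares(residuals, x0=np.zeros(2), args=(pts3d, observed))
print(fit.x)  # recovered misalignment, cf. the minimization of S45
```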
Preferably, the process of calibrating the internal parameters of the virtual camera, the pose parameters between the virtual binocular cameras, and the imaging distortion of the dynamic virtual camera in S5 includes:
s51, according to the image mapping relation between the virtual camera and the actual camera, any point on the virtual image surface corresponds to a point which is subjected to lens distortion correction on the actual image surface, and a pinhole imaging model of the left virtual camera and the right virtual camera is established according to a geometric optics theory;
s52, respectively deducing the imaging visual axis directions and projection center positions of the virtual cameras corresponding to the left camera and the right camera by combining the relative pose relationship of the left camera and the right camera and the control rule of the optical wedge group on the imaging visual axis directions to obtain a coordinate transformation relationship and relative pose parameters between the left virtual camera and the right virtual camera;
s53, deducing a mathematical mapping relation between a virtual image surface and an actual image surface by adopting a virtual camera projection light ray reverse tracking method, and establishing a nonlinear imaging distortion fitting model;
s54, establishing an optimized objective function related to the imaging distortion coefficient of the virtual camera by using the superposition constraint of the projection light of the virtual camera and the initial incident light of the optical wedge group;
s55, aiming at the distribution and evolution characteristics of imaging distortion under different prism corner combinations, an optimized objective function which takes a multi-order distortion coefficient as a variable and takes the minimum ray tracing deviation as a criterion is formed.
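One hedged way to realize the multi-order distortion fit of S53-S55 is a radial polynomial fitted linearly to ray-tracing deviations; the source does not specify the model form, so the sketch below assumes one and uses synthetic data to stay runnable:

```python
import numpy as np

def fit_radial_distortion(undistorted, traced, order=3):
    """Fit coefficients c_k of r_d = r_u * (1 + c_1 r_u^2 + ... +
    c_order r_u^(2*order)) by linear least squares, where r_u comes from
    the virtual pinhole model and r_d from reverse ray tracing."""
    r_u = np.linalg.norm(undistorted, axis=1)
    r_d = np.linalg.norm(traced, axis=1)
    A = np.stack([r_u ** (2 * k + 1) for k in range(1, order + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, r_d - r_u, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
undistorted = rng.uniform(-0.3, 0.3, size=(200, 2))
r = np.linalg.norm(undistorted, axis=1, keepdims=True)
traced = undistorted * (1 + 0.05 * r**2 - 0.01 * r**4)  # synthetic distortion
print(fit_radial_distortion(undistorted, traced))        # ~ [0.05, -0.01, 0]
```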
Preferably, the process of fast tracking the dynamic target in S7 includes:
S71, establishing the space-time coupling constraint relation of the binocular image sequence according to the motion trajectory description and projection transformation matrix of the virtual binocular cameras, and constructing visual axis adjustment algorithms for the different motion modes such as bionic binocular conjugate, anisotropic and fixation motion;
S72, extracting SURF feature motion flow fields from the binocular camera image sequence, and performing foreground region segmentation and dynamic target identification by combining a visual saliency model with a spatial information guidance mechanism;
s73, estimating target pose parameters by adopting a recursive least square algorithm, establishing a motion trail prediction model of the target pose parameters, and acquiring image deviation information of the current target relative to the center of the field of view of the left camera and the right camera;
s74, estimating absolute positioning information of the target from the left image deviation and the right image deviation by utilizing a triangulation principle, and solving corner parameters of a left optical wedge assembly and a right optical wedge assembly by a visual tracking algorithm and an optical wedge group reverse solving algorithm so as to ensure that the centers of the double view fields can synchronously lock the target;
and S75, substituting the corner parameters of the optical wedge group into a state equation of the control system, generating the joint optimal estimation of the multiple control quantities such as the angular velocity, the angular acceleration and the like of the left optical wedge assembly and the right optical wedge assembly under the condition of the double-view-field cooperative control energy constraint, establishing a closed-loop flow, and realizing the continuous, stable and smooth target tracking function.
Preferably, the inverse solution algorithm of the optical wedge group adopts one of a table look-up method, an approximation method or an iteration method.
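As an illustration of the iteration method, the sketch below inverts a thin-prism forward model: the closed-form seed θ1,2 = φ ± arccos(ρ / 2δ) is refined by Newton correction of the wedge opening half-angle. Substituting an exact vector-refraction forward model (such as the sketch after S12) turns the same loop into a practical inverse solver; all names here are illustrative:

```python
import numpy as np

DELTA = (1.517 - 1.0) * np.deg2rad(20.05)  # thin-prism deviation of one wedge

def forward(theta1, theta2):
    """Thin-prism forward model: total deviation is the vector sum of the
    two single-wedge deviations; returns (pitch rho, azimuth phi)."""
    v = DELTA * np.array([np.cos(theta1) + np.cos(theta2),
                          np.sin(theta1) + np.sin(theta2)])
    return np.hypot(v[0], v[1]), np.arctan2(v[1], v[0])

def inverse(rho_t, phi_t, iters=10):
    """Iterative inverse solution for the wedge rotation angles that point
    the boresight at pitch rho_t, azimuth phi_t. In this model
    rho = 2*DELTA*cos(h) for opening half-angle h, so Newton's method on h
    converges immediately; with an exact forward model it refines the seed."""
    h = np.arccos(np.clip(rho_t / (2.0 * DELTA), -1.0, 1.0))
    for _ in range(iters):
        rho, _ = forward(phi_t + h, phi_t - h)
        slope = -2.0 * DELTA * np.sin(h)
        if abs(slope) < 1e-12:
            break
        h -= (rho - rho_t) / slope
    return phi_t + h, phi_t - h

t1, t2 = inverse(np.deg2rad(12.0), np.deg2rad(30.0))
print(np.rad2deg([t1, t2]), np.rad2deg(forward(t1, t2)[0]))  # rho ~ 12 deg
```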
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the rotary optical wedge devices are respectively arranged in front of the left camera and the right camera, and the imaging visual axes of the cameras are adjusted through the rotary optical wedge devices, so that the motion mode of the bionic eyes can be realized, the muscle simulation or complex mechanism combination mode adopted by typical binocular motion is thoroughly abandoned, the structure of the bionic eyeball is greatly simplified, the visual field and the imaging flexibility of the cameras are greatly improved under the condition that the cameras are fixedly installed, the targets with large visual field, high resolution and low aberration sensitivity can be quickly tracked, captured and flexibly switched, the functions of acquiring and transmitting binocular dynamic visual information of the bionic eyes are met, and meanwhile, the structure is compact, the calibration is simple, and the tracking measurement precision is higher.
2. The two fixed cameras in the invention can be equivalent to an infinite group of virtual binocular vision arrays to form a self-adaptive binocular configuration mechanism, thereby providing a rich selection mode for bionic binocular movement.
3. Due to the fact that the two cameras are fixed, the binocular imaging requirements of various eyeball movement modes can be met by calibrating the internal and external parameters once, repeated calibration is not needed, the calibration process of dynamic binocular vision is simplified, and the control method is simple.
4. The invention adopts a refraction type visual axis adjusting mode, the prism rotation angle and the light beam refraction angle have larger reduction ratio, the visual axis pointing precision is very high, and the influence of mechanical error disturbance on the visual axis adjustment is reduced.
5. The invention adopts the forward and reverse solution theory of the optical wedge device, can accurately control the optical wedge device to adjust the direction of the visual axis, and can track the visual field target by matching with an effective cooperative control strategy so as to further acquire binocular stereoscopic vision information.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a schematic diagram of a camera and virtual camera pose relationship;
FIG. 3 is a block diagram of a camera and wedge assembly system calibration process;
FIG. 4 is a schematic diagram of camera and virtual camera pose calibration;
FIG. 5 is a schematic view of the equivalent motion mode of the human eye and the dual optical wedges;
FIG. 6 is a bionic dynamic binocular vision imaging tracking scheme;
FIG. 7 is a flow chart of a dual optical wedge visual axis tracking iterative algorithm.
Reference numerals:
1 - left camera; 2 - left rotating double optical wedge; 3 - right camera; 4 - right rotating double optical wedge; 5 - upper computer; 6 - calibration assembly; 61 - calibration plate; 62 - planar target; 7 - tracking target; 8 - left virtual camera; 9 - right virtual camera; (a) conjugate motion; (b) anisotropic motion; (c) steady-state fixation.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example one
As shown in fig. 1, the embodiment provides a bionic binocular vision tracking device driven by an optical wedge set, which includes an upper computer 5, a camera, a rotating optical wedge device and a calibration component 6, and specifically includes a left camera 1, a left rotating dual optical wedge 2, a right camera 3, a right rotating dual optical wedge 4, an upper computer 5 and a calibration component 6;
the left camera 1 and the right camera 3 are arranged around the tracking target 7, and optical axes of the two cameras are arranged in parallel; the left rotating double optical wedge 2 and the right rotating double optical wedge 4 are respectively arranged right in front of the left camera 1 and the right camera 3; and the visual axes of the left camera and the right camera and the optical axes of the left rotating optical wedge device and the right rotating optical wedge device are respectively kept coaxial correspondingly.
The calibration assembly 6 comprises a calibration plate 61 and a planar target 62, arranged in the common field of view of the left camera 1 and the right camera 3; the cross line of the planar target 62 coincides with the center line of the camera image. In this embodiment, the same planar target is used to calibrate the left and right cameras in turn.
The upper computer 5 is in communication connection with the left camera 1, the right camera 3, the left rotating double optical wedge 2, the right rotating double optical wedge 4 and the tracking target 7, wherein the tracking target 7 is exemplified by a robot in the embodiment.
As shown in fig. 2, the binocular stereoscopic vision system driven by the two optical wedges can be equivalent to a dynamic virtual binocular camera system, so as to improve the simplicity and flexibility of dynamic visual tracking control and information processing. The imaging visual axis of the left camera 1 can be equivalent to a left virtual camera 8 for directly viewing the target through the left rotating double optical wedges 2; similarly, the imaging visual axis of the right camera 3 passing through the right rotating double optical wedge 4 can be equivalent to the direct vision target of the right virtual camera 9.
In this embodiment, the internal parameters such as the focal length and the resolution of the left camera and the right camera and the external position parameters corresponding to the rotating optical wedge device can be synchronously adjusted according to the variation of specific application occasions, wherein the model and the parameter of the left camera and the right camera and the external position parameters corresponding to the rotating optical wedge device can be completely the same, and can also be differentially selected according to actual needs.
Preferably, the optical wedge parameters and the arrangement form of the left rotating optical wedge device and the right rotating optical wedge device are completely the same, and the optical wedge parameters and the arrangement form can be selected in a differentiation manner according to actual needs.
The rotary optical wedge device comprises an optical wedge component and a driving component, wherein the optical wedge component comprises a plurality of optical wedge elements with different numbers, and optical parameters and arrangement forms of the optical wedge elements can be matched and adjusted according to the requirement of a field range; the driving assembly can adopt the modes of direct drive of a torque motor or gear transmission, synchronous belt transmission, worm and gear transmission and the like.
In the embodiment, a rotary optical wedge device is arranged in front of a camera, under the condition that the camera is fixed, the rotary optical wedge generates an expansion effect on the visual field of the camera, a three-degree-of-freedom monocular vision motion model is constructed from three layers of dual-optical-wedge visual axis pointing, motion dimension and tracking mode, and binocular stereoscopic vision is introduced; meanwhile, an equivalent virtual binocular camera is constructed according to an imaging principle, a self-adaptive binocular configuration mechanism is formed, the corresponding relation between the left eyeball model and the right eyeball model and the camera view field area is established, the simplicity and flexibility of dynamic vision tracking control and information processing are improved, and binocular vision of human eyes is simulated. Compared with the existing multi-degree-of-freedom bionic binocular vision system, the imaging system of the embodiment does not need to move the left camera body and the right camera body in any form, six-dimensional motion of bionic eyes can be realized by driving of the two groups of optical wedge devices, the whole device is compact in structure, and the imaging field range, the image resolution, the imaging efficiency, the flexibility and the environmental adaptability are good.
Example two
As shown in fig. 3 to fig. 7, the present embodiment provides a method for driving a bionic binocular vision tracking by an optical wedge set, including the following steps:
S1, establishing an equivalent relation between the human eyes and the left and right optical wedge assemblies; based on the object-image mapping and imaging feedback mechanism for a target of interest in the binocular overlap region, and following the principle of continuous visual-axis adjustment, jointly controlling the left and right optical wedge assemblies to flexibly adjust the visual axes of the imaging sensing units and thereby simulate binocular motion imaging;
s2, establishing an equivalent dynamic virtual binocular camera model based on a light path reversible principle according to the deflection characteristics of the left optical wedge component and the right optical wedge component to the camera imaging visual axis;
s3, establishing a left camera and a right camera coordinate system, obtaining internal parameters of the left camera and the right camera and radial distortion coefficients of lenses of the left camera and the right camera by adopting a Zhang Zhengyou calibration method, and determining relative pose parameters of the left camera and the right camera;
s4, determining the zero position of the main section of each group of optical wedges by adopting an auto-collimation mutual calibration method, calibrating the alignment relation between a camera and the optical wedge groups, and establishing a coordinate system of a left optical wedge assembly and a right optical wedge assembly;
s5, calibrating internal parameters of the virtual camera, pose parameters between the virtual binocular cameras and imaging distortion of the dynamic virtual camera according to the dynamic virtual camera model;
s6, placing a tracking target in a common visual field of the left camera and the right camera, and respectively adjusting the imaging visual axes of the left optical wedge component and the right optical wedge component and the cameras matched with the left optical wedge component and the right optical wedge component by the upper computer through a visual axis adjusting algorithm to enable the tracking target to be located at the center position of the common visual field of the binocular cameras;
S7, using the distributed perception principle and cooperative control strategy that imitate human-eye physiology, the upper computer adjusts, through a visual tracking algorithm, the imaging visual axes of the left and right optical wedge assemblies and of the cameras matched with them, realizing closed-loop feedback automatic image stabilization and rapid dynamic target tracking for the bionic binocular vision system.
In S1, the visual axis of the imaging sensing unit is flexibly adjusted through joint control of the left optical wedge component and the right optical wedge component, and the process of simulating binocular motion imaging comprises the following steps:
s11, sequentially acquiring imaging visual angles generated by the optical wedge components in different corner azimuth combinations according to the deflection effect of the optical wedge components on the imaging visual axis of the camera, and changing the visual axis direction of the imaging unit by utilizing the independent rotation motion of each optical wedge around the optical axis;
S12, eyeball movement can be mainly divided into conjugate movement, anisotropic movement and steady-state fixation; according to how the imaging target moves within the camera field of view, the optical wedge elements in each assembly are given appropriate rotation angles and rotation speeds and rotated synchronously, so that the imaging visual axes of the left and right cameras change in sequence and the tracked target always remains within the common field of view of the two cameras.
The process of establishing the equivalent dynamic virtual binocular camera model in the S2 comprises the following steps:
s21, the visual axis direction of the virtual camera is consistent with the visual axis deflection direction of the actual camera, and the optical center position of the virtual camera is located at the intersection point of the visual axis of the virtual camera and the reverse extension line of the reverse tracking light;
s22, constructing a pose transformation matrix of the virtual camera and the actual camera under different visual axis directions according to the actual camera position and the optical wedge element parameters;
and S23, combining the optical center position and the space pose parameters of the virtual camera, and enabling the visual axis pointing sequences of the two fixed cameras to be equivalent to an infinite group of virtual binocular vision arrays, namely an equivalent dynamic virtual binocular camera model.
The process of determining the relative pose parameters of the left camera and the right camera in the S3 comprises the following steps:
s31, placing a calibration plate in a public view field of the left camera and the right camera, and changing the pose of the calibration plate to enable the left camera and the right camera to respectively shoot to obtain a plurality of groups of left and right images;
and S32, calculating internal parameters of the left camera and the right camera, a radial distortion coefficient of a lens and relative poses of the left camera and the right camera according to the left image and the right image by using a coordinate system of the left camera as a world coordinate system.
S4, determining the zero position of the main section of each group of optical wedges, and calibrating the alignment relation between the camera and the optical wedges comprises the following steps:
s41, determining the zero positions of the main sections of the left optical wedge assembly and the right optical wedge assembly by adopting an auto-collimation mutual calibration method, carrying out reference positioning, adjusting the dividing plates of the two parallel light pipes to enable the horizontal dividing lines of the dividing plates to fall into the main section of the reference optical wedge, and recording the positions;
s42, replacing the reference optical wedge with an optical wedge group to be calibrated for assembly and calibration, and calibrating to obtain the initial position of each optical wedge main section in the optical wedge group by taking the collimator reticle as a reference;
s43, coaxially installing the left optical wedge assembly and the right optical wedge assembly with the calibrated main section with the left camera and the right camera, removing the optical wedge set device in front of the left camera, randomly placing a plane target in a common view field of the left camera and the right camera, and respectively controlling the left camera to directly image and the right camera to refract and image through the double optical wedges;
s44, based on the pose relationship between the left camera and the right camera, extracting the reference coordinates of the calibration point from the direct-view image of the left camera and transmitting the reference coordinates to a coordinate system of the right camera, and predicting the three-dimensional coordinates of the calibration point from the refraction imaging result of the right camera;
s45, determining the alignment relation between the right camera and the optical wedge set thereof by minimizing the deviation between the reference datum and the model prediction, and calibrating the alignment relation between the left camera and the optical wedge set thereof by the method in the same way.
The process of calibrating the internal parameters of the virtual cameras, the pose parameters between the virtual binocular cameras and the imaging distortion of the dynamic virtual cameras in the S5 comprises the following steps:
s51, according to the image mapping relation between the virtual camera and the actual camera, any point on the virtual image surface corresponds to a point which is subjected to lens distortion correction on the actual image surface, and a pinhole imaging model of the left virtual camera and the right virtual camera is established according to a geometric optics theory;
s52, respectively deducing the imaging visual axis directions and projection center positions of the virtual cameras corresponding to the left camera and the right camera by combining the relative pose relationship of the left camera and the right camera and the control rule of the optical wedge group on the imaging visual axis directions to obtain a coordinate transformation relationship and relative pose parameters between the left virtual camera and the right virtual camera;
s53, deducing a mathematical mapping relation between a virtual image surface and an actual image surface by adopting a virtual camera projection light ray reverse tracking method, and establishing a nonlinear imaging distortion fitting model;
s54, establishing an optimized objective function related to the imaging distortion coefficient of the virtual camera by using the superposition constraint of the projection light of the virtual camera and the initial incident light of the optical wedge group;
s55, aiming at the distribution and evolution characteristics of imaging distortion under different prism corner combinations, an optimized objective function which takes a multi-order distortion coefficient as a variable and takes the minimum ray tracing deviation as a criterion is formed.
The process of fast tracking the dynamic target in the step S7 includes:
S71, establishing the space-time coupling constraint relation of the binocular image sequence according to the motion trajectory description and projection transformation matrix of the virtual binocular cameras, and constructing visual axis adjustment algorithms for the different motion modes such as bionic binocular conjugate, anisotropic and fixation motion;
S72, extracting SURF feature motion flow fields from the binocular camera image sequence, and performing foreground region segmentation and dynamic target identification by combining a visual saliency model with a spatial information guidance mechanism;
s73, estimating target pose parameters by adopting a recursive least square algorithm, establishing a motion trail prediction model of the target pose parameters, and acquiring image deviation information of the current target relative to the center of a left camera view field and a right camera view field;
s74, estimating absolute positioning information of the target from the left image deviation and the right image deviation by utilizing a triangulation principle, and solving corner parameters of a left optical wedge assembly and a right optical wedge assembly by a visual tracking algorithm and an optical wedge group reverse solving algorithm so as to ensure that the centers of the double view fields can synchronously lock the target;
and S75, substituting the corner parameters of the optical wedge group into a state equation of the control system, generating the joint optimal estimation of the multiple control quantities such as the angular velocity, the angular acceleration and the like of the left optical wedge assembly and the right optical wedge assembly under the condition of the double-view-field cooperative control energy constraint, establishing a closed-loop flow, and realizing the continuous, stable and smooth target tracking function.
The optical wedge group inverse solving algorithm adopts one of a table look-up method, an approximation method or an iteration method.
In specific implementation, the method comprises the following steps:
step 1, parameter matching and system construction:
Step 11, according to the requirements of the bionic eye system for imaging properties, field range, image resolution and the like, the parameters of the left and right cameras are identical and are selected as follows: horizontal field angle 23.39°, vertical field angle 17.65°, imaging resolution 1600 × 1200, pixel size 4.4 × 4.4 μm, lens focal length f = 12 mm, 16 mm or 35 mm, and frame rate above 120 fps; the visual axis pointing range and stereoscopic imaging quality are traded off for different application scenarios;
Step 12, the two optical wedge elements in each rotating double-wedge device are identical, with wedge angle α = 20.05°, refractive index n = 1.517, diameter D_p = 80 mm and thin-end thickness d_0 = 5 mm; the two wedges are mounted with their plane faces toward each other at a separation D_1 = 100 mm; the camera is coaxial with the rotating double wedge, at a distance D_2 of not less than 30 mm from the nearest wedge plane; a stepping motor drives a synchronous-belt transmission mechanism to rotate each optical wedge through a full circle;
Step 13, taking the left camera optical center as origin, establish the left camera coordinate system O_CL-X_CL Y_CL Z_CL and the left image coordinate system x_l o_l y_l; taking the center of the incidence plane of the left rotating double wedge as origin, establish the left wedge coordinate system O_PL-X_PL Y_PL Z_PL; establish in the same way the right camera coordinate system O_CR-X_CR Y_CR Z_CR, the right image coordinate system x_r o_r y_r and the right wedge coordinate system O_PR-X_PR Y_PR Z_PR; establish the world coordinate system O_W-X_W Y_W Z_W, and take the robot base as origin to establish the tracking target coordinate system O_R-X_R Y_R Z_R.
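For orientation, the thin-prism estimate of the boresight deflection achievable with these wedge parameters can be checked directly (a first-order approximation; the exact thick-prism deviation is somewhat larger):

```python
import numpy as np

alpha, n = np.deg2rad(20.05), 1.517        # embodiment wedge angle and index
delta = (n - 1.0) * alpha                  # single-wedge thin-prism deviation
# one wedge ~10.4 deg; both principal sections aligned ~20.7 deg
print(np.rad2deg(delta), np.rad2deg(2.0 * delta))
```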
Step 2, calibrating internal and external parameters of the imaging system, as shown in fig. 3, wherein the calibration of the bionic binocular vision system adopts an inside-out strategy, and internal characteristic parameters of a camera and a double optical wedge, namely the internal and external parameters of the camera and the zero position of the main section of the optical wedge, are respectively obtained; and then sequentially determining the relative pose relations of the left camera, the right camera and the double optical wedges, and realizing the alignment of the cameras and the double optical wedges by minimizing deviation. The method comprises the following specific steps:
step 21, selecting a checkerboard calibration plate with the minimum square side length of 12.5mm, and respectively shooting a plurality of calibration plate images with different poses by a left camera and a right camera by adopting a Zhang Zhengyou calibration method to obtain internal parameters of the left camera and the right camera and radial distortion coefficients of lenses of the left camera and the right camera;
and step 22, determining relative pose parameters, namely a rotation matrix and a translation vector, of the left camera and the right camera by combining the perspective projection model and the rigid body transformation relation.
Step 23, determining the zero position of the main section of each group of double optical wedges by adopting an auto-collimation mutual calibration method, carrying out reference positioning, adjusting the dividing plates of the two parallel light pipes to enable the horizontal division lines of the dividing plates to fall into the main section of the reference optical wedge, and recording the position;
step 24, replacing the reference double-optical wedge with a double-optical wedge to be calibrated for assembly and calibration, calibrating the initial position of the main section of the double-optical wedge by taking a collimator reticle as a reference, and installing the calibrated double-optical wedge in front of a camera in a plane side-out manner;
step 25, calibrating the axial alignment relation between each group of cameras and the double optical wedges by using an auxiliary reference transfer principle, removing the double optical wedge device in front of the left camera, randomly placing plane targets in a common view field of the left camera and the right camera, and respectively controlling the direct-view imaging of the left camera and the refraction imaging of the right camera through the double optical wedges aiming at the plane targets at different positions;
step 26, extracting the reference coordinates of the calibration point from the direct-view image of the left camera and transmitting the reference coordinates to a coordinate system of the right camera, predicting the three-dimensional coordinates of the calibration point from the refraction imaging result of the right camera, and determining the alignment relation between the right camera and the double optical wedges of the right camera by minimizing the deviation between the reference and the model prediction;
and 27, calibrating the alignment relation between the left camera and the double optical wedges thereof by the method in the same way, and calibrating the geometric position parameters between the left camera and the right camera and between the left camera and the right double optical wedges.
Step 3, calibrating the pose of the virtual camera and correcting distortion:
step 31, according to the camera and the rotary double-optical-wedge imaging model, enabling a double-optical-wedge driven binocular stereoscopic vision system to be equivalent to a dynamic virtual binocular camera system, and establishing a left virtual camera coordinate system and a right virtual camera coordinate system, wherein the left virtual camera coordinate system and the right virtual camera coordinate system are shown in fig. 4;
Step 32, any point on the image plane of a virtual camera corresponds to a lens-distortion-corrected point on the actual image plane; pinhole imaging models of the left and right virtual cameras are established from the image mapping relation between the virtual and actual cameras, giving their equivalent focal lengths f_v1 and f_v2 respectively.
Step 33, combining the relative pose relationship of the left camera and the right camera and the control rule of the double optical wedges for the pointing of the imaging visual axis, and according to the imaging of the left camera and the right camera through the vector refraction law and the reverse ray tracing methodDirection of visual axis s r1 、s r2 Deriving the imaging boresight orientation s of its corresponding virtual camera v1 、s v2 Determining the projection center position o of the left virtual camera and the right virtual camera by combining the geometric optics theory r1 、o r2 . Will s is r1 And s v1 、s r2 And s v2 Substituting Rodrigues transformation Rot to obtain rotation matrixes R of left and right virtual cameras relative to respective actual cameras v1 、R v2
R_v1 = Rot(s_r1, s_v1), R_v2 = Rot(s_r2, s_v2)
Taking the projection center positions o_r1, o_r2 of the left and right actual cameras as reference, the translation vectors t_v1, t_v2 of the left and right virtual cameras are expressed as:

t_v1 = o_v1 - o_r1, t_v2 = o_v2 - o_r2
Since the relative rotation matrix R_r and relative translation vector t_r of the left and right actual cameras are calibrated in advance, the coordinate transformation between the left and right virtual cameras is described by a rotation matrix R_v and a translation vector t_v:

[formula presented as an image in the original]
step 34, deducing a mathematical mapping relation between a virtual image surface and an actual image surface by adopting a virtual camera projection light ray reverse tracking method, and establishing a nonlinear imaging distortion fitting model;
step 35, establishing an optimized objective function related to the imaging distortion coefficient of the virtual camera by using the superposition constraint of the virtual camera projection light and the optical wedge group initial incident light;
Step 36, according to the distribution and evolution characteristics of imaging distortion under different prism rotation-angle combinations, form an optimization objective function taking the multi-order distortion coefficients as variables and minimum ray-tracing deviation as criterion; the objective function is solved by a numerical iterative optimization algorithm:

[objective function presented as an image in the original]

where the superscript m distinguishes the left and right virtual cameras, b_r is the actual camera projection ray obtained by tracing according to b_o1 or b_o2, h_r is the exit position of the ray on the plane side of the double wedge, and c is the vector formed by the distortion coefficients to be optimized.
Step 4, bionic binocular stereo imaging and visual axis adjustment:
Step 41, taking the robot end-effector as the tracking target, let it fall within the common field of view of the left and right cameras. The exit point of the target ray on the left double wedge is (X_fL, Y_fL, Z_fL), with target pointing pitch angle ρ_L and azimuth angle φ_L; the exit point on the right double wedge is (X_fR, Y_fR, Z_fR), with pitch angle ρ_R and azimuth angle φ_R. According to the coordinate transformation relation, the exit points of the left and right double wedges satisfy:

[formula presented as an image in the original]

where T_LR(R_LR, t_LR) is the transformation matrix between the left and right cameras, and T_PR(R_PR, t_PR) and T_PL(R_PL, t_PL) are the transformation matrices of the right and left cameras relative to the right and left double wedges respectively; the spatial target point coordinates satisfy:

[formula presented as an image in the original]
Step 42, according to the calibrated left and right cameras and left and right rotating optical wedges, combined with the position of the tracking target in the common field of view and the change of the virtual binocular camera visual axis directions, perform reverse tracing, and adjust the rotation angles of the left and right rotating wedges through the rotating-wedge inverse solution, i.e. the pitch angle ρ and azimuth angle φ of the left and right rays relative to the system optical axis, so that the tracking target lies at the center of the common field of view of the left and right cameras.
Step 5, bionic binocular imaging visual axis tracking. Fig. 6 shows the bionic dynamic binocular vision tracking scheme: the left and right bionic eyes transmit target images to the bionic binocular vision host system through the left and right cameras and the double-wedge communication control modules respectively; the eyeball drive control system realizes rotation-angle control of each optical wedge, the three-dimensional imaging system realizes generation and modeling of three-dimensional point clouds, and the dynamic tracking system realizes cooperative tracking control of the dual-field imaging visual axes. The specific steps are as follows:
Step 51, set the moving path of the tracked target, extract the SURF feature motion flow field of the tracked target from the binocular camera image sequence, and perform foreground region segmentation and dynamic target identification by combining a visual saliency model with a spatial information guidance mechanism;
step 52, estimating target pose parameters by adopting a recursive least square algorithm, establishing a motion trail prediction model of the target pose parameters, acquiring image deviation information of a current target relative to the centers of the left camera and the right camera view fields, and estimating target absolute positioning information from left image deviation and right image deviation by utilizing a triangulation principle;
Step 53, apply the adjustment strategies of the different movement modes, such as bionic binocular conjugate, anisotropic and fixation movement, shown in fig. 5, where (a) is conjugate movement, (b) anisotropic movement and (c) steady-state fixation. The rotation-angle parameters of the left and right double wedge groups are solved and adjusted through the rotating double-wedge inverse solution algorithm, to ensure that the two field-of-view centers lock the target synchronously and to realize bionic binocular joint-feedback cooperative control;
Step 54, set a composite image error threshold, calculate the distances between the tracking target's image points in the left and right cameras and the respective field-of-view centers, and compute the composite tracking error of the two cameras. If the composite error is greater than the error threshold, the adjustment process is iterated; if it is smaller than the threshold, the task ends and the cooperative control of the left and right double wedges over the visual axis is complete. The iteration process is shown in fig. 7.
[error and adjustment formulas presented as images in the original]

where (x_C, y_C) are the focal-length-normalized image coordinates of the target in the camera, v_in is the back-projected ray vector of the target in the camera, v_out is the outgoing ray vector of the target from the prism, N is the normal vector of the prism plane, n is the refractive index of the prism, g(x) is the outgoing-ray response function taking the double-wedge outgoing ray as variable, f(x) is the rotation-angle response function taking the double-wedge outgoing ray as variable, (θ_C1, θ_C2) are the current rotation angles of the double wedges, (θ_F1, θ_F2) are the adjusted rotation angles of the double wedges, (x_FL, y_FL) are the image coordinates of the target in the left camera after adjustment, (x_FR, y_FR) are those in the right camera after adjustment, (x_L, y_L) are the field-of-view center coordinates of the left camera, and (x_R, y_R) are those of the right camera.
Fig. 7 shows the flowchart of the double-optical-wedge visual axis tracking iterative algorithm, which comprises the following steps:
1) Set the iteration upper limit CountMax, the initial iteration step Step and the tracking accuracy BiasThresh, and initialize the iteration count Count = 0;
2) Adjust the azimuth angle of the camera visual axis;
3) If the current Bias is greater than BiasThresh, adjust the elevation angle of the camera visual axis, execute Count++, and proceed to the next step; otherwise, tracking has succeeded and the procedure ends;
4) If Bias > BiasThresh and Count ≤ CountMax, set Step = Step/2 and return to step 2); otherwise, proceed to the next step;
5) If Bias > BiasThresh still holds, the iteration limit has been exceeded and the procedure ends; otherwise, tracking has succeeded and the procedure ends.
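A compact sketch of this loop follows. The callbacks adjust_azimuth, adjust_elevation and get_bias are hypothetical stand-ins for the wedge drive commands and the composite image-error measurement, which the text does not name:

```python
def track_visual_axis(adjust_azimuth, adjust_elevation, get_bias,
                      count_max=20, step=1.0, bias_thresh=0.5):
    """Iterative two-axis visual-axis adjustment with step halving (fig. 7)."""
    count = 0
    while True:
        adjust_azimuth(step)                                  # step 2)
        if get_bias() <= bias_thresh:                         # step 3)
            return True                                       # tracking succeeded
        adjust_elevation(step)
        count += 1
        if get_bias() > bias_thresh and count <= count_max:   # step 4)
            step /= 2.0                                       # halve the step and retry
            continue
        return get_bias() <= bias_thresh                      # step 5)
```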
The foregoing describes preferred embodiments of the invention in detail. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the above teachings; therefore, technical solutions obtainable by those skilled in the art through logical analysis, reasoning or limited experiments based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A bionic binocular vision tracking device driven by an optical wedge group is characterized by comprising an upper computer (5), a camera, a rotary optical wedge device and a calibration component (6);
the camera comprises a left camera (1) and a right camera (3), and the optical axes of the left camera (1) and the right camera (3) are parallel; the camera is used for acquiring a target image;
the rotary optical wedge device comprises a left rotary optical wedge (2) device and a right rotary optical wedge (4) device, the left rotary optical wedge (2) device and the right rotary optical wedge (4) device are respectively arranged right in front of the left camera (1) and the right camera (3), and the rotary optical wedge device is used for adjusting an imaging visual axis of the cameras;
the imaging visual axes of the left camera (1) and the right camera (3) are respectively coaxial with the optical axes of the left rotating optical wedge (2) device and the right rotating optical wedge (4) device;
the calibration assembly (6) comprises a planar target (62) and a calibration plate (61); the calibration component (6) is arranged in a common field of view of the left camera (1) and the right camera (3); the calibration component (6) is used for calibrating internal and external parameters between the camera and the rotary optical wedge device;
and the upper computer (5) is respectively connected with the camera and the rotary optical wedge device.
2. The bionic binocular vision tracking device driven by the optical wedge set according to claim 1, wherein the rotary optical wedge device comprises an optical wedge assembly and a driving assembly;
the optical wedge assembly comprises a plurality of optical wedge elements;
the driving assembly is used for driving the optical wedge elements to rotate, thereby enlarging the field of view of the camera.
3. A bionic binocular vision tracking method driven by an optical wedge set comprises the following steps:
establishing an equivalence between the human eyes and the left and right optical wedge assemblies; based on the object-image mapping and imaging feedback mechanism of a target of interest in the binocular overlap region, and following the principle of continuous visual-axis adjustment, jointly controlling the left and right optical wedge assemblies to flexibly adjust the visual axes of the imaging sensing units and thereby simulate the binocular motion imaging mode;
establishing an equivalent dynamic virtual binocular camera model based on a reversible principle of an optical path according to the deflection characteristics of the left optical wedge component and the right optical wedge component to the camera imaging visual axis;
establishing the left and right camera coordinate systems, acquiring the internal parameters of the left and right cameras and the radial distortion coefficients of their lenses by the Zhang Zhengyou calibration method, and determining the relative pose parameters of the left and right cameras;
determining the zero position of the main section of each group of optical wedges by adopting an auto-collimation mutual calibration method, calibrating the alignment relation between a camera and an optical wedge group, and establishing a coordinate system of a left optical wedge assembly and a right optical wedge assembly;
calibrating internal parameters of the virtual camera, pose parameters between the virtual binocular cameras and imaging distortion of the dynamic virtual camera according to the dynamic virtual camera model;
placing a tracking target in the common field of view of the left and right cameras; the upper computer adjusts, through a visual-axis adjustment algorithm, the left and right optical wedge assemblies and the imaging visual axes of their matched cameras, so that the tracking target is located at the center of the common field of view of the binocular cameras;
using the distributed perception principle and cooperative control strategy of a mechanism imitating human-eye physiology, the upper computer adjusts, through a visual tracking algorithm, the imaging visual axes of the left and right optical wedge assemblies and their matched cameras, realizing the closed-loop feedback automatic image stabilization and fast dynamic target tracking of the bionic binocular vision system.
4. The bionic binocular vision tracking method driven by optical wedge sets according to claim 3, wherein the left optical wedge assembly and the right optical wedge assembly are jointly controlled to flexibly adjust visual axes of imaging sensing units, and the process of simulating binocular motion imaging comprises the following steps:
according to the deflection effect of the optical wedge assembly on the camera imaging visual axis, sequentially acquiring the imaging view angles generated by the optical wedge assembly under different rotation-angle/azimuth combinations, and changing the visual axis direction of the imaging unit by the independent rotation of each optical wedge around the optical axis;
according to the motion mode of the imaging target in the camera visual field, different optical wedge elements in the optical wedge assembly are given certain rotation angles and rotation speeds to synchronously rotate, so that the imaging visual axes of the left camera and the right camera are sequentially changed, and the tracking target is always in the common visual field range of the left camera and the right camera.
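To first order, the steering described in claim 4 follows the classic Risley-prism relation: each thin wedge deviates the visual axis by roughly delta = (n - 1)·alpha toward its main-section azimuth, and the two contributions add vectorially. A sketch under that thin-prism assumption (alpha and n are illustrative values):

```python
import numpy as np

def boresight_deflection(theta1, theta2, alpha=np.deg2rad(10.0), n=1.517):
    """First-order pointing of a two-wedge pair: vector sum of the per-wedge
    deviations delta = (n - 1) * alpha at azimuths theta1 and theta2."""
    delta = (n - 1.0) * alpha
    dx = delta * (np.cos(theta1) + np.cos(theta2))
    dy = delta * (np.sin(theta1) + np.sin(theta2))
    return np.hypot(dx, dy), np.arctan2(dy, dx)  # (magnitude, azimuth)
```

Co-rotating the wedges (theta1 = theta2) sweeps the axis around a cone of fixed deflection 2·delta, while counter-rotating them varies the magnitude between 0 and 2·delta — the two degrees of freedom the synchronized rotation in this claim exploits.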
5. The method for bionic binocular vision tracking driven by optical wedge sets according to claim 3, wherein the process of establishing the equivalent dynamic virtual binocular camera model comprises the following steps:
the visual axis direction of the virtual camera coincides with the deflected visual axis direction of the actual camera, and the optical center of the virtual camera lies at the intersection of the virtual camera's visual axis with the reverse extension of the back-traced ray;
constructing a pose transformation matrix of the virtual camera and the actual camera under the pointing directions of different visual axes according to the actual camera position and the optical wedge element parameters;
and (3) combining the optical center position and the space pose parameters of the virtual camera, and enabling the visual axis pointing sequences of the two fixed cameras to be equivalent to an infinite group of virtual binocular vision arrays, namely an equivalent dynamic virtual binocular camera model.
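The orientation part of this construction can be sketched as the rotation carrying the undeflected boresight onto the deflected visual axis; the virtual optical center would then be offset along the reverse-extended ray. A Rodrigues-formula sketch, with the +z axis assumed as the actual camera boresight:

```python
import numpy as np

def rotation_z_to(v):
    """Rotation matrix taking the +z axis onto the unit vector v (Rodrigues)."""
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, v)
    s, c = np.linalg.norm(axis), float(np.dot(z, v))
    if s < 1e-12:                      # v parallel or anti-parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```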
6. The method for bionic binocular vision tracking driven by optical wedge sets according to claim 3, wherein the process of determining the relative pose parameters of the left camera and the right camera comprises the following steps:
placing a calibration plate in a common visual field of the left camera and the right camera, and changing the pose of the calibration plate to enable the left camera and the right camera to respectively shoot to obtain a plurality of groups of left and right images;
and taking the left camera coordinate system as the world coordinate system, calculating from the left and right images the internal parameters of the left and right cameras, the radial distortion coefficients of their lenses, and the relative pose of the two cameras.
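This claim maps naturally onto the standard OpenCV calibration pipeline. A sketch, assuming image_pairs is a list of (left, right) grayscale checkerboard views and a hypothetical 9×6 board with 20 mm squares:

```python
import cv2
import numpy as np

pattern, square = (9, 6), 20.0  # inner corners and square size (mm), assumed
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for img_l, img_r in image_pairs:
    ok_l, c_l = cv2.findChessboardCorners(img_l, pattern)
    ok_r, c_r = cv2.findChessboardCorners(img_r, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp); left_pts.append(c_l); right_pts.append(c_r)

size = image_pairs[0][0].shape[::-1]  # (width, height)
_, K_l, D_l, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K_r, D_r, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# Relative pose (R, T) of the right camera in the left-camera (world) frame.
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K_l, D_l, K_r, D_r, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```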
7. The method for bionic binocular vision tracking through optical wedge group driving according to claim 3, wherein the process of determining the zero position of the main section of each optical wedge group and calibrating the alignment relation between the camera and the optical wedge group comprises the following steps:
determining the zero positions of the main sections of the left and right optical wedge assemblies by an auto-collimation mutual calibration method for reference positioning, adjusting the reticles of the two collimators so that the horizontal reticle lines fall in the main section of the reference optical wedge, and recording the positions;
replacing the reference optical wedge with an optical wedge group to be calibrated for assembly and calibration, and calibrating to obtain the initial position of each optical wedge main section in the optical wedge group by taking a collimator reticle as a reference;
the left optical wedge component and the right optical wedge component with calibrated main sections are coaxially installed with the left camera and the right camera, the optical wedge group device in front of the left camera is removed, planar targets are randomly placed in a common view field of the left camera and the right camera, and direct-view imaging of the left camera and refraction imaging of the right camera through the double optical wedges are respectively controlled;
based on the pose relationship between the left and right cameras, extracting the reference coordinates of the calibration points from the direct-view image of the left camera and transforming them into the right camera coordinate system, and predicting the three-dimensional coordinates of the calibration points from the refraction imaging result of the right camera;
and determining the alignment relation between the right camera and the optical wedge set thereof by minimizing the deviation between the reference datum and the model prediction, and calibrating the alignment relation between the left camera and the optical wedge set thereof by the same method.
8. The method for bionic binocular vision tracking driven by optical wedge sets according to claim 3, wherein the process of calibrating the internal parameters of the virtual cameras, the pose parameters between the virtual binocular cameras and the imaging distortion of the dynamic virtual cameras comprises the following steps:
according to the image mapping relation between the virtual camera and the actual camera, any point on the virtual image surface corresponds to a point which is subjected to lens distortion correction on the actual image surface, and a pinhole imaging model of the left virtual camera and the right virtual camera is established according to a geometric optics theory;
respectively deducing the imaging visual axis directions and projection center positions of the virtual cameras corresponding to the left camera and the right camera by combining the relative pose relationship of the left camera and the right camera and the control rule of the optical wedge group on the imaging visual axis directions to obtain a coordinate transformation relationship and relative pose parameters between the left virtual camera and the right virtual camera;
deducing a mathematical mapping relation between a virtual image surface and an actual image surface by adopting a virtual camera projection light ray reverse tracing method, and establishing a nonlinear imaging distortion fitting model;
establishing an optimized objective function related to the imaging distortion coefficient of the virtual camera by using the superposition constraint of the projection light of the virtual camera and the initial incident light of the optical wedge group;
and forming an optimization objective function, taking the multi-order distortion coefficients as variables and the minimum ray-tracing deviation as the criterion, according to the distribution and evolution characteristics of the imaging distortion under different prism rotation-angle combinations.
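A minimal sketch of such an objective with scipy, assuming a radial polynomial model x_d = x·(1 + k1·r² + k2·r⁴ + k3·r⁶); virtual_pts and traced_pts stand in for the undistorted virtual-image points and their counterparts obtained by reverse ray tracing at one rotation-angle combination (synthetic data keeps the sketch runnable):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(k, virtual_pts, traced_pts):
    """Ray-tracing deviation for radial distortion coefficients k = (k1, k2, k3)."""
    r2 = np.sum(virtual_pts**2, axis=1, keepdims=True)
    factor = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    return (virtual_pts * factor - traced_pts).ravel()

# Synthetic stand-in data: a known k1 = 0.1 distortion to be recovered.
rng = np.random.default_rng(0)
virtual_pts = rng.uniform(-0.5, 0.5, size=(200, 2))
traced_pts = virtual_pts * (1.0 + 0.1 * np.sum(virtual_pts**2, axis=1, keepdims=True))

# Multi-order coefficients as variables, minimum ray-tracing deviation as criterion.
fit = least_squares(residuals, x0=np.zeros(3), args=(virtual_pts, traced_pts))
k1, k2, k3 = fit.x
```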
9. The method for bionic binocular vision tracking driven by optical wedge sets according to claim 3, wherein the dynamic target fast tracking process comprises:
establishing a space-time coupling constraint relation of a binocular image sequence according to the motion trail description and the projection transformation matrix of the virtual binocular camera, and constructing a visual axis adjustment algorithm of different bionic binocular motion modes;
extracting SURF feature motion flow fields from the binocular camera image sequence, and performing foreground region segmentation and dynamic target identification in combination with a visual saliency model and a spatial information guidance mechanism;
estimating target pose parameters by adopting a recursive least square algorithm and establishing a motion trail prediction model thereof to obtain image deviation information of a current target relative to the center of a left camera and a right camera view field;
estimating the absolute target positioning information from the left and right image deviations by the triangulation principle, and solving the rotation-angle parameters of the left and right optical wedge assemblies through the visual tracking algorithm and the optical wedge group inverse solution algorithm, so that the centers of both fields of view lock the target synchronously;
and substituting the rotation-angle parameters of the optical wedge groups into the state equation of the control system, generating a joint optimal estimate of control quantities such as the angular velocities and angular accelerations of the left and right optical wedge assemblies under the energy constraint of dual-field-of-view cooperative control, establishing a closed-loop flow, and realizing the target tracking function.
10. The method of claim 9, wherein the inverse solution algorithm for the set of optical wedges is one of a table lookup method, an approximation method or an iteration method.
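Of the three families named in claim 10, the approximation method admits a closed form under the first-order thin-prism model (delta = (n - 1)·alpha). A sketch, with alpha and n again illustrative:

```python
import numpy as np

def inverse_first_order(mag, azim, alpha=np.deg2rad(10.0), n=1.517):
    """Approximation-method inverse solution: rotation angles (theta1, theta2)
    whose summed first-order deviations yield a deflection (mag, azim)."""
    delta = (n - 1.0) * alpha
    if mag > 2.0 * delta:
        raise ValueError("requested deflection exceeds the reachable range")
    half = np.arccos(mag / (2.0 * delta))
    return azim + half, azim - half
```

A table-lookup variant precomputes the (theta1, theta2) → deflection map on a grid and inverts it by nearest-neighbor search, while an iterative variant refines this first-order seed against an exact ray trace by Newton or damped fixed-point steps.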
CN202211338408.0A 2022-10-28 2022-10-28 Bionic binocular vision tracking device and method driven by optical wedge set Pending CN115713545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211338408.0A CN115713545A (en) 2022-10-28 2022-10-28 Bionic binocular vision tracking device and method driven by optical wedge set


Publications (1)

Publication Number Publication Date
CN115713545A (en) 2023-02-24

Family

ID=85231594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211338408.0A Pending CN115713545A (en) 2022-10-28 2022-10-28 Bionic binocular vision tracking device and method driven by optical wedge set

Country Status (1)

Country Link
CN (1) CN115713545A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883516A (en) * 2023-09-07 2023-10-13 西南科技大学 Camera parameter calibration method and device
CN116883516B (en) * 2023-09-07 2023-11-24 西南科技大学 Camera parameter calibration method and device

Similar Documents

Publication Publication Date Title
CN109859275B (en) Monocular vision hand-eye calibration method of rehabilitation mechanical arm based on S-R-S structure
WO2018076154A1 (en) Spatial positioning calibration of fisheye camera-based panoramic video generating method
CN109242914B (en) Three-dimensional calibration method of movable vision system
CN109323650B (en) Unified method for measuring coordinate system by visual image sensor and light spot distance measuring sensor in measuring system
CN108489398B (en) Method for measuring three-dimensional coordinates by laser and monocular vision under wide-angle scene
CN113175899B (en) Camera and galvanometer combined three-dimensional imaging model of variable sight line system and calibration method thereof
CN110363838B (en) Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109285189B (en) Method for quickly calculating straight-line track without binocular synchronization
CN111080705B (en) Calibration method and device for automatic focusing binocular camera
CN111854636B (en) Multi-camera array three-dimensional detection system and method
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
US9052585B2 (en) Control system for stereo imaging device
CN110849269A (en) System and method for measuring geometric dimension of field corn cobs
CN115713545A (en) Bionic binocular vision tracking device and method driven by optical wedge set
CN115638726A (en) Fixed sweep pendulum type multi-camera vision measurement method
Deng et al. Equivalent virtual cameras to estimate a six-degree-of-freedom pose in restricted-space scenarios
CN111583117A (en) Rapid panoramic stitching method and device suitable for space complex environment
CN110766752A (en) Virtual reality interactive glasses with reflective mark points and space positioning method
CN107806861B (en) Inclined image relative orientation method based on essential matrix decomposition
Zou et al. Flexible Extrinsic Parameter Calibration for Multicameras With Nonoverlapping Field of View
CN111553955B (en) Multi-camera three-dimensional system and calibration method thereof
CN112804515A (en) Omnidirectional stereoscopic vision camera configuration system and camera configuration method
TWI725620B (en) Omnidirectional stereo vision camera configuration system and camera configuration method
Spacek Omnidirectional catadioptric vision sensor with conical mirrors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination