CN118129696B - Multi-target distance measurement method and system based on double compound eye imaging - Google Patents

Multi-target distance measurement method and system based on double compound eye imaging

Info

Publication number
CN118129696B
CN118129696B (granted publication of application CN202410561239.XA)
Authority
CN
China
Prior art keywords
sub
camera unit
eye
eye camera
compound eye
Prior art date
Legal status
Active
Application number
CN202410561239.XA
Other languages
Chinese (zh)
Other versions
CN118129696A (en)
Inventor
鱼卫星
刘一鸣
许黄蓉
张远杰
武登山
周晓军
Current Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202410561239.XA
Publication of CN118129696A
Application granted
Publication of CN118129696B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20228 Disparity calculation for image-based rendering
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention relates to an imaging ranging method and system, in particular to a multi-target ranging method and system based on double compound eye imaging. It aims to overcome the defects of the prior art, in which ranging with a binocular camera cannot measure multiple targets in a scene simultaneously and cannot meet the requirements of wide-area target detection. A corresponding multi-target ranging system, comprising a left compound eye camera unit and a right compound eye camera unit, is also provided to implement the multi-target ranging method.

Description

Multi-target distance measurement method and system based on double compound eye imaging
Technical Field
The invention relates to an imaging ranging method and system, in particular to a multi-target ranging method and system based on double compound eye imaging.
Background
In nature, arthropods such as insects often have a vision system that differs from the single-aperture eye of vertebrates: the compound eye, composed of many small sub-eyes. Compared with the single-aperture human eye, a compound eye vision system offers a larger field of view, more sensitive perception of moving targets, low distortion, and infinite depth of field. In the field of three-dimensional stereoscopic positioning, biomimetic compound eye imaging systems derived from the infinite-depth-of-field characteristic of the compound eye can accurately acquire depth information of targets in object space.
At present, single-aperture imaging systems are widely used in optical imaging. They offer high imaging spatial resolution, but their field of view is limited and they cannot achieve wide-area target monitoring. Fisheye lenses achieve large-field-of-view imaging, but with severe distortion that grows from the image centre outward, requiring complex post-processing algorithms and placing very high demands on camera calibration. Imaging ranging systems mostly use structured-light cameras or binocular cameras, which suffer from restricted scenes or a small ranging range. Moreover, ranging with a binocular camera requires a complex stereo rectification process, cannot measure multiple targets in a scene simultaneously, and cannot meet the requirements of wide-area target detection.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which ranging with a binocular camera requires a complex stereo rectification process, cannot measure multiple targets in a scene simultaneously, and cannot meet wide-area target detection requirements, and provides a multi-target ranging method and system based on double compound eye imaging.
In order to achieve the above purpose, the technical solution provided by the present invention is as follows:
The multi-target distance measurement method based on double compound eye imaging is characterized by comprising the following steps:
step 1, optical axis correction
Selecting a double compound eye imaging system comprising a left compound eye camera unit and a right compound eye camera unit; adjusting the left and right compound eye camera units so that the optical axes of their central sub-eyes are parallel, and obtaining the distance between the central sub-eye optical axes of the left and right compound eye camera units;
Step 2, searching and matching the characteristic points
Dividing the captured compound eye images to obtain each sub-eye image and performing distortion correction on each sub-eye image; recording the coordinate information of the common feature points of the central sub-eyes of the left and right compound eye camera units; screening the sub-eye images containing the common feature points and recording the corresponding number of sub-eye images, sub-eye numbers, and coordinate information of the common feature points;
the coordinate information is a coordinate under a distortion correction image coordinate system; the common feature points are the same feature points appearing in different sub-eyes;
Step 3, data processing
Calculating the left-right parallax of each common feature point in different sub-eye pairs; respectively calculating the distance from each common feature point to the line connecting the centres of the central sub-eyes of the left and right compound eye camera units, obtaining a plurality of ranging sample points for each common feature point; removing extreme points among the ranging sample points to obtain the ranging result of each common feature point, thereby completing the distance measurement of multiple targets.
Further, after the ranging result is obtained in step 3, the method further includes: calculating the relative ranging error and recording the ranging duration.
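The patent does not spell out how the relative ranging error is computed; a minimal sketch, assuming the conventional definition of relative error against a known ground-truth distance, could look like this:

```python
def ranging_relative_error(measured_mm: float, true_mm: float) -> float:
    """Relative ranging error as a fraction of the ground-truth distance.

    Assumed definition |Z_meas - Z_true| / Z_true; the patent only names
    the quantity, so this formula is an illustration, not its wording.
    """
    return abs(measured_mm - true_mm) / true_mm

# e.g. a 5.06 m estimate against a 5.00 m ground truth gives 1.2 %
print(f"{ranging_relative_error(5060, 5000):.3%}")
```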
Further, the step 1 specifically comprises the following steps:
1.1. the position and the angle of the left compound eye camera unit are adjusted, so that any two feature points in a target scene appear at the center sub-eye of the left compound eye camera unit, and the left compound eye camera unit is fixed;
1.2. Adjusting the position and angle of the right compound eye camera unit so that the two feature points of step 1.1 simultaneously appear in the central sub-eye of the right compound eye camera unit, and calculating the distance between the central sub-eye optical axes from the positions of the left and right compound eye camera units;
1.3. Calculating theoretical coordinate values of two target feature points in a central sub-eye of the right compound eye camera unit when the optical axes are parallel; and then adjusting the angle of the right compound eye camera unit to enable the actual coordinate values of the two target feature points in the central sub-eyes of the right compound eye camera unit to be consistent with the theoretical coordinate values, wherein the optical axes of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit are parallel.
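The theoretical coordinates of step 1.3 can be sketched under a pinhole-camera assumption (this is an illustration, not the patent's derivation): with parallel optical axes and a horizontal baseline b, a feature at depth Z seen at x_L in the left central sub-eye should appear shifted by the stereo disparity d = f·b/Z in the right one.

```python
def theoretical_right_coord(x_left_px: float, f_px: float,
                            baseline_mm: float, depth_mm: float) -> float:
    """Expected x-coordinate of a feature in the right central sub-eye
    when the two central optical axes are exactly parallel.

    Pinhole-model sketch: the feature shifts by the stereo disparity
    d = f * b / Z relative to its left-image coordinate.
    """
    disparity_px = f_px * baseline_mm / depth_mm
    return x_left_px - disparity_px

# A feature at x = 320 px in the left centre sub-eye, f = 800 px,
# baseline 500 mm, depth 10 m, lands at x = 280 px on the right.
print(theoretical_right_coord(320.0, 800.0, 500.0, 10_000.0))
```

Adjusting the right camera until the measured coordinates match these theoretical values is what makes the two central optical axes parallel.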
Further, the step 2 specifically comprises:
2.1. shooting a target scene by using a left compound eye camera unit and a right compound eye camera unit, dividing the shot image to obtain images of all sub eyes, and carrying out distortion correction on the sub eye images;
2.2. Taking the distortion-corrected images of the central sub-eyes of the left and right compound eye camera units as the images to be matched, performing feature point matching on the two central sub-eye distortion-corrected images, removing mismatched feature points, and recording the coordinate information of all common feature points in the distortion-corrected central sub-eye images of the left and right compound eye camera units;
2.3. Respectively carrying out characteristic point matching on the other sub-eyes of the left compound eye camera unit by taking the central sub-eye of the left compound eye camera unit as a reference, and acquiring characteristic point coordinate information matched with the central sub-eye characteristic point of the left compound eye camera unit in the other sub-eyes;
2.4. respectively carrying out characteristic point matching on the other sub-eyes of the right compound eye camera unit by taking the central sub-eye of the right compound eye camera unit as a reference, and acquiring characteristic point coordinate information matched with the central sub-eye characteristic point of the right compound eye camera unit in the other sub-eyes;
2.5. Searching whether common characteristic points exist in the rest sub-eye images of the left compound eye camera unit and the right compound eye camera unit or not by taking the common characteristic points of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit as references, and respectively counting the number of the sub-eye images of each common characteristic point in the left compound eye camera unit and the right compound eye camera unit, wherein the common characteristic points appear in m sub-eye images of the left compound eye camera unit and n sub-eye images of the right compound eye camera unit;
2.6. feature point screening
Setting a ranging-sample-point threshold, removing common feature points whose m × n value is smaller than the threshold, and taking the remaining common feature points as effective feature points;
2.7. And recording the serial numbers of the sub-eye images corresponding to the effective feature points and the coordinate information of the sub-eye images.
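The screening rule of step 2.6 can be sketched as follows (variable names are illustrative, not from the patent): a common feature point seen in m left sub-eyes and n right sub-eyes yields m × n ranging sample points, and points below the threshold are dropped.

```python
def screen_feature_points(counts: dict, threshold: int) -> dict:
    """Step 2.6 sketch: keep only common feature points whose number of
    ranging sample points m * n reaches the threshold.

    `counts` maps a feature-point id to (m, n): the number of left and
    right sub-eye images in which the point appears.
    """
    return {fid: (m, n) for fid, (m, n) in counts.items() if m * n >= threshold}

# Point 'A' appears in 7 left and 7 right sub-eyes (49 samples, kept);
# point 'B' only in 2 and 3 (6 samples, dropped at threshold 10).
valid = screen_feature_points({"A": (7, 7), "B": (2, 3)}, threshold=10)
print(sorted(valid))
```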
Further, the step 3 specifically comprises:
3.1. Selecting one sub-eye from each of the left and right compound eye camera units corresponding to an effective feature point from step 2.7; from the coordinate information of the effective feature point in each sub-eye image and the position coordinates of the corresponding sub-eye principal point, respectively calculating the distances X_L and X_R between the effective feature point and the sub-eye principal point in the sub-eye images of the left and right compound eye camera units; the difference X_L − X_R is the left-right parallax of the effective feature point;
3.2. Calculating the rotation matrix and translation matrix of each sub-eye principal point of the right compound eye camera unit relative to the central sub-eye principal point of the left compound eye camera unit, and, combined with the distance between the central sub-eye optical axes, calculating the baseline width b of the sub-eye pair selected in step 3.1; the distance Z between the effective feature point and the line connecting the centres of the central sub-eyes of the left and right compound eye camera units is then calculated as
Z = (b × f) / (X_L − X_R),
where f is the average focal length of the two sub-eyes selected in step 3.1 and X_L − X_R is the parallax from step 3.1;
3.3. Sequentially combining m sub-eye images of the left compound eye camera unit and each sub-eye in n sub-eye images of the right compound eye camera unit, repeating the steps 3.1-3.2, and calculating the distance from the same effective characteristic point to the central connecting line of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit to obtain m multiplied by n ranging sample points of the effective characteristic point;
3.4. Taking the central sub-eye of the left compound eye camera unit as a reference, setting the left-right sub-eye parallax variation range to ±(40 ± 5) pixels, and plotting the relationship between parallax and the distance from the effective feature point to the line connecting the centres of the central sub-eyes of the left and right compound eye camera units;
3.5. Setting the ratio of removed minima to removed maxima in data processing as K and determining a removal base B% from the number m × n of ranging sample points actually obtained each time; the number of removed maxima is m × n × B% and the number of removed minima is m × n × K × B%;
3.6. Obtaining an average value of the ranging sample points after the extremum is removed, and obtaining a ranging result of the effective characteristic points;
3.7. Repeating the steps 3.1-3.6, and calculating the distance measurement result one by one for the effective feature points screened in the step 2.6 to finish the distance measurement of a plurality of targets.
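Steps 3.1 to 3.6 can be sketched numerically (a sketch under the assumptions above, with illustrative values, not the patent's implementation): each left/right sub-eye pair gives one distance estimate via Z = b·f/(X_L − X_R), and the extremum-trimmed mean of the m × n samples is the ranging result.

```python
def distance_from_pair(x_left: float, x_right: float,
                       baseline: float, focal_len: float) -> float:
    """Steps 3.1-3.2 sketch: range from one left/right sub-eye pair via
    the parallax X_L - X_R and the pair's baseline width b, using the
    standard stereo relation Z = b * f / (X_L - X_R)."""
    return baseline * focal_len / (x_left - x_right)

def trimmed_ranging_result(samples, base_pct: float, k_ratio: float) -> float:
    """Steps 3.5-3.6 sketch: drop the top base_pct fraction of maxima and
    k_ratio * base_pct fraction of minima, then average what remains.
    base_pct is B%/100; k_ratio is the patent's K (minima-to-maxima
    removal ratio)."""
    s = sorted(samples)
    n = len(s)
    n_max = int(n * base_pct)            # largest values removed
    n_min = int(n * k_ratio * base_pct)  # smallest values removed
    kept = s[n_min:n - n_max]
    return sum(kept) / len(kept)

# One pair: baseline 500 mm, f = 800 px, parallax 40 px -> Z = 10 m.
print(distance_from_pair(320.0, 280.0, 500.0, 800.0))

# Nine sample points (mm) with one low and one high outlier; removing
# one from each end (B% = 12 %, K = 1) recovers a value near 5000 mm.
samples = [3000, 4950, 4980, 5000, 5010, 5020, 5040, 5060, 9000]
print(round(trimmed_ranging_result(samples, base_pct=0.12, k_ratio=1.0)))
```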
Further, step 1 further comprises a calibration step:
The center coordinates and the radius of each sub-eye in the left compound eye camera unit and the right compound eye camera unit are obtained through calibration; the method comprises the following steps: shooting a whiteboard by using a left compound eye camera unit and a right compound eye camera unit, acquiring an original whiteboard compound eye image, detecting a circular outline of each sub-eye image in the original whiteboard compound eye image, acquiring the circle center coordinates and the radius of each sub-eye image, and numbering the sub-eye images;
Obtaining internal parameters and external parameters of each sub-eye in the left compound-eye camera unit and the right compound-eye camera unit through calibration; the method comprises the following steps: and shooting the calibration plate at multiple angles by using the left compound eye camera unit and the right compound eye camera unit, obtaining a plurality of calibration information images with different angles by each sub-eye, and processing the calibration information images to obtain the internal parameters and the external parameters of each sub-eye in the left compound eye camera unit and the right compound eye camera unit.
Further, in the calibration step, the calibration plate is a checkerboard calibration plate, and the calibration information images are processed by the Zhang Zhengyou calibration method in the MATLAB calibration toolbox;
Alternatively, the calibration plate is a CALTag calibration plate, and the calibration information images are processed by a self-identifying calibration method.
Meanwhile, the invention also provides a multi-target ranging system based on double-compound-eye imaging, which is used for realizing the multi-target ranging method based on double-compound-eye imaging, and is characterized in that:
The device comprises a motion supporting unit, two compound eye camera units which are arranged on the motion supporting unit in parallel and a data processing unit; the two compound eye camera units can move relative to the movement supporting unit, and the output end of the compound eye camera unit is connected with the data processing unit;
The compound eye camera unit comprises a compound eye lens array, an optical relay image transfer subsystem, and a large area array image sensor arranged sequentially along the incident light;
the compound eye lens array comprises a support shell and a plurality of sub-eyes arranged on the support shell, wherein the field angle of the sub-eyes is larger than the included angle of the optical axes of the adjacent sub-eyes;
the optical relay image transfer subsystem is used for converting a primary curved surface image formed by the compound eye lens array into a secondary plane image;
the large area array image sensor is used for receiving the secondary plane image, and the output end of the large area array image sensor is connected with the data processing unit;
the data processing unit is used for receiving the image shot by the compound eye camera unit and processing the image to obtain a ranging result.
Further, the motion supporting unit comprises a guide rail assembly and two groups of supporting assemblies; the two groups of support components are respectively connected with the corresponding compound eye camera units to realize the movable support, fixation and optical axis adjustment of the compound eye camera units;
The supporting component comprises a clamping component, a rotating component and a sliding component which are sequentially connected;
The clamping component comprises a first clamping block and a second clamping block, the first clamping block and the second clamping block are oppositely arranged, gaps matched with compound eye camera units are correspondingly formed in the butt joint surfaces, the compound eye camera units are arranged in the corresponding gaps, and the first clamping block and the second clamping block are fixedly connected through bolts;
the sliding part comprises an I-shaped base and a connecting piece arranged above the I-shaped base, and the connecting piece is fixedly connected with the rotating part;
The upper surface of the guide rail component is provided with a groove matched with the I-shaped base, the two groups of support components are respectively connected in the groove through sliding components and used for realizing the movement of the two compound-eye camera units, and scales are marked on the guide rail and used for recording the lengths of the central connecting lines of the central sub-eyes of the left compound-eye camera unit and the right compound-eye camera unit;
The rotating component is of a rotating mechanical structure with multiple degrees of freedom, and the upper end of the rotating component is connected with the second clamping block of the clamping component.
Further, the compound eye lens arrays are hexagonal, and long diagonal lines of the compound eye lens arrays in the two compound eye camera units are perpendicular to each other;
the angle of view of the sub-eyes is 10-40 degrees, and the included angle of the optical axes of adjacent sub-eyes is 5-15 degrees.
The invention has the beneficial effects that:
1. The multi-target ranging method is based on curved surface bionic double compound eye imaging, realizes larger visual field and small image distortion, utilizes the curved surface bionic compound eye to enable a plurality of sub eyes to image an object space, has overlapping visual fields between adjacent sub eyes, can simultaneously observe the same target in a scene by the plurality of sub eyes, and can simultaneously acquire a plurality of sub eye images with the same target at least through one imaging.
2. The multi-target ranging method adopts a multi-sub-eye ranging algorithm that can simultaneously range multiple targets at different distances, including medium- and long-range targets within a large field of view; unlike a common binocular camera, it requires no complex stereo rectification process, giving higher ranging efficiency and broad application prospects in wide-area monitoring, key-area security, target recognition and tracking, and similar fields.
3. According to the multi-target ranging method, the base line width can be adjusted according to the actual distance of the target, compared with a single compound eye camera, the working distance is greatly expanded, common characteristic points in different sub eyes are matched by using a characteristic point matching algorithm, unreasonable common characteristic points and ranging sample points are removed by setting the threshold value and the extremum proportion of the ranging sample points, and the computing efficiency can be further improved while the ranging precision is ensured.
4. The multi-target ranging system is provided with the left compound eye camera unit and the right compound eye camera unit, and obtains a sufficient number of sub-eye images after one-time imaging, so that the multi-target ranging system is convenient and simple to use.
5. By setting the sub-eye field angle to 10°-40° and the optical axis included angle of adjacent sub-eyes to 5°-15°, any object-space point within the imaging range is captured by at least seven sub-eyes, so at least 49 ranging sample points can be obtained for each common feature point, equivalent to 49 trials of a binocular ranging system; the cumbersome stereo rectification of left and right images is avoided, greatly improving ranging efficiency.
Drawings
FIG. 1 is a schematic diagram of a dual compound eye imaging system in accordance with an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a compound eye camera unit according to an embodiment of the present invention;
FIG. 3 is an exploded schematic view of a support assembly in accordance with an embodiment of the present invention;
fig. 4 is a schematic flow chart of a ranging method step 4 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a ranging method step 5.3 according to an embodiment of the present invention;
Reference numerals illustrate:
the device comprises a 1-compound eye camera unit, a 11-compound eye lens array, a 111-supporting shell, a 112-sub eye, a 12-optical relay image transferring subsystem, a 13-large area array image sensor, a 2-motion supporting unit, a 21-clamping component, a 211-first clamping block, a 212-second clamping block, a 213-bolt, a 22-rotating component, a 23-sliding component, a 231-I-shaped base, a 232-connecting piece and a 3-guide rail component.
Detailed Description
The structure of the multi-target ranging system based on double compound eye imaging is shown in fig. 1. It comprises two compound eye camera units 1 arranged in parallel, both connected to a motion supporting unit 2. The motion supporting unit 2 realizes movable support, fixing, and optical axis adjustment of the compound eye camera units 1, and comprises a guide rail assembly 3 and two groups of support assemblies corresponding to the two compound eye camera units 1.
As shown in fig. 2, the compound eye camera unit 1 includes a compound eye lens array 11, an optical relay image transfer subsystem 12, and a large area array image sensor 13 arranged sequentially along the incident light direction. The compound eye lens array 11 includes a plurality of sub-eyes 112 and a supporting housing 111; in this embodiment the supporting housing 111 is hemispherical, and the sub-eyes 112 are preferably arranged hexagonally on the spherical shell surface to acquire object-space position and light intensity information and form a primary curved image; the hexagonal arrangement increases the proportion of useful information obtained.
Except for the orientations of their compound eye lens arrays 11, the two compound eye camera units 1 have identical parameters. In this embodiment, a long diagonal of one hexagonally arranged compound eye lens array 11 is perpendicular to a long diagonal of the other, further enlarging the field of view of the double compound eye imaging system. The field angle ΔΦ of a single sub-eye 112 is larger than the included angle Δφ between the optical axes of adjacent sub-eyes 112, so that large field-of-view overlaps arise between sub-eyes 112, any object-space point is captured by more sub-eyes 112, and the ranging result is more accurate. In this embodiment, the field angle ΔΦ of a single sub-eye 112 is 24° and the optical axis included angle Δφ of adjacent sub-eyes 112 is 7°, so that any object-space point within the imaging range is captured by at least seven sub-eyes 112; other angles may be used in other embodiments, but considering ranging efficiency and accuracy, the sub-eye field angle ΔΦ is preferably 10°-40° and the optical axis included angle of adjacent sub-eyes 112 is preferably 5°-15°. The surface of the supporting housing 111 is provided with stepped holes, and the lenses of the sub-eyes 112 are fixed in the corresponding stepped holes by rubber press rings. The optical relay image transfer subsystem 12 consists of 6-13 lenses and converts the primary curved image formed by the compound eye lens array 11 into a secondary planar image, reducing distortion and aberration in the process. The large area array image sensor 13 is a CMOS image sensor that receives the secondary planar image.
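The claim that a 24° sub-eye field angle with a 7° axis spacing lets any point be captured by at least seven sub-eyes can be checked with a small-angle tangent-plane model (a geometric sketch, not the patent's optical derivation): optical axes form a hexagonal grid with 7° pitch, and a sub-eye sees a point when its axis lies within half the field angle of the viewing direction.

```python
import math

def subeyes_seeing_point(fov_deg: float = 24.0,
                         axis_pitch_deg: float = 7.0,
                         rings: int = 3) -> int:
    """Count sub-eyes whose field of view covers a point lying on the
    central sub-eye's optical axis (small-angle tangent-plane sketch).

    Optical axes are modelled as a hexagonal grid with angular pitch
    axis_pitch_deg; a sub-eye sees the point if its axis is within
    fov_deg / 2 of the viewing direction.
    """
    half_fov = fov_deg / 2.0
    count = 0
    for i in range(-rings, rings + 1):
        for j in range(-rings, rings + 1):
            # axial hex coordinates mapped to a planar offset in degrees
            x = axis_pitch_deg * (i + 0.5 * j)
            y = axis_pitch_deg * (math.sqrt(3) / 2.0) * j
            if math.hypot(x, y) <= half_fov:
                count += 1
    return count

# With the embodiment's 24 deg field angle and 7 deg axis spacing, the
# centre sub-eye plus its six ring-1 neighbours cover the point: 7 eyes,
# hence 7 x 7 = 49 ranging sample points per common feature point.
print(subeyes_seeing_point())
```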
As shown in fig. 3, the support assembly includes a clamping member 21, a rotating member 22, and a sliding member 23 connected in sequence. The clamping member 21 comprises a first clamping block 211 and a second clamping block 212 arranged opposite each other, with matching gaps for the compound eye camera unit 1 formed in their mating surfaces; the compound eye camera unit 1 sits in the corresponding gap, and the first clamping block 211 and second clamping block 212 are fastened together by bolts 213. The rotating member 22 is a multi-degree-of-freedom rotating mechanical structure used to pitch, tilt left-right, and rotate the compound eye camera unit 1 in-plane; its upper end is connected to the second clamping block 212 of the clamping member 21. The sliding member 23 includes an I-shaped base 231 and a connecting piece 232 arranged above it, the connecting piece 232 being fixedly connected to the rotating member 22. The upper surface of the guide rail assembly 3 has a groove matching the I-shaped base 231; the two support assemblies are connected in the groove via their sliding members 23 to allow the two compound eye camera units 1 to move, and the front surface of the guide rail carries scales so that the baseline width of the double compound eye imaging system can be read off at any time. In this embodiment, the total scale length of the guide rail assembly 3 is 6.8 m. The baseline of the double compound eye imaging system is the line joining the optical centres of the central sub-eyes of the left and right compound eye camera units 1, and the length of this line is the baseline width.
The multi-target ranging system further comprises a data processing unit, wherein the data processing unit is connected with the output end of the large-area array image sensor 13, receives the image shot by the compound eye camera unit 1, and processes the image to obtain a ranging result.
The invention discloses a multi-target ranging method based on double compound eye imaging, which comprises the steps of sub-eye image calibration and numbering, compound eye camera unit 1 calibration, optical axis correction, characteristic point search matching and data processing, and specifically comprises the following steps:
Step 1, sub-eye image calibration and numbering
Photographing a whiteboard with the left and right compound eye camera units 1 respectively to obtain the original whiteboard compound eye images of the left and right compound eye cameras;
detecting the circular outline of each sub-eye image in the acquired original whiteboard compound eye image, and recording the centre coordinates (X_n, Y_n) and radius R_n of each sub-eye image;
In this embodiment, a hough circle detection algorithm is used to detect the circular contour of the sub-eye image, and in other embodiments of the present invention, a canny edge detection and contour tracking algorithm may be used to implement the circular contour detection of the sub-eye image;
numbering the sub-eyes 112 according to the radius R_n and the angle of the line joining each circle center (X_n, Y_n) to the center of the compound eye image;
Dividing the sub-eye images according to the center coordinates (X_n, Y_n) and radii R_n, and renaming them accordingly as L_i or R_j, where i and j are the numbers of the sub-eyes 112 in the left and right compound eyes respectively; in this embodiment, i = 1, 2, …, 127 and j = 1, 2, …, 127;
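The numbering and renaming of step 1 can be sketched as below. The circle centers are assumed to have already been detected (e.g. by Hough circle detection); the exact sort-key combination of radius and angle is an illustrative assumption, not the patent's precise tie-breaking rule.

```python
import numpy as np

def number_sub_eyes(circles, image_center):
    """Order detected sub-eye circles for numbering.

    circles: iterable of (X_n, Y_n, R_n) circle-center coordinates and radius,
    e.g. as returned by a Hough circle detector.
    image_center: (x, y) center of the whole compound eye image.
    Sub-eyes are sorted by radius, then by the angle of the line joining
    each circle center to the compound eye image center."""
    cx0, cy0 = image_center

    def key(circle):
        x, y, r = circle
        angle = np.arctan2(y - cy0, x - cx0) % (2.0 * np.pi)
        return (round(r), angle)

    return sorted(circles, key=key)

def rename(circles, side="L"):
    """Index 0 becomes L_1 (or R_1), index 1 becomes L_2, and so on."""
    return {f"{side}_{i + 1}": c for i, c in enumerate(circles)}
```

A center sub-eye would then simply be the entry named L_1 or R_1 after renaming.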
Step 2, compound eye camera unit 1 calibration
The left and right compound eye camera units 1 are used to shoot a checkerboard calibration plate at multiple angles, so that each sub-eye 112 obtains several calibration information images at different angles; the more calibration information images each sub-eye 112 obtains, the higher the camera calibration precision;
processing the acquired sub-eye 112 calibration information images with the Zhang Zhengyou calibration method in the MATLAB calibration toolbox to obtain the internal and external parameters of each sub-eye 112 of the left and right compound eye camera units 1;
the internal parameters of each sub-eye 112 include sub-eye principal point position coordinates, focal length, tangential distortion parameters, radial distortion parameters and reprojection errors representing calibration accuracy, and the smaller the reprojection errors are, the higher the camera calibration accuracy is; the sub-eye principal point is the intersection point of the sub-eye optical axis and the sub-eye image plane after the compound eye camera unit is calibrated, and the position of the intersection point in the sub-eye image coordinate system is the position coordinate of the sub-eye principal point;
The external parameters of each sub-eye 112 comprise a rotation matrix and a translation matrix of each sub-eye 112 in the left compound eye camera unit 1 and the right compound eye camera unit 1 relative to the center sub-eye of the left compound eye camera unit 1 and the right compound eye camera unit 1;
In this embodiment, the compound eye camera unit 1 is calibrated with a checkerboard calibration plate; in other embodiments of the invention, a CALTag calibration plate may also be used, with the corresponding sub-eye 112 calibration information images processed by a self-recognition calibration method to obtain the internal and external parameters of each sub-eye 112 of the left and right compound eye camera units 1.
In the invention, steps 1 and 2 only need to be carried out before the multi-target ranging system is used for ranging for the first time.
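The re-projection error that step 2 uses as the calibration-accuracy indicator can be computed from the intrinsic matrix and the per-image extrinsics that Zhang's method returns. A minimal numpy sketch, with the simplifying assumption that lens distortion is omitted (the text's full model also calibrates radial and tangential distortion):

```python
import numpy as np

def reprojection_error(object_pts, image_pts, K, R, t):
    """Mean re-projection error (pixels) for one calibration image.

    object_pts: (N, 3) checkerboard corner coordinates in the world frame;
    image_pts:  (N, 2) detected corner positions in the sub-eye image;
    K: 3x3 intrinsic matrix; R, t: rotation matrix / translation vector."""
    cam = R @ np.asarray(object_pts, float).T + np.asarray(t, float).reshape(3, 1)
    proj = K @ cam                # project with the pinhole model
    uv = (proj[:2] / proj[2]).T   # perspective divide -> (N, 2) pixel coords
    return float(np.mean(np.linalg.norm(uv - np.asarray(image_pts, float), axis=1)))
```

The smaller this value over all calibration images, the better the calibration, matching the criterion stated above.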
Step 3, optical axis correction
3.1. Move the left compound eye camera unit 1 to the position where the scale value of the guide rail assembly 3 is zero, adjust the rotating member 22 of its support assembly so that any two feature points in the target scene appear at the central sub-eye of the left compound eye camera unit 1, then record the values of the rotating member and fix the left compound eye camera unit 1; these values can be used directly the next time the same target is ranged;
3.2. Control the right compound eye camera unit 1 to move in parallel along the guide rail assembly 3 and adjust its angle so that the two feature points of step 3.1 appear simultaneously at the central sub-eye of the right compound eye camera unit 1; record the scale value of the guide rail assembly 3 corresponding to the right compound eye camera unit 1 at this moment. The difference between the scale values of the guide rail assembly 3 corresponding to the left and right compound eye camera units 1 is the distance between the central sub-eye optical axes;
3.3. calculating the coordinate values of the central sub-eyes of the two target feature points in the right compound-eye camera unit 1 when the optical axes are parallel, taking the coordinate values as theoretical coordinate values, and then adjusting the rotating component 22 of the right compound-eye camera unit 1 to enable the actual coordinate values of the two target feature points to be consistent with the theoretical coordinate values, wherein the central sub-eye optical axes of the left compound-eye camera unit 1 and the right compound-eye camera unit 1 are considered to be corrected to be parallel;
3.4. The numerical value of the rotating component is recorded, the right compound eye camera unit 1 is fixed, the right compound eye camera unit 1 can be directly regulated through the numerical value when the distance measurement is carried out on the same target next time, so that the central sub-eye optical axes of the left compound eye camera unit 1 and the right compound eye camera unit 1 are parallel, and the repetition of the step 3.3 is avoided.
4. Feature point search matching
4.1. Compound eye image segmentation
Shoot a target scene with the fixed double compound eye imaging system to obtain a compound eye image, divide the compound eye image according to the calibration result of step 1 to obtain an image of each sub-eye 112, and carry out distortion correction on the sub-eye images according to the radial and tangential distortion parameters obtained in step 2, as shown in fig. 4;
4.2. Center sub-eye common feature point matching
Taking the distortion-corrected images of the central sub-eyes of the left and right compound eye camera units 1, i.e. the distortion-corrected images numbered L_1 and R_1, as the images to be matched; extracting and matching feature points of the two central sub-eye distortion-corrected images with the Scale-Invariant Feature Transform (SIFT) algorithm; removing incorrectly matched feature points with the Random Sample Consensus (RANSAC) algorithm; and recording the coordinate information of all common feature points in the central sub-eyes of the left and right compound eye camera units 1. The common feature points are the same feature points appearing in different sub-eye images; the coordinate information is the coordinates in the distortion-corrected image coordinate system of the sub-eye 112;
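The mismatch rejection of step 4.2 can be illustrated with a toy RANSAC loop. The translation-only motion model below is a simplification chosen for brevity (with parallel optical axes, correct center sub-eye matches share a nearly constant image shift); the actual pipeline runs SIFT matching first and may fit a richer RANSAC model.

```python
import numpy as np

def ransac_filter(pts_l, pts_r, tol=3.0, iters=200, rng=None):
    """Return a boolean inlier mask over candidate matches.

    pts_l, pts_r: (N, 2) matched feature coordinates in the left and right
    center sub-eye images. A candidate 2-D shift is repeatedly sampled from
    one correspondence; the shift with the largest consensus set wins, and
    matches deviating from it by more than tol pixels are rejected."""
    rng = np.random.default_rng(rng)
    pts_l = np.asarray(pts_l, float)
    pts_r = np.asarray(pts_r, float)
    d = pts_r - pts_l                       # per-match displacement
    best = np.zeros(len(d), dtype=bool)
    for _ in range(iters):
        shift = d[rng.integers(len(d))]     # hypothesis from one sample
        inliers = np.linalg.norm(d - shift, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Only the matches flagged by the returned mask would be recorded as common feature points.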
4.3. Characteristic point matching of the remaining sub-eyes 112 of the left compound eye camera unit 1
Taking the central sub-eye of the left compound eye camera unit 1 as a reference, perform feature point matching on sub-eyes L_2–L_18 of the left compound eye camera unit 1 using the SIFT and RANSAC algorithms, and acquire the coordinate information of the feature points in sub-eyes L_2–L_18 that match the central sub-eye feature points of the left compound eye camera unit 1;
in this embodiment, the targets are mainly located in the sub-eyes 112 numbered 1–18, so ranging is performed on the sub-eye images L_1–L_18 and R_1–R_18.
4.4. Feature point matching for the remaining sub-eyes 112 of the right compound eye camera unit 1
Taking the central sub-eye of the right compound eye camera unit 1 as a reference, perform feature point matching on sub-eyes R_2–R_18 of the right compound eye camera unit 1 using the SIFT and RANSAC algorithms, and acquire the coordinate information of the feature points in sub-eyes R_2–R_18 that match the central sub-eye feature points of the right compound eye camera unit 1;
4.5. Common feature point count
Taking the common feature points of the central sub-eyes of the left and right compound eye camera units 1 as references, search the remaining sub-eye images of the left and right compound eye camera units 1 and count, for each common feature point, the number of sub-eye images in which it appears in each unit; suppose a common feature point appears in m sub-eye images of the left compound eye camera unit 1 and n sub-eye images of the right compound eye camera unit 1;
4.6. full scene feature point screening
For each common feature point, m×n ranging sample points can be obtained: the larger m×n is, the more times that feature point is ranged and the higher the final ranging precision. A ranging sample point threshold is therefore set, and common feature points whose m×n is smaller than the threshold are removed to guarantee ranging precision;
In this embodiment, the ranging sample point threshold is 20, and common feature points whose m×n value is greater than or equal to 20 are taken as effective feature points for the subsequent operations;
4.7. and recording the sub-eye image numbers corresponding to the effective feature points and the coordinate information of the sub-eye image numbers in the sub-eye images.
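The screening of steps 4.5–4.6 reduces to a simple filter on the per-point sample counts; a sketch (the dictionary layout is an assumed bookkeeping choice, not prescribed by the text):

```python
def screen_feature_points(counts, threshold=20):
    """Keep only effective feature points.

    counts: {feature_id: (m, n)} — the number of left (m) and right (n)
    sub-eye images each common feature point appears in.
    A point survives if its m*n ranging-sample count reaches the
    threshold (20 in this embodiment)."""
    return {fid: (m, n) for fid, (m, n) in counts.items() if m * n >= threshold}
```

The surviving keys are the effective feature points whose sub-eye numbers and coordinates are recorded in step 4.7.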
5. Data processing
5.1. For one of the effective feature points of step 4.7, select one sub-eye 112 in each of the left and right compound eye camera units 1 to form a group of ranging sub-eyes. According to the coordinate information of the effective feature point in the sub-eye images and the position coordinates of the corresponding sub-eye principal points, calculate the distances X_L and X_R between the effective feature point and the sub-eye principal point in the sub-eye images of the left and right compound eye camera units 1 respectively; the difference X_L − X_R is taken as the left-right parallax of the effective feature point;
5.2. Convert the external parameters of the left and right compound eye camera units 1 to obtain the rotation and translation matrices of each sub-eye principal point of the right compound eye camera unit 1 relative to the central sub-eye principal point of the left compound eye camera unit 1, and combine them with the distance between the central sub-eye optical axes to calculate the baseline width b of the sub-eyes 112 selected in step 5.1. The baseline width of the sub-eyes 112 is the length of the line joining the optical centers of the ranging sub-eyes; in the invention, it is the distance between the principal points of the two sub-eyes after calibration of the compound eye camera units.
According to the principle of similar triangles, the distance Z from the effective feature point to the baseline of the double compound eye imaging system is calculated by the formula Z = f·b/(X_L − X_R), where f is the average focal length of the sub-eyes 112 selected in step 5.1 and X_L − X_R is the left-right parallax obtained in step 5.1;
5.3. Sequentially combining m sub-eye images of the left compound eye camera unit 1 and each sub-eye 112 corresponding to n sub-eye images of the right compound eye camera unit 1, repeating the steps 5.1-5.2, calculating the distance from the same effective feature point to the base line of the double compound eye imaging system, and obtaining m multiplied by n ranging sample points of the effective feature point, as shown in fig. 5;
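Steps 5.1–5.3 amount to evaluating the similar-triangles relation Z = f·b/(X_L − X_R) once per ranging sub-eye pair; a vectorised sketch, assuming f is expressed in pixels and b in the same length unit as the desired Z:

```python
import numpy as np

def range_samples(f, baselines, disparities):
    """Distance samples for one effective feature point.

    f: mean focal length of the selected sub-eyes (pixels);
    baselines: baseline width b of each left/right sub-eye pair;
    disparities: left-right parallax X_L - X_R of each pair (pixels).
    Returns one distance Z = f*b/(X_L - X_R) per pair; with m left and
    n right sub-eye images this yields the m*n ranging samples."""
    b = np.asarray(baselines, float)
    d = np.asarray(disparities, float)
    return f * b / d
```

These m×n samples are what the extremum-rejection and averaging steps below operate on.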
5.4. Taking the central sub-eye of the left compound eye camera unit 1 as a reference and setting the parallax variation range between the left and right sub-eyes 112 to ±40 pixels, draw a relation diagram of the distance from the effective feature points to the baseline of the double compound eye imaging system versus parallax, where the x and y axes are the parallax magnitudes and the z axis is the calculated distance;
in other embodiments of the invention, the parallax variation range may be ±(40±5) pixels; ±40 is preferred, as it yields a more accurate relation diagram;
5.5. Setting the ratio of minimum values to maximum values to be removed in the data processing process as K, determining a removal base B% according to the number of m multiplied by n of distance measurement sample points actually obtained each time, wherein the number of removed maximum values is m multiplied by n multiplied by B%, and the number of removed minimum values is m multiplied by n multiplied by (K multiplied by B%);
The minimum-to-maximum ratio and the rejection base are set according to the number of ranging sample points obtained, the actual distance of the target and the accuracy of the data; rejecting the extrema improves the accuracy of the ranging result, and the farther the target, the more minimum values need to be rejected;
In this embodiment, the rejection base is determined to be 10% and the minimum-to-maximum ratio K is 2.5, i.e. 10% of the maximum points and 25% of the minimum points are rejected, and the remaining ranging sample points are taken as reliable values;
5.6. Average the ranging sample points remaining after extremum rejection to obtain the ranging result of the effective feature point; record the time spent on ranging to monitor the real-time performance of the method;
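The extremum rejection and averaging of steps 5.5–5.6 can be sketched as follows; rounding the rejection counts down to integers is an assumption, as the text does not specify the rounding rule:

```python
import numpy as np

def trimmed_distance(samples, base_pct=10.0, k=2.5):
    """Ranging result after extremum rejection.

    samples: the m*n distance samples of one effective feature point;
    base_pct: rejection base B% (10% in this embodiment);
    k: minimum-to-maximum rejection ratio K (2.5 in this embodiment).
    Drops m*n*B% of the largest samples and m*n*(K*B%) of the smallest,
    then averages the remaining reliable values."""
    z = np.sort(np.asarray(samples, float))
    n = len(z)
    n_max = int(n * base_pct / 100.0)        # maxima to reject
    n_min = int(n * k * base_pct / 100.0)    # minima to reject
    kept = z[n_min:n - n_max]
    return float(kept.mean())
```

With 20 samples and the embodiment's parameters, the 2 largest and 5 smallest values are dropped and the remaining 13 averaged.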
5.7. Repeating the steps 5.1-5.6, and calculating the distance measurement result one by one for the effective feature points screened in the step 4.6 to realize the distance measurement of a plurality of targets in the same target scene;
5.8. And measuring actual distance values from different effective feature points to a base line of the double-compound eye imaging system by using a laser range finder, and calculating a range finding relative error.

Claims (9)

1. A multi-target distance measurement method based on double compound eye imaging is characterized by comprising the following steps:
step 1, optical axis correction
Selecting a double compound eye imaging system comprising a left compound eye camera unit and a right compound eye camera unit; adjusting the left and right compound eye camera units so that the optical axes of their central sub-eyes are parallel, and obtaining the distance between the central sub-eye optical axes of the left and right compound eye camera units;
Step 2, searching and matching the characteristic points
Dividing to obtain each sub-eye image, carrying out distortion correction on each sub-eye image, recording coordinate information of common characteristic points of central sub-eyes of the left compound-eye camera unit and the right compound-eye camera unit, screening sub-eye images containing the common characteristic points, and recording corresponding sub-eye image quantity, sub-eye numbers and coordinate information of the common characteristic points; setting a ranging sample point threshold value, and screening the common characteristic points to obtain effective characteristic points;
the coordinate information is a coordinate under a distortion correction image coordinate system; the common feature points are the same feature points appearing in different sub-eyes;
Step 3, data processing
3.1. selecting one sub-eye in each of the left and right compound eye camera units corresponding to the effective feature point of step 2, and, according to the coordinate information of the effective feature point in the sub-eye images and the position coordinates of the corresponding sub-eye principal points, respectively calculating the distances X_L and X_R between the effective feature point and the sub-eye principal point in the sub-eye images of the left and right compound eye camera units, the difference X_L − X_R being the left-right parallax of the effective feature point;
3.2. calculating the rotation matrix and translation matrix of each sub-eye principal point of the right compound eye camera unit relative to the central sub-eye principal point of the left compound eye camera unit, and calculating the baseline width b of the sub-eyes selected in step 3.1 by combining the distance between the central sub-eye optical axes; calculating the distance Z from the effective feature point to the line joining the centers of the central sub-eyes of the left and right compound eye camera units by the formula Z = f·b/(X_L − X_R);
wherein f is the average value of the focal length of the sub-eyes selected in the step 3.1;
3.3. Sequentially combining m sub-eye images of the left compound eye camera unit and each sub-eye in n sub-eye images of the right compound eye camera unit, repeating the steps 3.1-3.2, and calculating the distance from the same effective characteristic point to the central connecting line of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit to obtain m multiplied by n ranging sample points of the effective characteristic point;
3.4. setting the parallax variation range between the left and right sub-eyes to ±(40±5) pixels with the central sub-eye of the left compound eye camera unit as a reference, and drawing a relation diagram of the distance from the effective feature points to the line joining the centers of the central sub-eyes of the left and right compound eye camera units versus parallax;
3.5. Setting the ratio of minimum values to maximum values to be removed in the data processing process as K, determining a removal base B% according to the number of m multiplied by n of distance measurement sample points actually obtained each time, wherein the number of removed maximum values is m multiplied by n multiplied by B%, and the number of removed minimum values is m multiplied by n multiplied by (K multiplied by B%);
3.6. Obtaining an average value of the ranging sample points after the extremum is removed, and obtaining a ranging result of the effective characteristic points;
3.7. And 3.1-3.6, calculating a distance measurement result one by one for the effective characteristic points screened in the step 2, and finishing the distance measurement of a plurality of targets.
2. The multi-target ranging method based on double compound eye imaging according to claim 1, wherein the method further comprises the following steps:
Step 3, after obtaining the ranging result, further comprises: and calculating a ranging relative error, and recording the ranging time length.
3. The multi-target ranging method based on double compound eye imaging according to claim 1 or 2, wherein step 1 specifically comprises:
1.1. the position and the angle of the left compound eye camera unit are adjusted, so that any two feature points in a target scene appear at the center sub-eye of the left compound eye camera unit, and the left compound eye camera unit is fixed;
1.2. the position and the angle of the right compound eye camera unit are adjusted, so that the two characteristic points in the step 1.1 are simultaneously present at the center sub-eye of the right compound eye camera unit, and the distance of the center sub-eye optical axis is calculated according to the distances of the left compound eye camera unit and the right compound eye camera unit;
1.3. Calculating theoretical coordinate values of two target feature points in a central sub-eye of the right compound eye camera unit when the optical axes are parallel; and then adjusting the angle of the right compound eye camera unit to enable the actual coordinate values of the two target feature points in the central sub-eyes of the right compound eye camera unit to be consistent with the theoretical coordinate values, wherein the optical axes of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit are parallel.
4. The multi-target ranging method based on double compound eye imaging according to claim 3, wherein step 2 specifically comprises:
2.1. shooting a target scene by using a left compound eye camera unit and a right compound eye camera unit, dividing the shot image to obtain images of all sub eyes, and carrying out distortion correction on the sub eye images;
2.2. The method comprises the steps of taking distortion correction images of central sub-eyes of a left compound-eye camera unit and a right compound-eye camera unit as images to be matched, carrying out feature point matching on the distortion correction images of the two central sub-eyes, removing feature points which are incorrectly matched, and recording coordinate information of all common feature points in the distortion correction images of the central sub-eyes of the left compound-eye camera unit and the right compound-eye camera unit;
2.3. Respectively carrying out characteristic point matching on the other sub-eyes of the left compound eye camera unit by taking the central sub-eye of the left compound eye camera unit as a reference, and acquiring characteristic point coordinate information matched with the central sub-eye characteristic point of the left compound eye camera unit in the other sub-eyes;
2.4. respectively carrying out characteristic point matching on the other sub-eyes of the right compound eye camera unit by taking the central sub-eye of the right compound eye camera unit as a reference, and acquiring characteristic point coordinate information matched with the central sub-eye characteristic point of the right compound eye camera unit in the other sub-eyes;
2.5. Searching whether common characteristic points exist in the rest sub-eye images of the left compound eye camera unit and the right compound eye camera unit or not by taking the common characteristic points of the central sub-eyes of the left compound eye camera unit and the right compound eye camera unit as references, and respectively counting the number of the sub-eye images of each common characteristic point in the left compound eye camera unit and the right compound eye camera unit, wherein the common characteristic points appear in m sub-eye images of the left compound eye camera unit and n sub-eye images of the right compound eye camera unit;
2.6. feature point screening
Setting a ranging sample point threshold value, removing common characteristic points with m multiplied by n values smaller than the ranging sample point threshold value, and taking the rest common characteristic points as effective characteristic points;
2.7. And recording the serial numbers of the sub-eye images corresponding to the effective feature points and the coordinate information of the sub-eye images.
5. The multi-target ranging method based on double compound eye imaging according to claim 4, wherein,
The method also comprises the following steps of:
The center coordinates and the radius of each sub-eye in the left compound eye camera unit and the right compound eye camera unit are obtained through calibration; the method comprises the following steps: shooting a whiteboard by using a left compound eye camera unit and a right compound eye camera unit, acquiring an original whiteboard compound eye image, detecting a circular outline of each sub-eye image in the original whiteboard compound eye image, acquiring the circle center coordinates and the radius of each sub-eye image, and numbering the sub-eye images;
Obtaining internal parameters and external parameters of each sub-eye in the left compound-eye camera unit and the right compound-eye camera unit through calibration; the method comprises the following steps: and shooting the calibration plate at multiple angles by using the left compound eye camera unit and the right compound eye camera unit, obtaining a plurality of calibration information images with different angles by each sub-eye, and processing the calibration information images to obtain the internal parameters and the external parameters of each sub-eye in the left compound eye camera unit and the right compound eye camera unit.
6. The multi-target ranging method based on double compound eye imaging according to claim 5, wherein,
In the calibration step, the calibration plate is a checkerboard calibration plate, and the calibration information image is processed through a Zhang Zhengyou calibration method in a MATLAB calibration tool box;
Or the calibration plate is CALTag calibration plates, and the calibration information image is processed through a self-identification calibration method.
7. A multi-target ranging system based on double compound eye imaging for realizing the multi-target ranging method based on double compound eye imaging according to any one of claims 1 to 6, characterized in that:
comprises a motion supporting unit (2), two compound eye camera units (1) which are arranged on the motion supporting unit (2) in parallel and a data processing unit; the two compound eye camera units (1) can move relative to the movement supporting unit (2), and the output end of the compound eye camera units (1) is connected with the data processing unit;
the compound eye camera unit (1) comprises a compound eye lens array (11), an optical relay image transfer subsystem (12) and a large area array image sensor (13) which are sequentially arranged along incident light;
The compound eye lens array (11) comprises a supporting shell (111) and a plurality of sub-eyes (112) arranged on the supporting shell (111), wherein the field angle of the sub-eyes (112) is larger than the included angle of the optical axes of the adjacent sub-eyes (112);
the optical relay image transfer subsystem (12) is used for converting a primary curved surface image formed by the compound eye lens array (11) into a secondary plane image;
the large area array image sensor (13) is used for receiving the secondary plane image, and the output end of the large area array image sensor (13) is connected with the data processing unit;
the data processing unit is used for receiving the image shot by the compound eye camera unit (1) and processing the image to obtain a ranging result.
8. The multi-target ranging system based on double compound eye imaging of claim 7, wherein:
The motion supporting unit (2) comprises a guide rail assembly (3) and two groups of supporting assemblies; the two groups of support components are respectively connected with the corresponding compound eye camera units (1) to realize the movable support, the fixation and the optical axis adjustment of the compound eye camera units (1);
the supporting component comprises a clamping component (21), a rotating component (22) and a sliding component (23) which are connected in sequence;
The clamping component (21) comprises a first clamping block (211) and a second clamping block (212), the first clamping block (211) and the second clamping block (212) are oppositely arranged, gaps matched with the compound eye camera unit (1) are correspondingly formed in the butt joint surface, the compound eye camera unit (1) is arranged in the corresponding gaps, and the first clamping block (211) and the second clamping block (212) are fixedly connected through bolts (213);
The sliding part (23) comprises an I-shaped base (231) and a connecting piece (232) arranged above the I-shaped base (231), and the connecting piece (232) is fixedly connected with the rotating part (22);
the upper surface of the guide rail assembly (3) is provided with a groove matched with the I-shaped base (231), two groups of support assemblies are respectively connected in the groove through sliding parts (23) and used for realizing the movement of the two compound eye camera units (1), and scales are marked on the guide rail and used for recording the lengths of central connecting lines of central sub-eyes of the left compound eye camera unit (1) and the right compound eye camera unit (1);
The rotating component (22) is of a rotating mechanical structure with multiple degrees of freedom, and the upper end of the rotating component is connected with the second clamping block (212) of the clamping component (21).
9. The multi-target ranging system based on double compound eye imaging according to claim 7 or 8, wherein:
The compound eye lens arrays (11) are hexagonal, and long diagonal lines of the compound eye lens arrays (11) in the two compound eye camera units (1) are mutually perpendicular;
the angle of view of the sub-eyes (112) is 10-40 degrees, and the included angle of the optical axes of the adjacent sub-eyes (112) is 5-15 degrees.
CN202410561239.XA 2024-05-08 2024-05-08 Multi-target distance measurement method and system based on double compound eye imaging Active CN118129696B (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant