CN114095628A - Automatic focusing algorithm, automatic focusing visual device and control method thereof - Google Patents

Automatic focusing algorithm, automatic focusing visual device and control method thereof

Info

Publication number
CN114095628A
Authority
CN
China
Prior art keywords
convex lens
lens
sin
guide rail
focusing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111192907.9A
Other languages
Chinese (zh)
Other versions
CN114095628B (en)
Inventor
吴飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mengfei Automation Technology Co ltd
Original Assignee
Shanghai Mengfei Automation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mengfei Automation Technology Co ltd filed Critical Shanghai Mengfei Automation Technology Co ltd
Priority to CN202111192907.9A priority Critical patent/CN114095628B/en
Publication of CN114095628A publication Critical patent/CN114095628A/en
Application granted granted Critical
Publication of CN114095628B publication Critical patent/CN114095628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Abstract

The invention discloses an automatic focusing algorithm, an automatic focusing visual device and a control method thereof. The algorithm comprises steps A1-A8. The automatic focusing visual device comprises a support, a first lens, a second lens, a movable guide rail assembly, a fixed guide rail assembly and a voice coil motor; a sawtooth rack matched with the movable guide rail assembly and the fixed guide rail assembly is mounted on an output shaft of the voice coil motor, and the support is provided with a first mounting hole for mounting the first lens and a second mounting hole for mounting the second lens. The control method of the automatic focusing visual device comprises steps C1-C4.

Description

Automatic focusing algorithm, automatic focusing visual device and control method thereof
Technical Field
The invention relates to the field of automatic focusing visual devices, in particular to an automatic focusing algorithm, an automatic focusing visual device and a control method thereof.
Background
Existing automatic focusing cameras cannot automatically and accurately measure distance, cannot perform multiple automatic focusing on an observed object behind glass or on an underwater object photographed from the water surface, and cannot focus accurately; in addition, existing cameras and mobile phone cameras cannot realize three-dimensional recognition.
Disclosure of Invention
The technical problems to be solved by the invention are that the existing automatic focusing camera cannot automatically and accurately measure distance, cannot perform multiple automatic focusing on an observed object behind glass or on an underwater object photographed from the water surface, and cannot focus accurately, and that existing cameras or mobile phone cameras cannot realize three-dimensional recognition. To this end, the invention provides an automatic focusing algorithm, an automatic focusing visual device based on the automatic focusing algorithm, and a control method of the automatic focusing visual device. By confirming the positions of features and calculating a coincidence ratio, the focusing mode can perform multiple focusing: when observing a target on an object and a target behind the object, two sets of data with a high coincidence ratio for individual features can be detected in the focusing area, and the same applies to observing underwater targets together with water-surface targets, partially obstructed viewing conditions, targets on multilayer glass, and the like. This differs from conventional contrast focusing and phase focusing; the focusing speed is higher than that of contrast focusing and slightly lower than that of phase focusing. Three-dimensional recognition can also be realized: because the two convex lenses photograph the same target, the two images can be placed together, and the characteristics of the three-dimensional structure can be confirmed from the changes of the shadowed parts and of the lengths of lines. The position of maximum coincidence is the position of accurate focusing, at which the accurate distance can also be confirmed, thereby overcoming the defects of the prior art.
In order to solve the technical problems, the invention provides the following technical scheme:
in a first aspect, an auto-focusing algorithm includes the following steps:
step A1: taking the centers of the first photoelectric sensor and the second photoelectric sensor, which are completely identical, as origins respectively, and establishing two centrosymmetric rectangular coordinate systems, namely a rectangular coordinate system XY and a rectangular coordinate system XY';
step A2: in the rectangular coordinate system XY and the rectangular coordinate system XY', respectively establishing, with the origin as the center, two centrosymmetric grids of 2m0 × 2n0 cells, namely a first grid and a second grid, each cell in the first grid and the second grid being one characteristic unit;
step A3: in the first grid and the second grid, establishing, with the origin as the center, two centrosymmetric focusing regions of 2m × 2n cells, namely a first focusing region and a second focusing region, wherein y = L1 is the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor, and y = L2 is the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor;
step A4: determining all characteristic types of the first focus area by adopting an 8-bit binary system, and recording a group of characteristic data, wherein the characteristic data is a class of values with equal red, green and blue pixel values of each characteristic unit;
step A5: acquiring the coordinates of the end points of each piece of feature data of the first focusing region in the X-axis direction of the first grid and the second grid; the left end point coordinate in the first grid is recorded as (x1, y1) and the right end point as (x2, y2), and the left end point coordinate in the second grid is recorded as (x3, y3) and the right end point as (x4, y4), wherein x1 ≤ L1, x2 ≤ L1, x3 ≤ L2, x4 ≤ L2, ε2 - 2μn0 ≤ y3 ≤ 2μn0 + ε1, and ε2 - 2μn0 ≤ y4 ≤ 2μn0 + ε1; ε1 is the comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction over the whole stroke of the second convex lens corresponding to the second photoelectric sensor, and ε2 is the downward comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction over that whole stroke;
step A6: substituting x1, x2, x3 and x4 into formula 1 and formula 2 respectively to obtain α1, α2, θ1 and θ2;
formula 1: αn = arctg[(L*xn - f*xn*cot α)/(L*f)];
formula 2: θn = arctg[(L*x(n+2) - f*x(n+2)*cos α)/(L*f)];
wherein αn and θn are the angles whose tangents are the ratio of the end point abscissa to the current object distance, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens (the two focal lengths being identical), and α is the angle between the line connecting the optical centers of the first convex lens and the second convex lens and the axis of the second convex lens;
step A7: substituting α1, α2, θ1 and θ2 into formula 3 to obtain the focusing angle β;
formula 3:
β = arctg{[sin(α1+α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1+α2) + sin²(α1+α2)*sin²(α+θ2)*sin(α1+α-θ1) - sin(α1+α2)*sin(θ1+θ2)*sin(α+θ2)*cos α2] ÷ [sin(α1+α2)*sin(α+θ2)*cos(α1+α-θ1)*cos α1*cos(α+θ2-α2) + sin(α1+α2)*sin(α+θ2)*sin(α1+α-θ1)*sin α1*cos(α+θ2-α2) - sin(θ1+θ2)*sin α1*cos α2*cos(α+θ2-α2)]};
wherein β is the focusing angle of the feature;
step A8: and adjusting the second lens to move to the angle according to the focus angle beta.
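As a concrete illustration of steps A6 and A7, the following Python sketch evaluates formula 1, formula 2 and the reconstructed formula 3. It is only an illustrative transcription: the grouping of the fused subscripts in formula 3 (read here as sums such as sin(α1+α2) and cos(α+θ2-α2)) is an assumption, and the sample values of L, f, α and the end point abscissas are not taken from the patent.

import math

def alpha_n(x_n, L, f, alpha):
    # Formula 1: angle for an end point abscissa seen through the convex lens at A
    return math.atan((L * x_n - f * x_n / math.tan(alpha)) / (L * f))

def theta_n(x_n2, L, f, alpha):
    # Formula 2: angle for an end point abscissa seen through the convex lens at B
    return math.atan((L * x_n2 - f * x_n2 * math.cos(alpha)) / (L * f))

def focus_angle(a1, a2, t1, t2, alpha):
    # Formula 3 with the fused subscripts read as sums (an assumption)
    s, c = math.sin, math.cos
    num = (s(a1 + a2) * s(alpha + t2) ** 2 * c(a1 + alpha - t1) * c(a1 + a2)
           + s(a1 + a2) ** 2 * s(alpha + t2) ** 2 * s(a1 + alpha - t1)
           - s(a1 + a2) * s(t1 + t2) * s(alpha + t2) * c(a2))
    den = (s(a1 + a2) * s(alpha + t2) * c(a1 + alpha - t1) * c(a1) * c(alpha + t2 - a2)
           + s(a1 + a2) * s(alpha + t2) * s(a1 + alpha - t1) * s(a1) * c(alpha + t2 - a2)
           - s(t1 + t2) * s(a1) * c(a2) * c(alpha + t2 - a2))
    return math.atan(num / den)

# Illustrative values only (millimetres and radians), not taken from the patent.
L, f, alpha = 12.0, 3.0, math.radians(60)
x1, x2, x3, x4 = 0.8, 1.2, 0.7, 1.1
a1, a2 = alpha_n(x1, L, f, alpha), alpha_n(x2, L, f, alpha)
t1, t2 = theta_n(x3, L, f, alpha), theta_n(x4, L, f, alpha)
print(math.degrees(focus_angle(a1, a2, t1, t2, alpha)))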
In the above automatic focusing algorithm, in step A1, the X-axis labels of the rectangular coordinate system XY decrease from left to right, and the X-axis labels of the rectangular coordinate system XY' increase from left to right;
in step A2, each characteristic unit consists of 1 red pixel cell, 2 green pixel cells and 1 blue pixel cell forming a 2 × 2 Bayer array, and the length of each pixel cell in the X direction is denoted as μ;
L1 in step A3 is calculated by formula 4, and L2 is calculated by formula 5;
formula 4: L1 = (W*L²*f - W*L*f²*cos α)/(W*L²*sin α*cos α - W*L*f*sin α*cos²α + 2L²*f*sin²α - W*L*f*cos²α + W*f²*cos³α - 2L*f²*sin α*cos α);
formula 5: L2 = (W*L²*f*sin²α - W*L*f²*sin α*cos α)/(W*L²*sin α*cos α - W*L*f*cos²α + 2L²*f*sin²α - W*L*f*sin α*cos²α + W*f²*cos³α - 2L*f²*sin²α*cos α);
W is the width value of the first grid and the second grid in the X direction, f is the focal length of the first convex lens and the second convex lens, and the focal lengths of the first convex lens and the second convex lens are consistent.
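For completeness, formulas 4 and 5 can be evaluated directly once W, L, f and α are known. The short sketch below is a plain transcription of the two formulas as reconstructed above; the numerical values passed in are illustrative, not taken from the patent.

import math

def boundary_L1_L2(W, L, f, alpha):
    # Formulas 4 and 5: boundaries of the common field of view (as reconstructed above)
    s, c = math.sin(alpha), math.cos(alpha)
    den1 = (W * L**2 * s * c - W * L * f * s * c**2 + 2 * L**2 * f * s**2
            - W * L * f * c**2 + W * f**2 * c**3 - 2 * L * f**2 * s * c)
    den2 = (W * L**2 * s * c - W * L * f * c**2 + 2 * L**2 * f * s**2
            - W * L * f * s * c**2 + W * f**2 * c**3 - 2 * L * f**2 * s**2 * c)
    L1 = (W * L**2 * f - W * L * f**2 * c) / den1
    L2 = (W * L**2 * f * s**2 - W * L * f**2 * s * c) / den2
    return L1, L2

print(boundary_L1_L2(W=5.0, L=12.0, f=3.0, alpha=math.radians(60)))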
The above automatic focusing algorithm further comprises a step of verifying a coincidence ratio of a focusing result in a focusing area, which comprises the following specific steps:
step B1: selecting the feature data of any feature in the first focusing region and subtracting the feature data at the same position in the second focusing region to obtain feature data differences;
step B2: when the number of feature data differences that are approximately 0 reaches 60%-100% of the amount of feature data of the feature in the first focusing region, confirming that the focusing point has completed focusing on the feature;
step B3: when the number of feature data differences that are approximately 0 does not reach 60%-100% of the amount of feature data of the feature in the first focusing region, continuing to move the second convex lens step by step until the number of feature data differences that are 0 reaches 60%-100% of that amount, at which point focusing is completed.
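Steps B1-B3 amount to counting, position by position, how many feature data differences are close to zero and comparing that count against the chosen threshold. The sketch below illustrates the check, assuming the two focusing regions are available as equally sized NumPy arrays; the tolerance used for "approximately 0" and the helper names in the comments are illustrative assumptions.

import numpy as np

def coincidence_reached(first_region, second_region, tol=1, ratio=0.6):
    # Step B1: subtract the feature data at the same positions in the two focusing regions
    diff = first_region.astype(int) - second_region.astype(int)
    # Step B2: count differences that are approximately 0 and compare with the threshold
    near_zero = np.count_nonzero(np.abs(diff) <= tol)
    return near_zero >= ratio * first_region.size

# Step B3 (sketch): keep stepping the second convex lens until the check passes.
# capture_regions() and step_second_lens() are hypothetical device helpers.
# while not coincidence_reached(*capture_regions()):
#     step_second_lens()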
In a second aspect, an auto-focus vision device according to an auto-focus algorithm comprises a support, a first lens, a second lens, a moving guide rail assembly, a fixed guide rail assembly and a voice coil motor, wherein a sawtooth rack matched with the moving guide rail assembly and the fixed guide rail assembly is installed on an output shaft of the voice coil motor, and a first installation hole for installing the first lens and a second installation hole for installing the second lens are formed in the support;
the movable guide rail assembly comprises an installation shaft frame and a movable guide rail, the fixed guide rail assembly comprises a connecting shaft, a fixed guide rail and a fixed shaft, the installation shaft frame is U-shaped, two ends of the installation shaft frame are connected to the upper side and the lower side of the first lens respectively and then connected into the first installation hole of the support, one end of the first lens, which is far away from the first installation hole, is provided with a first groove for placing the movable guide rail, a first slide bar is installed in the first groove, the movable guide rail is provided with a first slide groove matched with the first slide bar, one end of the movable guide rail is connected to the installation shaft frame, and the other end of the movable guide rail is provided with a sawtooth groove matched with the sawtooth rack;
the second lens is mounted in the second mounting hole of the bracket through the connecting shaft, a second groove for placing the fixed guide rail is formed in one end, away from the second mounting hole, of the second lens, a second sliding rod is mounted in the second groove, a second sliding groove matched with the second sliding rod is formed in the fixed guide rail, two ends of the fixed shaft are respectively connected to the fixed guide rail and the bracket, and a sawtooth guide rail matched with the sawtooth rack is formed in one end of the second lens;
the voice coil motor is externally connected or internally provided with a controller which is connected with the voice coil motor in a control mode and stores the automatic focusing algorithm of the first aspect, and the controller is connected with the first lens and the second lens respectively in a wired or wireless mode to carry out data interaction and control.
In the above automatic focusing visual device according to the automatic focusing algorithm, the support comprises a bottom plate and a vertical plate, the first mounting hole and the second mounting hole are both formed in the vertical plate, and one end of the fixed shaft is connected to the bottom plate;
the sawtooth rack is an I-shaped sawtooth rack with two ends respectively connected with the sawtooth groove and the sawtooth guide rail in a matching way;
the sawtooth groove and the sawtooth guide rail are both arranged in an arc shape.
The automatic focusing visual device according to the automatic focusing algorithm is characterized in that a first convex lens and a first guide pillar are arranged in the first lens, the first slide bar is mounted on the first guide pillar, the first convex lens is mounted on the front end surface of the first lens, and a first photoelectric sensor is mounted at one end, facing the first convex lens, of the first guide pillar;
a second convex lens and a second guide pillar are arranged in the second lens, the second sliding rod is installed on the second guide pillar, the second convex lens is installed on the front end face of the second lens, and a second photoelectric sensor is installed at one end, facing the second convex lens, of the second guide pillar;
the first photoelectric sensor and the second photoelectric sensor are connected with the controller for data transmission.
The automatic focusing vision device based on the automatic focusing algorithm is characterized in that the first photoelectric sensor is provided with a first filter, and the second photoelectric sensor is provided with a second filter.
In a third aspect, a method for controlling an auto-focus vision apparatus includes the following steps:
step C1: the voice coil motor drives the sawtooth rack to drive the movable guide rail to move, a first sliding groove on the movable guide rail restrains the first sliding rod so that the first guide pillar drives the first photoelectric sensor to move, meanwhile, the voice coil motor drives the sawtooth rack to drive the sawtooth guide rail to move, and the second sliding rod drives a second photoelectric sensor on the second guide pillar to move under the restraint of the second sliding groove;
step C2: acquiring detection data of the first photoelectric sensor and the second photoelectric sensor in real time and transmitting the detection data to the controller;
step C3: the controller calculates the detection data to obtain control data containing voice coil motor driving data;
step C4: the controller controls the voice coil motor to focus through the control data.
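Steps C1-C4 form a closed control loop: the voice coil motor moves both photoelectric sensors through the sawtooth rack, the controller reads the two sensors, computes drive data with the automatic focusing algorithm, and commands the motor again. The outline below is only a structural sketch; read_photosensors(), compute_drive_data() and drive_voice_coil_motor() are hypothetical placeholders for device-specific interfaces that the patent does not specify.

def autofocus_loop(controller, max_iterations=50):
    # Structural sketch of steps C1-C4 using a hypothetical controller interface.
    for _ in range(max_iterations):
        # C2: acquire detection data of the first and second photoelectric sensors
        first_data, second_data = controller.read_photosensors()
        # C3: compute control data containing voice coil motor driving data
        drive_data, in_focus = controller.compute_drive_data(first_data, second_data)
        if in_focus:
            break
        # C1/C4: the motor drives the sawtooth rack, moving both guide rails and sensors
        controller.drive_voice_coil_motor(drive_data)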
The control method of the auto-focusing vision device is characterized in that the confirmation formula of the first chute shape on the moving guide rail is as follows:
X1 = [Z1 + L*f/(L - f*cot α)]*cos α;
Y1 = [Z1 + L*f/(L - f*cot α)]*sin α;
wherein A is the coordinate point of the optical center of the first convex lens in the plane coordinate system XY and is the center of the circle, Z1 is an arbitrary length value, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens, and α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X1 and Y1.
The confirmation formula of the second chute shape on the fixed rail is as follows:
X2 = [Z2 + L*f/(L - f*cos α)]*cos α;
Y2 = [Z2 + L*f/(L - f*cos α)]*sin α;
wherein B is the coordinate point of the optical center of the second convex lens in the plane coordinate system XY, Z2 is an arbitrary length value, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens, and α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X2 and Y2.
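Both confirmation formulas describe a polar-style curve in α: each chute point lies at a distance Z plus the corresponding image distance (L*f/(L - f*cot α) for the first lens, L*f/(L - f*cos α) for the second) from the respective optical center. The sketch below tabulates such profiles over 45° ≤ α ≤ 90°; the values of Z1, Z2, L and f are illustrative assumptions.

import math

def chute_points(Z, L, f, use_cot, steps=10):
    # Tabulate (X, Y) points of one chute for 45 deg <= alpha <= 90 deg.
    points = []
    for i in range(steps + 1):
        alpha = math.radians(45 + 45 * i / steps)
        trig = 1 / math.tan(alpha) if use_cot else math.cos(alpha)
        r = Z + L * f / (L - f * trig)   # Z plus the image distance at this alpha
        points.append((r * math.cos(alpha), r * math.sin(alpha)))
    return points

# First chute (X1, Y1) uses cot(alpha); second chute (X2, Y2) uses cos(alpha).
first_chute = chute_points(Z=2.0, L=12.0, f=3.0, use_cot=True)
second_chute = chute_points(Z=2.0, L=12.0, f=3.0, use_cot=False)
print(first_chute[0], second_chute[0])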
In a fourth aspect, a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which computer program, when executed by a processor, performs the steps of the method of any of the third aspects.
The technical scheme provided by the automatic focusing algorithm, the automatic focusing visual device and the control method thereof has the following technical effects:
the method comprises the steps that multiple focusing is carried out by a focusing mode of calculating a coincidence ratio by confirming the positions of features, 2 pieces of data with high coincidence ratio can be detected in a focusing area when a target on an object and a target behind the object are observed, similarly, underwater and water surface targets, partial peeping conditions, targets on multilayer glass and the like are observed, the method is different from the traditional contrast focusing and phase focusing, the focusing speed is higher than that of the contrast focusing and is inferior to that of the phase focusing; the three-dimensional recognition can be realized, because 2 convex lenses shoot the same target, the two images are put together, and the characteristics of the three-dimensional structure can be confirmed through the change of the shadow part and the change of the length of the lines; after the maximum contact ratio occurs, i.e. the position of the exact focus, the exact distance can also be confirmed.
Drawings
FIG. 1 is a schematic diagram of a rectangular coordinate system XY in an auto-focusing algorithm according to the present invention;
FIG. 2 is a schematic diagram of a rectangular coordinate system XY' in an auto-focusing algorithm according to the present invention;
FIG. 3 is a diagram of the geometric relationships used in the focusing calculation for a feature according to the present invention;
FIG. 4 is a schematic diagram of an auto-focus vision apparatus according to an auto-focus algorithm;
FIG. 5 is a schematic diagram of an internal structure of a first lens of an auto-focus vision apparatus according to an auto-focus algorithm of the present invention;
FIG. 6 is a schematic diagram of an internal structure of a second lens of an auto-focus vision apparatus according to an auto-focus algorithm of the present invention;
FIG. 7 is a flowchart illustrating a method for controlling an auto-focus vision apparatus according to the present invention;
FIG. 8 is a diagram of a position relationship of a sliding slot of an auto-focus vision device according to the present invention.
Wherein the reference numbers are as follows:
the lens driving device comprises a bracket 101, a first lens 102, a second lens 103, a mounting shaft bracket 104, a moving guide rail 105, a connecting shaft 106, a fixed guide rail 107, a fixed shaft 108, a voice coil motor 109, a sawtooth bracket 110, a first mounting hole 111, a second mounting hole 112, a first groove 113, a first sliding groove 114, a sawtooth groove 115, a second groove 116, a second sliding groove 117, a sawtooth guide rail 118, a first sliding rod 201, a first convex lens 202, a first guide post 203, a first photoelectric sensor 204, a second sliding rod 301, a second convex lens 302, a second guide post 303 and a second photoelectric sensor 304.
Detailed Description
In order to make the technical means, the inventive features, the objectives and the effects of the invention easily understood and appreciated, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the specific drawings, and it is obvious that the described embodiments are a part of the embodiments of the present invention, but not all of the embodiments.
All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be understood that the structures, ratios, sizes and the like shown in the drawings and described in the specification are only used in conjunction with the content disclosed in the specification so that it can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the invention can be implemented and have no essential technical significance. Any structural modification, change of ratio relationship or adjustment of size that does not affect the efficacy or the purpose achievable by the invention shall still fall within the scope covered by the technical content disclosed by the invention.
In addition, the terms "upper", "lower", "left", "right", "middle" and "one" used in this specification are for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relationships, without substantial change to the technical content, shall also be regarded as falling within the implementable scope of the invention.
The first embodiment of the invention provides an automatic focusing algorithm. Its purpose is to perform multiple focusing through a focusing mode that calculates a coincidence ratio by confirming the positions of features: when observing a target on an object and a target behind the object, two sets of data with a high coincidence ratio for individual features can be detected in the focusing area. Three-dimensional recognition can also be realized: because the two convex lenses photograph the same target, the two images can be placed together, and the characteristics of the three-dimensional structure can be confirmed from the changes of the shadowed parts and of the lengths of lines. The position of maximum coincidence is the position of accurate focusing, at which the accurate distance can also be confirmed.
As shown in fig. 1-2, in a first aspect, a first embodiment, an auto-focusing algorithm includes the following steps:
step A1: respectively taking the centers of the first photoelectric sensor 204 and the second photoelectric sensor 304 which are completely consistent as an origin, and establishing two centrosymmetric rectangular coordinate systems, namely a rectangular coordinate system XY and a rectangular coordinate system XY';
step A2: in the rectangular coordinate system XY and the rectangular coordinate system XY', respectively establishing, with the origin as the center, two centrosymmetric grids of 2m0 × 2n0 cells, namely a first grid and a second grid, each cell in the first grid and the second grid being one characteristic unit;
step A3: in the first grid and the second grid, establishing, with the origin as the center, two centrosymmetric focusing regions of 2m × 2n cells, namely a first focusing region and a second focusing region, wherein y = L1 is the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor 204, and y = L2 is the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor 304;
step A4: determining all characteristic types of the first focus area by adopting an 8-bit binary system, and recording a group of characteristic data, wherein the characteristic data is a class of values with equal red, green and blue pixel values of each characteristic unit;
step A5: acquiring the coordinates of the end points of each piece of feature data of the first focusing region in the X-axis direction of the first grid and the second grid; the left end point coordinate in the first grid is recorded as (x1, y1) and the right end point as (x2, y2), and the left end point coordinate in the second grid is recorded as (x3, y3) and the right end point as (x4, y4), wherein x1 ≤ L1, x2 ≤ L1, x3 ≤ L2, x4 ≤ L2, ε2 - 2μn0 ≤ y3 ≤ 2μn0 + ε1, and ε2 - 2μn0 ≤ y4 ≤ 2μn0 + ε1; ε1 is the comprehensive error of the second photoelectric sensor 304 relative to the first photoelectric sensor 204 in the Y-axis direction over the whole stroke of the second convex lens 302, and ε2 is the downward comprehensive error of the second photoelectric sensor 304 relative to the first photoelectric sensor 204 in the Y-axis direction over that whole stroke;
step A6: substituting x1, x2, x3 and x4 into formula 1 and formula 2 respectively to obtain α1, α2, θ1 and θ2;
formula 1: αn = arctg[(L*xn - f*xn*cot α)/(L*f)];
formula 2: θn = arctg[(L*x(n+2) - f*x(n+2)*cos α)/(L*f)];
wherein αn and θn are the angles whose tangents are the ratio of the end point abscissa to the current object distance, L is the distance between the optical center of the first convex lens 202 and the optical center of the second convex lens 302, f is the focal length of the first convex lens 202 and of the second convex lens 302 (the two focal lengths being identical), and α is the angle between the line connecting the optical centers of the first convex lens 202 and the second convex lens 302 and the axis of the second convex lens 302;
step A7: substituting α1, α2, θ1 and θ2 into formula 3 to obtain the focusing angle β;
formula 3:
β = arctg{[sin(α1+α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1+α2) + sin²(α1+α2)*sin²(α+θ2)*sin(α1+α-θ1) - sin(α1+α2)*sin(θ1+θ2)*sin(α+θ2)*cos α2] ÷ [sin(α1+α2)*sin(α+θ2)*cos(α1+α-θ1)*cos α1*cos(α+θ2-α2) + sin(α1+α2)*sin(α+θ2)*sin(α1+α-θ1)*sin α1*cos(α+θ2-α2) - sin(θ1+θ2)*sin α1*cos α2*cos(α+θ2-α2)]};
wherein β is the focusing angle of the feature;
step A8: the second lens 103 is adjusted to move to the angle according to the focus angle β.
In the above automatic focusing algorithm, in step A1, the X-axis labels of the rectangular coordinate system XY decrease from left to right, and the X-axis labels of the rectangular coordinate system XY' increase from left to right;
in step A2, each characteristic unit consists of 1 red pixel cell, 2 green pixel cells and 1 blue pixel cell forming a 2 × 2 Bayer array, and the length of each pixel cell in the X direction is denoted as μ;
L1 in step A3 is calculated by formula 4, and L2 is calculated by formula 5;
formula 4: L1 = (W*L²*f - W*L*f²*cos α)/(W*L²*sin α*cos α - W*L*f*sin α*cos²α + 2L²*f*sin²α - W*L*f*cos²α + W*f²*cos³α - 2L*f²*sin α*cos α);
formula 5: L2 = (W*L²*f*sin²α - W*L*f²*sin α*cos α)/(W*L²*sin α*cos α - W*L*f*cos²α + 2L²*f*sin²α - W*L*f*sin α*cos²α + W*f²*cos³α - 2L*f²*sin²α*cos α);
W is the width of the first grid and the second grid in the X direction, f is the focal length of the first convex lens 202 and the second convex lens 302, and the focal lengths of the first convex lens 202 and the second convex lens 302 are the same.
The above automatic focusing algorithm further comprises a step of verifying a coincidence ratio of a focusing result in a focusing area, which comprises the following specific steps:
step B1: selecting the feature data of any feature in the first focusing region and subtracting the feature data at the same position in the second focusing region to obtain feature data differences;
step B2: when the number of feature data differences that are approximately 0 reaches 60%-100% of the amount of feature data of the feature in the first focusing region, confirming that the focusing point has completed focusing on the feature;
step B3: when the number of feature data differences that are approximately 0 does not reach 60%-100% of the amount of feature data of the feature in the first focusing region, continuing to move the second convex lens 302 step by step until the number of feature data differences that are 0 reaches 60%-100% of that amount, at which point focusing is completed.
As shown in FIG. 3, A is the optical center of the first convex lens 202 and B is the optical center of the second convex lens 302; the axes of the two convex lenses intersect at point C; at the ends are two photoelectric sensors of the same size whose centers lie on, and which are perpendicular to, the axes of the convex lenses. K1K2 and J1J2 are the light-receiving lines of the photoelectric sensors, at an angle α, and point C is the midpoint of K1K2 and of J1J2. H1 and H3 are the projections of the two end points of one piece of characteristic data onto the received light, H2 is the focusing point of the characteristic data to be found by calculation, and β is the focusing angle to be found. BJ3 and AK3 are the boundary lines between the defined coincident field of view and the non-coincident field of view; L1 and L2 are their intersections with the photoelectric sensors, and the corresponding angles are θ3 and α3. α4 and θ4 are the maximum acceptance angles; α1 and α2 are the acceptance angles of H1 and H3 at the convex lens at point A, and θ1 and θ2 are the acceptance angles of H1 and H3 at the convex lens at point B. The falling points of H1 and H3 through the convex lens at point A are X1 and X2, and the falling points of H1 and H3 through the convex lens at point B are X3 and X4.
The values of α1, α2, θ1, θ2, α3 and θ3 are determined as follows:
the lateral length of the photoelectric sensor is W, V1 is the image distance of the convex lens at point A, V2 is the image distance of the convex lens at point B, the focal length is f, and AB = L;
V1=L*f/(L-f*cotα);
V2=L*f/(L-f*cosα);
CA=L*tgα,CB=L/cosα;
tgα4=0.5W/V1=CK1/(L*tgα) (1);
tgθ4=0.5W/V2=CJ2/(L/cosα) (2);
∠J1CK1=∠J5CK5=90°-α;
∠CK1A=90°-α4,∠CJ1B=90°-θ4
∠K1J3C=α+α4,∠J2K3C=α+θ4
In ΔK1J3C and ΔJ2K3C, by the law of sines:
CJ3/sin(90° - α4) = CK1/sin(α4 + α)    (3);
CK3/sin(90° - θ4) = CJ2/sin(θ4 + α)    (4);
CJ3/CB = L2/V2    (5);
CK3/CA = L1/V1    (6);
CK1 and CJ2 are obtained from (1) and (2) and substituted into (3), (4), (5) and (6), giving:
L1 = (W*L²*f - W*L*f²*cos α)/(W*L²*sin α*cos α - W*L*f*sin α*cos²α + 2L²*f*sin²α - W*L*f*cos²α + W*f²*cos³α - 2L*f²*sin α*cos α);
L2 = (W*L²*f*sin²α - W*L*f²*sin α*cos α)/(W*L²*sin α*cos α - W*L*f*cos²α + 2L²*f*sin²α - W*L*f*sin α*cos²α + W*f²*cos³α - 2L*f²*sin²α*cos α);
α1 = arctg(CX1/CA) = arctg[CX1/(L*tg α)];
α2 = arctg(CX2/CA) = arctg[CX2/(L*tg α)];
θ1 = arctg(CX3/CB) = arctg(CX3*cos α/L);
θ2 = arctg(CX4/CB) = arctg(CX4*cos α/L);
the value of β is determined as follows:
In ΔABH3, ∠ABH3 = α + θ2 and ∠H3AB = 90° - α2;
then ∠AH3B = 180° - 90° + α2 - α - θ2 = 90° + α2 - α - θ2;
by the law of sines, AB/sin(90° + α2 - α - θ2) = AH3/sin(α + θ2); since AB = L,
AH3 = L*sin(α + θ2)/sin(90° + α2 - α - θ2)    (7);
BH3 = L*sin(90° - α2)/sin(90° + α2 - α - θ2)    (8);
In ΔH1H3A, by the law of sines:
AH1/sin∠H1H3A = AH3/sin∠H3H1A = H1H3/sin(α1 + α2)    (9);
In ΔBH1H3, by the law of sines:
H1H3/sin(θ1 + θ2) = BH3/sin∠BH1H3    (10);
In ΔH1H2A, by the law of sines:
H1A/sin(∠H1H3A + α2) = AH2/sin∠H2H1A    (11);
∠H2H1A = ∠BH1H3 + ∠AH1B = ∠BH1H3 + 180° - 90° - α1 - α + θ1;
hence ∠BH1H3 = ∠H2H1A - 90° + α1 + α - θ1    (12);
In ΔH1H2A, ∠H1H3A = 180° - ∠H2H1A - α1 - α2    (13);
Dividing (11) by (9) gives: sin∠H1H3A/sin(∠H1H3A + α2) = AH2/AH3    (14);
Combining (9) and (10) gives:
sin(α1 + α2)/sin(θ1 + θ2) = (BH3/AH3)*(sin∠H3H1A/sin∠BH1H3)    (15);
Substituting (7), (8), (12) and (13) into (14) and (15) and solving the resulting system of equations yields: β = arctg{[sin(α1+α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1+α2) + sin²(α1+α2)*sin²(α+θ2)*sin(α1+α-θ1) - sin(α1+α2)*sin(θ1+θ2)*sin(α+θ2)*cos α2] ÷ [sin(α1+α2)*sin(α+θ2)*cos(α1+α-θ1)*cos α1*cos(α+θ2-α2) + sin(α1+α2)*sin(α+θ2)*sin(α1+α-θ1)*sin α1*cos(α+θ2-α2) - sin(θ1+θ2)*sin α1*cos α2*cos(α+θ2-α2)]}.
As shown in FIG. 4, in a second aspect and a second embodiment, an automatic focusing visual device comprises a bracket 101, a first lens 102, a second lens 103, a moving guide rail 105 assembly, a fixed guide rail 107 assembly and a voice coil motor 109. The driving program of the voice coil motor 109 may be that of the voice coil motor of a conventional mobile phone; the higher its resolution within the narrow current range, the better. A sawtooth rack 110 matching the moving guide rail 105 assembly and the fixed guide rail 107 assembly is mounted on the output shaft of the voice coil motor 109, and the bracket 101 is provided with a first mounting hole 111 for mounting the first lens 102 and a second mounting hole 112 for mounting the second lens 103;
as shown in fig. 5, the movable guide 105 assembly includes a mounting shaft bracket 104 and a movable guide 105, the fixed guide 107 assembly includes a connecting shaft 106, a fixed guide 107 and a fixed shaft 108, the mounting shaft bracket 104 is U-shaped, and two ends of the mounting shaft bracket 104 are respectively connected to the upper side and the lower side of the first lens 102 and then connected to the first mounting hole 111 of the bracket 101, one end of the first lens 102 away from the first mounting hole 111 is provided with a first groove 113 for placing the movable guide 105, a first sliding rod 201 is installed in the first groove 113, the movable guide 105 is provided with a first sliding slot 114 matching the first sliding rod 201, one end of the movable guide 105 is connected to the mounting shaft bracket 104, and the other end of the movable guide 105 is provided with a serrated slot 115 matching the serrated rack 110;
as shown in fig. 6, the second lens 103 is installed in the second installation hole 112 of the bracket 101 through the connection shaft 106, one end of the second lens 103, which is away from the second installation hole 112, is provided with a second groove 116 for placing the fixed guide rail 107, a second slide bar 301 is installed in the second groove 116, the fixed guide rail 107 is provided with a second sliding slot 117 matched with the second slide bar 301, two ends of the fixed shaft 108 are respectively connected to the fixed guide rail 107 and the bracket 101, one end of the second lens 103 is provided with a sawtooth guide rail 118 matched and connected with the sawtooth rack 110, a gear of the sawtooth groove 115 and a gear of the sawtooth guide rail 118 have the same shape and size, a gear circle center of the sawtooth groove 115 is an optical center of the first convex lens 202, and a gear circle center of the sawtooth guide rail 118 is an optical center of the second convex lens 302;
the voice coil motor 109 is externally connected or internally provided with a controller which is connected with the voice coil motor 109 in a control mode and stores the automatic focusing algorithm of the first aspect, and the controller is respectively connected with the first lens 102 and the second lens 103 in a wired or wireless mode to carry out data interaction and control.
In the above automatic focusing visual device, the bracket 101 includes a bottom plate and a vertical plate, the first mounting hole 111 and the second mounting hole 112 are both disposed on the vertical plate, and one end of the fixing shaft 108 is connected to the bottom plate;
the sawtooth rack 110 is an I-shaped sawtooth rack 110, two ends of which are respectively connected with a sawtooth groove 115 and a sawtooth guide rail 118 in a matching way;
the sawtooth groove 115 and the sawtooth guide rail 118 are both arc-shaped.
As shown in fig. 4-5, in the above-mentioned automatic focusing visual device, a first convex lens 202 and a first guide pillar 203 are disposed in the first lens 102, the first sliding rod 201 is mounted on the first guide pillar 203, the first convex lens 202 is mounted on the front end surface of the first lens 102, and a first photosensor 204 is mounted at one end of the first guide pillar 203 facing the first convex lens 202;
a second convex lens 302 and a second guide post 303 are arranged in the second lens 103, the second slide bar 301 is mounted on the second guide post 303, the second convex lens 302 is mounted on the front end surface of the second lens 103, and a second photoelectric sensor 304 is mounted at one end of the second guide post 303 facing the second convex lens 302;
the first photosensor 204 and the second photosensor 304 are connected to the controller for data transmission.
In the above automatic focusing visual device, the first filter is mounted on the first photoelectric sensor 204, and the second filter is mounted on the second photoelectric sensor 304.
As shown in fig. 7, a third aspect, a third embodiment, a method for controlling an auto-focus vision apparatus, includes the following steps:
step C1: the voice coil motor 109 drives the sawtooth rack 110 to drive the moving guide rail 105 to move, the first sliding chute 114 on the moving guide rail 105 restrains the first sliding rod 201 so that the first guide post 203 drives the first photoelectric sensor 204 to move, meanwhile, the voice coil motor 109 drives the sawtooth rack 110 to drive the sawtooth guide rail 118 to move, and the second sliding rod 301 drives the second photoelectric sensor 304 on the second guide post 303 to move under the restraint of the second sliding chute 117;
step C2: acquiring detection data of the first photoelectric sensor 204 and the second photoelectric sensor 304 in real time and transmitting the detection data to the controller;
step C3: the controller calculates the detection data to obtain control data containing the driving data of the voice coil motor 109;
step C4: the controller controls the voice coil motor 109 to perform focusing through control data.
In the control method of the automatic focusing visual device, the shape of the first sliding slot 114 on the moving guide 105 is determined according to the following formula:
X1 = [Z1 + L*f/(L - f*cot α)]*cos α;
Y1 = [Z1 + L*f/(L - f*cot α)]*sin α;
wherein A is the coordinate point of the optical center of the first convex lens 202 in the plane coordinate system XY and is the center of the circle, Z1 is an arbitrary length value, L is the distance between the optical center of the first convex lens 202 and the optical center of the second convex lens 302, f is the focal length of the first convex lens 202 and of the second convex lens 302, and α is the angle between the axis of the second convex lens 302 and the line connecting the first convex lens 202 and the second convex lens 302, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X1 and Y1.
The confirmation formula of the shape of the second chute 117 on the fixed rail 107 is as follows:
X2 = [Z2 + L*f/(L - f*cos α)]*cos α;
Y2 = [Z2 + L*f/(L - f*cos α)]*sin α;
wherein B is the coordinate point of the optical center of the second convex lens 302 in the plane coordinate system XY, Z2 is an arbitrary length value, L is the distance between the optical center of the first convex lens 202 and the optical center of the second convex lens 302, f is the focal length of the first convex lens 202 and of the second convex lens 302, and α is the angle between the axis of the second convex lens 302 and the line connecting the first convex lens 202 and the second convex lens 302, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X2 and Y2;
wherein L*f/(L - f*cot α) is the image distance VA of the first convex lens 202 as a function of the angle α, and
L*f/(L - f*cos α) is the image distance VB of the second convex lens 302 as a function of the angle α;
as shown in FIG. 8, in the XY coordinate system, point A (the optical center of the first convex lens 202) is the origin of the coordinate system and point B (the optical center of the second convex lens 302) is a point on the X axis; the axis of the first convex lens 202 at point A intersects the axis of the second convex lens 302 at point B at point C; the Y axis is the axis of the first convex lens 202, AC is a segment on the Y axis, AB is the distance from the optical center of the first convex lens 202 to the optical center of the second convex lens 302, denoted L, and BC lies on the axis of the convex lens at point B; the angle between AB and the axis of the second convex lens 302 at point B is α; with the object distance U, the image distance V and the focal lengths of the first convex lens 202 and the second convex lens 302 both equal to f, it is known that 1/U + 1/V = 1/f;
taking AC as the object distance of the convex lens at point A and BC as the object distance of the convex lens at point B, VA = L*f/(L - f*cot α) and VB = L*f/(L - f*cos α) are obtained;
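These two closed forms follow directly from the thin-lens equation with U = L*tg α for the lens at A and U = L/cos α for the lens at B. A quick numerical check is sketched below; the values of L, f and α are illustrative only.

import math

L, f, alpha = 12.0, 3.0, math.radians(60)   # illustrative values, not from the patent

# Thin-lens equation 1/U + 1/V = 1/f, i.e. V = U*f/(U - f)
U_A = L * math.tan(alpha)        # object distance AC for the convex lens at point A
U_B = L / math.cos(alpha)        # object distance BC for the convex lens at point B
V_A = U_A * f / (U_A - f)
V_B = U_B * f / (U_B - f)

# Closed forms quoted in the text
assert math.isclose(V_A, L * f / (L - f / math.tan(alpha)))
assert math.isclose(V_B, L * f / (L - f * math.cos(alpha)))
print(V_A, V_B)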
initial angle alpha0The value requirement of (A):
the trajectory of the photosensor holder 101 at point a, i.e. in the cartesian coordinate system XY, is a function of the equation X2+Y2=【Z1+L*f/L-f*cotα】2The matching of the slope of the curve with the extremely high acceleration of the voice coil motor 109 results in a very high resistance which seriously affects the cooperative operation of the system, wherein this drawback is ameliorated from two aspects:
1. when L is much larger than f, α ═ arccotL/f will approach α ═ 0 ° infinitely, curve VAThe more gradual the slope change of the rear section of the (L f cot alpha), the length of L is as large as f as required;
2. when alpha is0When the value of (A) is 45 degrees, the centrifugal increment is about 1mm when the centrifugal compressor operates in the range of alpha between 45 degrees and 90 degrees; when alpha is0When the value of (a) is 60 degrees, and the centrifugal separator operates in the range of alpha between 60 degrees and 90 degrees, the centrifugal separator is used for centrifugingThe increment is about 0.6 mm; to reduce eccentricity, α0The value of (A) can be very high, and is ideally greater than or equal to 60 degrees; when alpha is0When the value of (a) is large, the point a is taken as a reference object, which means that the distance L x th alpha cannot realize focusing, but the distance is small, so that the use is not influenced; on the other hand, when α0When the value of (A) is large, the extension part of the gear at the point A is small even if alpha is large0When the angle is 45 degrees, the number of the extending parts is the largest, the size of the photoelectric sensor is generally about 5mm by 4mm, the actual focal length of the camera for the mobile phone is about 3mm, the view field of the convex lens at the point A cannot be blocked completely, and the convex lens at the point B cannot block the view field because the convex lens rotates along with the gear.
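The "centrifugal increment" mentioned in point 2 is simply the change of the image distance VA = L*f/(L - f*cot α) between α0 and 90°. The sketch below evaluates it for assumed values L = 12 mm and f = 3 mm (the patent does not state L); with these assumptions it gives about 1 mm for α0 = 45° and about 0.5 mm for α0 = 60°, the same order as the figures quoted above.

import math

def radial_increment(alpha0_deg, L=12.0, f=3.0):
    # Change of the image distance V_A between alpha0 and 90 degrees (assumed L, f in mm).
    def V_A(alpha_deg):
        a = math.radians(alpha_deg)
        return L * f / (L - f / math.tan(a))
    return V_A(alpha0_deg) - V_A(90)

print(radial_increment(45))   # about 1.0 mm with the assumed L and f
print(radial_increment(60))   # about 0.5 mm with the assumed L and f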
In a fourth aspect, a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which computer program, when executed by a processor, performs the steps of the method of any of the third aspects.
In summary, the automatic focusing algorithm, the automatic focusing visual device and the control method thereof of the present invention perform multiple focusing through a focusing mode that calculates a coincidence ratio by confirming the positions of features: when observing a target on an object and a target behind the object, two sets of data with a high coincidence ratio for individual features can be detected in the focusing area, and the same applies to observing underwater targets together with water-surface targets, partially obstructed viewing conditions, targets on multilayer glass, and the like. This differs from conventional contrast focusing and phase focusing; the focusing speed is higher than that of contrast focusing and slightly lower than that of phase focusing. Three-dimensional recognition can be realized: because the two convex lenses photograph the same target, the two images can be placed together, and the characteristics of the three-dimensional structure can be confirmed from the changes of the shadowed parts and of the lengths of lines. The position of maximum coincidence is the position of accurate focusing, at which the accurate distance can also be confirmed.
Specific embodiments of the invention have been described above. It should be understood that the invention is not limited to the particular embodiments described above; devices and structures not described in detail should be understood as being implemented in a manner common in the art. Those skilled in the art may make various changes or modifications within the scope of the claims without departing from the substance of the invention, and such changes or modifications do not affect the essence of the invention.

Claims (10)

1. An auto-focus algorithm, comprising the steps of:
step A1: respectively taking the centers of two completely consistent first photoelectric sensors and second photoelectric sensors as origin points, and establishing two centrosymmetric rectangular coordinate systems which are a rectangular coordinate system XY and a rectangular coordinate system XY';
step A2: in the rectangular coordinate system XY and the rectangular coordinate system XY', respectively establishing, with the origin as the center, two centrosymmetric grids of 2m0 × 2n0 cells, namely a first grid and a second grid, each cell in the first grid and the second grid being one characteristic unit;
step A3: in the first grid and the second grid, establishing, with the origin as the center, two centrosymmetric focusing regions of 2m × 2n cells, namely a first focusing region and a second focusing region, wherein y = L1 is the boundary line between the common field of view and the non-common field of view of the first photoelectric sensor, and y = L2 is the boundary line between the common field of view and the non-common field of view of the second photoelectric sensor;
step A4: determining all characteristic types of the first focusing area by adopting an 8-bit binary system, and recording a group of characteristic data;
step A5: acquiring the coordinates of the end points of each piece of feature data of the first focusing region in the X-axis direction of the first grid and the second grid; the left end point coordinate in the first grid is recorded as (x1, y1) and the right end point as (x2, y2), and the left end point coordinate in the second grid is recorded as (x3, y3) and the right end point as (x4, y4), wherein x1 ≤ L1, x2 ≤ L1, x3 ≤ L2, x4 ≤ L2, ε2 - 2μn0 ≤ y3 ≤ 2μn0 + ε1, and ε2 - 2μn0 ≤ y4 ≤ 2μn0 + ε1; ε1 is the comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction over the whole stroke of the second convex lens corresponding to the second photoelectric sensor, and ε2 is the downward comprehensive error of the second photoelectric sensor relative to the first photoelectric sensor in the Y-axis direction over that whole stroke;
step A6: substituting x1, x2, x3 and x4 into formula 1 and formula 2 respectively to obtain α1, α2, θ1 and θ2;
formula 1: αn = arctg[(L*xn - f*xn*cot α)/(L*f)];
formula 2: θn = arctg[(L*x(n+2) - f*x(n+2)*cos α)/(L*f)];
wherein αn and θn are the angles whose tangents are the ratio of the end point abscissa to the current object distance, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens (the two focal lengths being identical), and α is the angle between the line connecting the optical centers of the first convex lens and the second convex lens and the axis of the second convex lens;
step A7: substituting α1, α2, θ1 and θ2 into formula 3 to obtain the focusing angle β;
formula 3:
β = arctg{[sin(α1+α2)*sin²(α+θ2)*cos(α1+α-θ1)*cos(α1+α2) + sin²(α1+α2)*sin²(α+θ2)*sin(α1+α-θ1) - sin(α1+α2)*sin(θ1+θ2)*sin(α+θ2)*cos α2] ÷ [sin(α1+α2)*sin(α+θ2)*cos(α1+α-θ1)*cos α1*cos(α+θ2-α2) + sin(α1+α2)*sin(α+θ2)*sin(α1+α-θ1)*sin α1*cos(α+θ2-α2) - sin(θ1+θ2)*sin α1*cos α2*cos(α+θ2-α2)]};
wherein β is the focusing angle of the feature;
step A8: and adjusting the second lens to move to the angle according to the focus angle beta.
2. The automatic focusing algorithm according to claim 1, wherein in step A1, the X-axis labels of the rectangular coordinate system XY decrease from left to right, and the X-axis labels of the rectangular coordinate system XY' increase from left to right;
in step A2, each characteristic unit consists of 1 red pixel cell, 2 green pixel cells and 1 blue pixel cell forming a 2 × 2 Bayer array, and the length of each pixel cell in the X direction is denoted as μ;
L1 in step A3 is calculated by formula 4, and L2 is calculated by formula 5;
formula 4: L1 = (W*L²*f - W*L*f²*cos α)/(W*L²*sin α*cos α - W*L*f*sin α*cos²α + 2L²*f*sin²α - W*L*f*cos²α + W*f²*cos³α - 2L*f²*sin α*cos α);
formula 5: L2 = (W*L²*f*sin²α - W*L*f²*sin α*cos α)/(W*L²*sin α*cos α - W*L*f*cos²α + 2L²*f*sin²α - W*L*f*sin α*cos²α + W*f²*cos³α - 2L*f²*sin²α*cos α);
W is the width value of the first grid and the second grid in the X direction, f is the focal length of the first convex lens and the second convex lens, and the focal lengths of the first convex lens and the second convex lens are consistent.
3. The automatic focusing algorithm according to claim 1 or 2, further comprising a coincidence ratio check of the focusing result in the focusing area, the specific steps are as follows:
step B1: selecting the feature data of any feature in the first focusing region and subtracting the feature data at the same position in the second focusing region to obtain feature data differences;
step B2: when the number of feature data differences that are approximately 0 reaches 60%-100% of the amount of feature data of the feature in the first focusing region, confirming that the focusing point has completed focusing on the feature;
step B3: when the number of feature data differences that are approximately 0 does not reach 60%-100% of the amount of feature data of the feature in the first focusing region, continuing to move the second convex lens step by step until the number of feature data differences that are 0 reaches 60%-100% of that amount, at which point focusing is completed.
4. An automatic focusing visual device according to an automatic focusing algorithm is characterized by comprising a support, a first lens, a second lens, a movable guide rail assembly, a fixed guide rail assembly and a voice coil motor, wherein a sawtooth rack matched with the movable guide rail assembly and the fixed guide rail assembly is arranged on an output shaft of the voice coil motor, and a first mounting hole for mounting the first lens and a second mounting hole for mounting the second lens are formed in the support;
the movable guide rail assembly comprises an installation shaft frame and a movable guide rail, the fixed guide rail assembly comprises a connecting shaft, a fixed guide rail and a fixed shaft, the installation shaft frame is U-shaped, two ends of the installation shaft frame are connected to the upper side and the lower side of the first lens respectively and then connected into the first installation hole of the support, one end of the first lens, which is far away from the first installation hole, is provided with a first groove for placing the movable guide rail, a first slide bar is installed in the first groove, the movable guide rail is provided with a first slide groove matched with the first slide bar, one end of the movable guide rail is connected to the installation shaft frame, and the other end of the movable guide rail is provided with a sawtooth groove matched with the sawtooth rack;
the second lens is mounted in the second mounting hole of the bracket through the connecting shaft, a second groove for placing the fixed guide rail is formed in one end, away from the second mounting hole, of the second lens, a second sliding rod is mounted in the second groove, a second sliding groove matched with the second sliding rod is formed in the fixed guide rail, two ends of the fixed shaft are respectively connected to the fixed guide rail and the bracket, and a sawtooth guide rail matched with the sawtooth rack is formed in one end of the second lens;
the voice coil motor is externally connected or internally provided with a controller which is connected with the voice coil motor in a control mode and stores the automatic focusing algorithm according to any one of claims 1 to 3, and the controller is respectively connected with the first lens and the second lens through wires or wirelessly for data interaction and control.
5. The auto-focusing vision device according to the auto-focusing algorithm of claim 4, wherein the bracket comprises a bottom plate and a vertical plate, the first mounting hole and the second mounting hole are both opened on the vertical plate, and one end of the fixed shaft is connected to the bottom plate;
the sawtooth rack is an I-shaped sawtooth rack with two ends respectively connected with the sawtooth groove and the sawtooth guide rail in a matching way;
the sawtooth groove and the sawtooth guide rail are both arranged in an arc shape.
6. The auto-focus vision device according to claim 4 or 5, wherein a first convex lens and a first guide pillar are disposed in the first lens, the first slide bar is mounted on the first guide pillar, the first convex lens is mounted on a front end surface of the first lens, and a first photoelectric sensor is mounted at an end of the first guide pillar facing the first convex lens;
a second convex lens and a second guide pillar are arranged in the second lens, the second sliding rod is installed on the second guide pillar, the second convex lens is installed on the front end face of the second lens, and a second photoelectric sensor is installed at one end, facing the second convex lens, of the second guide pillar;
the first photoelectric sensor and the second photoelectric sensor are connected with the controller for data transmission.
7. The auto-focus vision apparatus according to claim 6, wherein said first photosensor has a first filter mounted thereon, and said second photosensor has a second filter mounted thereon.
8. A control method of an auto-focus vision device is characterized by comprising the following steps:
step C1: the voice coil motor drives the sawtooth rack to drive the movable guide rail to move, a first sliding groove on the movable guide rail restrains the first sliding rod so that the first guide pillar drives the first photoelectric sensor to move, meanwhile, the voice coil motor drives the sawtooth rack to drive the sawtooth guide rail to move, and the second sliding rod drives a second photoelectric sensor on the second guide pillar to move under the restraint of the second sliding groove;
step C2: acquiring detection data of the first photoelectric sensor and the second photoelectric sensor in real time and transmitting the detection data to the controller;
step C3: the controller calculates the detection data to obtain control data containing voice coil motor driving data;
step C4: the controller controls the voice coil motor to focus through the control data.
9. The method of claim 8, wherein the first channel shape on the moving rail is determined by the following equation:
X1 = [Z1 + L*f/(L - f*cot α)]*cos α;
Y1 = [Z1 + L*f/(L - f*cot α)]*sin α;
wherein A is the coordinate point of the optical center of the first convex lens in the plane coordinate system XY and is the center of the circle, Z1 is an arbitrary length value, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens, and α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X1 and Y1.
The confirmation formula of the second chute shape on the fixed rail is as follows:
X2 = [Z2 + L*f/(L - f*cos α)]*cos α;
Y2 = [Z2 + L*f/(L - f*cos α)]*sin α;
wherein B is the coordinate point of the optical center of the second convex lens in the plane coordinate system XY, Z2 is an arbitrary length value, L is the distance between the optical center of the first convex lens and the optical center of the second convex lens, f is the focal length of the first convex lens and of the second convex lens, and α is the angle between the axis of the second convex lens and the line connecting the first convex lens and the second convex lens, with 45° ≤ α ≤ 90°; substituting any value of α in this range into the formulas gives the matched X2 and Y2.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 5-6.
CN202111192907.9A 2021-10-13 2021-10-13 Automatic focusing method, automatic focusing visual device and control method thereof Active CN114095628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111192907.9A CN114095628B (en) 2021-10-13 2021-10-13 Automatic focusing method, automatic focusing visual device and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111192907.9A CN114095628B (en) 2021-10-13 2021-10-13 Automatic focusing method, automatic focusing visual device and control method thereof

Publications (2)

Publication Number Publication Date
CN114095628A (en) 2022-02-25
CN114095628B (en) 2023-07-07

Family

ID=80296830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111192907.9A Active CN114095628B (en) 2021-10-13 2021-10-13 Automatic focusing method, automatic focusing visual device and control method thereof

Country Status (1)

Country Link
CN (1) CN114095628B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289144A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) video camera equipment based on all-around vision sensor
CN104853105A (en) * 2015-06-15 2015-08-19 爱佩仪光电技术有限公司 Three-dimensional rapid automatic focusing method based on photographing device capable of controlling inclination of lens
WO2016067648A1 (en) * 2014-10-30 2016-05-06 オリンパス株式会社 Focal point adjustment device, camera system, and focal point adjustment method
CN110568699A (en) * 2019-08-29 2019-12-13 东莞西尼自动化科技有限公司 control method for simultaneously automatically focusing most 12 cameras


Also Published As

Publication number Publication date
CN114095628B (en) 2023-07-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant