CN105163036B - Method for automatic lens focusing - Google Patents
Method for automatic lens focusing
- Publication number: CN105163036B
- Application number: CN201510657845.2A
- Authority
- CN
- China
- Prior art keywords
- sub-image
- lens
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The present invention provides a method for automatic lens focusing, comprising the following steps: (1) the focus motor moves step by step through its movable range; at each step the lens captures multiple images, the definition (sharpness) value of each image is calculated, and the values are averaged; (2) all the resulting averages are compared to find the maximum image definition value, and the focus motor is moved to the position corresponding to that maximum, completing autofocus. The invention achieves autofocus in software: a fixed algorithm processes the information contained in the digital image to obtain a control quantity that drives a stepper motor, moving the lens back and forth until a sharp image is obtained. Focusing is both fast and accurate.
Description
Technical field
The present invention relates to the technical field of visual measurement, and in particular to a method for automatic lens focusing.
Background art
In terms of basic principle, autofocus methods fall into two broad classes: active focusing, i.e. ranging autofocus based on measuring the distance between the lens and the subject; and passive focusing, i.e. focus-detection autofocus based on achieving a sharp image on the focusing screen.
Active focusing mainly comprises infrared ranging and ultrasonic ranging. In infrared ranging, the camera actively emits infrared light as a ranging source and calculates the focus distance from the geometric relationship formed between the infrared diodes. Ultrasonic ranging measures distance from the propagation time of an ultrasonic wave between the digital camera and the subject. Because infrared and ultrasonic autofocus measure distance by actively emitting light or sound waves, they are called active autofocus. Passive focusing mainly comprises the contrast method and the phase method. The contrast method achieves autofocus by detecting the contour edges of the image: the sharper the contour edges, the larger their brightness gradient, in other words the greater the contrast between edge objects and the background. Conversely, in a defocused image the contour edges are blurred, the brightness gradient or contrast drops, and the further out of focus, the lower the contrast. The phase method achieves autofocus by detecting the offset of the image.
Summary of the invention
The object of the present invention is to address the problems of the prior art by providing a method for automatic lens focusing that overcomes the slow focusing speed and limited accuracy of existing contrast-method focusing. This object is achieved through the following technical solution:
A method for automatic lens focusing, characterised in that it comprises the following steps:
(1) the focus motor moves step by step within its movable range; at each step the lens captures multiple images, the definition value of each image is calculated, and the values are averaged;
(2) all the resulting averages are compared to find the maximum image definition value, and the focus motor is moved to the position corresponding to that maximum, completing autofocus.
As a further technical scheme, the definition value of each image is calculated as follows:
(11) the Y component of the image is divided into 3x3 = 9 sub-images;
(12) each sub-image is divided into blocks of 8x8 pixels; for each block, the corresponding 8x8 matrix I is built from its pixel values, an integer transform is applied to each matrix I to obtain the 8x8 transform coefficients, and each coefficient is squared to obtain the block's transform energy matrix BE;
(13) for each sub-image, the coefficients of the BE matrices of its blocks are accumulated position by position, each sub-image yielding a final sub-image transform energy matrix PE(i), an 8x8 matrix, with i ranging over 0~8;
(14) the 8x8 coefficients of the nine sub-image matrices PE(i) are accumulated position by position with weights to obtain the full-image transform energy matrix E, also an 8x8 matrix;
(15) the 8x8 coefficients of E are scanned in Zig-Zag order into an array SE(x), x = 0~63, where SE(0) is called the DC energy and SE(1)~SE(63) are called the AC data; SE(1), SE(2), ..., SE(63) are multiplied by 1, 2, ..., 63 respectively to obtain FSE(1), FSE(2), ..., FSE(63), which are accumulated as FX = FSE(1) + FSE(2) + ... + FSE(63); FX is called the AC energy, and dividing the AC energy FX by the DC energy SE(0) gives the definition value of the image.
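Taken together, steps (11)~(15) amount to a transform-energy sharpness metric. The sketch below is illustrative Python, not the patent's implementation: it applies the metric to a single grayscale (Y) image without the 3x3 sub-image weighting, reads the patent's "FSE(0)" as the DC term SE(0) (FSE(x) = x·SE(x) would make FSE(0) identically zero), and uses the conventional 2-D block transform T·I·Tᵀ, whereas the patent writes the one-dimensional form F = I*T8x8.

```python
import numpy as np

# T8x8 as transcribed from claim 3.
T8 = np.array([
    [ 8,   8,   8,   8,   8,   8,   8,   8],
    [12,  10,   6,   3,  -3,  -6, -10, -12],
    [ 8,   4,  -4,  -8,  -8,  -4,   4,   8],
    [10,  -3, -12,  -6,   6,  12,   3, -10],
    [ 8,  -8,  -8,   8,   8,  -8,  -8,   8],
    [ 6, -12,   3,  10, -10,  -3,  12,  -6],
    [ 4,  -8,   8,  -4,  -4,   8,  -8,   4],
    [ 3,  -6,  10, -12,  12, -10,   6,  -3],
], dtype=np.int64)

def zigzag_indices(n=8):
    # Walk the anti-diagonals, alternating direction (JPEG-style zig-zag).
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def definition_value(y):
    """Sharpness of an 8x8-aligned grayscale image, per steps (11)-(15)."""
    h, w = y.shape
    E = np.zeros((8, 8))
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            I = y[r:r+8, c:c+8].astype(np.int64)
            F = T8 @ I @ T8.T      # 2-D block transform (see note above)
            E += F * F             # BE(x, y) = F(x, y)^2, accumulated into E
    SE = np.array([E[i, j] for i, j in zigzag_indices()], dtype=float)
    FX = sum(x * SE[x] for x in range(1, 64))   # AC energy: sum of x * SE(x)
    return FX / SE[0]                           # AC energy over DC energy SE(0)
```

On a flat image all AC terms vanish and the value is 0; any edge content raises it, which is what makes the quantity usable as a focus criterion.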
As a further technical scheme, of the 9 sub-images divided in step (11), the centre sub-image is larger and the surrounding sub-images are smaller.
As a further technical scheme, the transform formula applied to each matrix I in step (12) is F = I*T8x8, where F is the transformed 8x8 matrix and T is the following fixed matrix:

T8x8 =
  [  8    8    8    8    8    8    8    8 ]
  [ 12   10    6    3   -3   -6  -10  -12 ]
  [  8    4   -4   -8   -8   -4    4    8 ]
  [ 10   -3  -12   -6    6   12    3  -10 ]
  [  8   -8   -8    8    8   -8   -8    8 ]
  [  6  -12    3   10  -10   -3   12   -6 ]
  [  4   -8    8   -4   -4    8   -8    4 ]
  [  3   -6   10  -12   12  -10    6   -3 ]
As a further technical scheme, in the weighted accumulation of step (14), the weight coefficient of the surrounding sub-images is smaller than that of the centre sub-image.
As a further technical scheme, the weight coefficient of the surrounding sub-images is 0.5~1 times that of the centre sub-image.
Compared with the prior art, the present invention achieves autofocus in software: a fixed algorithm processes the information contained in the digital image to obtain a control quantity that drives a stepper motor, moving the lens back and forth until a sharp image is obtained; focusing is both fast and accurate. Because this passive approach requires no additional ranging equipment, instruments using it are generally compact, portable and flexible in use, and it can be applied in optical systems such as digital cameras and webcams.
Brief description of the drawings
Fig. 1 shows the nine sub-images of the 3x3 partition;
Fig. 2 shows the Zig-Zag scanning order;
Fig. 3 shows the implementation framework of the autofocus method.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment
The present invention provides a method for automatic lens focusing that belongs to the contrast-based class of passive focusing. The method computes image definition region by region, then takes a weighted average of the regional definition values to obtain the final image definition; guided by this value, the focus motor is moved within its limited range to search for the maximum image definition.
As shown in Fig. 3, the realisation principle of the method is as follows: the lens optical system gathers the image, the video data transmitted by the CMOS image sensor is received and processed by the autofocus adjustment module, and clear image data is obtained. The specific implementation steps are:
(1) Receive the YUV video data transmitted by the CMOS image sensor;
(2) Divide the Y component of one frame into 3x3 = 9 sub-images, each of which is further divided into a number of 8x8 blocks. As shown in Fig. 1, the centre sub-image is larger and the surrounding sub-images are smaller; naming them P0, P1, P2, P3, P4, P5, P6, P7, P8 in left-to-right, top-to-bottom order, P4 is the largest, centre sub-image;
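This partition can be sketched as below. The code is illustrative only: the patent does not specify the exact proportions, so the quarter/half/quarter split is an assumption.

```python
import numpy as np

def split_3x3(y):
    """Split a Y plane into P0..P8, left-to-right, top-to-bottom, with the
    centre sub-image P4 largest. The boundary choice (outer quarters, centre
    half) is an assumed example, not taken from the patent."""
    h, w = y.shape
    rs = [0, h // 4, 3 * h // 4, h]   # row boundaries
    cs = [0, w // 4, 3 * w // 4, w]   # column boundaries
    return [y[rs[i]:rs[i+1], cs[j]:cs[j+1]] for i in range(3) for j in range(3)]
```

With a 64x64 frame, for example, this yields a 32x32 centre sub-image P4 surrounded by smaller 16-pixel-wide border sub-images.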
(3) Each sub-image is divided into MxN 8x8 blocks; for each block, build the corresponding 8x8 matrix I from its pixel values, apply the integer transform to obtain the 8x8 transform coefficients, and square each coefficient.
The integer transform formula is F = I*T8x8, where F is the transformed 8x8 matrix, I is the input 8x8 image block, and T is the following fixed matrix:

T8x8 =
  [  8    8    8    8    8    8    8    8 ]
  [ 12   10    6    3   -3   -6  -10  -12 ]
  [  8    4   -4   -8   -8   -4    4    8 ]
  [ 10   -3  -12   -6    6   12    3  -10 ]
  [  8   -8   -8    8    8   -8   -8    8 ]
  [  6  -12    3   10  -10   -3   12   -6 ]
  [  4   -8    8   -4   -4    8   -8    4 ]
  [  3   -6   10  -12   12  -10    6   -3 ]

The squaring formula for each coefficient is:
BE(x, y) = F(x, y) * F(x, y), x = 0~7, y = 0~7, where BE is the transform energy matrix.
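For a single block, this step can be sketched as follows (illustrative names; the matrix product follows the patent's formula F = I*T8x8 literally, which transforms along one dimension only — the conventional 2-D separable form would be T·I·Tᵀ):

```python
import numpy as np

# T8 transcribed from the fixed matrix given in claim 3.
T8 = np.array([
    [ 8,   8,   8,   8,   8,   8,   8,   8],
    [12,  10,   6,   3,  -3,  -6, -10, -12],
    [ 8,   4,  -4,  -8,  -8,  -4,   4,   8],
    [10,  -3, -12,  -6,   6,  12,   3, -10],
    [ 8,  -8,  -8,   8,   8,  -8,  -8,   8],
    [ 6, -12,   3,  10, -10,  -3,  12,  -6],
    [ 4,  -8,   8,  -4,  -4,   8,  -8,   4],
    [ 3,  -6,  10, -12,  12, -10,   6,  -3],
], dtype=np.int64)

def block_energy(block):
    """Transform one 8x8 pixel block and square each coefficient."""
    I = np.asarray(block, dtype=np.int64)
    F = I @ T8                 # F = I * T8x8, as written in the patent
    return F * F               # BE(x, y) = F(x, y) * F(x, y)
```

The resulting BE matrices of all MxN blocks of a sub-image are then accumulated in step (4).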
(4) For each sub-image, accumulate position by position the coefficients of the transform energy matrices BE of the MxN 8x8 blocks into which it was divided; each sub-image yields its final sub-image transform energy matrix PE(i), an 8x8 matrix, i = 0~8;
(5) Accumulate position by position, with weights, the 8x8 coefficients of the nine final sub-image transform energy matrices PE(i); the weight coefficient of the surrounding sub-images is smaller than that of the centre sub-image (0.5~1 times the centre weight; this embodiment uses 0.6 times). The result is the full-image transform energy matrix E, an 8x8 matrix.
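The weighted accumulation can be sketched as below (illustrative names; weights as in this embodiment, 1.0 for the centre P4 and 0.6 for the surround):

```python
import numpy as np

def full_image_energy(PE, centre_w=1.0, surround_w=0.6):
    """Weighted position-by-position accumulation of the nine sub-image
    energy matrices PE[0..8] into the full-image energy matrix E.
    PE[4] is the centre sub-image and gets the larger weight."""
    E = np.zeros((8, 8))
    for i, pe in enumerate(PE):
        E += (centre_w if i == 4 else surround_w) * pe
    return E
```

Weighting the centre more heavily biases the focus criterion toward the middle of the frame, where the subject usually sits.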
(6) Scan the 8x8 coefficients of the full-image transform energy matrix E in Zig-Zag order into an array SE(x), x = 0~63, where SE(0) is called the DC energy and SE(1)~SE(63) are called the AC data; multiply SE(1), SE(2), ..., SE(63) by 1, 2, ..., 63 respectively to obtain FSE(1), FSE(2), ..., FSE(63), and accumulate FX = FSE(1) + FSE(2) + ... + FSE(63); FX is called the AC energy. The Zig-Zag scanning order is shown in Fig. 2.
(7) Divide the AC energy FX by the DC energy SE(0) to obtain the image definition Fc.
(8) With the image definition value Fc as the criterion, move the focus motor step by step over its movement range (the motor is fixed, so its movement range is also fixed); at each step, capture multiple images at that position, calculate the definition value of each image, and average them. Then compare all the definition averages to find the focus motor position giving the maximum image definition value Fc, and finally move the motor to that position, completing autofocus.
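The search in step (8) can be sketched as an exhaustive sweep. `move_motor`, `grab_frame`, and `sharpness` are hypothetical hooks standing in for the motor driver, the CMOS capture path, and the definition metric above; this is an illustrative sketch, not the patent's firmware.

```python
def autofocus(move_motor, grab_frame, sharpness, positions, frames_per_step=3):
    """Sweep the motor range, average the sharpness of several frames at each
    step, and settle at the position with the maximum averaged definition."""
    best_pos, best_avg = None, float("-inf")
    for pos in positions:
        move_motor(pos)
        vals = [sharpness(grab_frame()) for _ in range(frames_per_step)]
        avg = sum(vals) / len(vals)      # average over multiple frames per step
        if avg > best_avg:
            best_pos, best_avg = pos, avg
    move_motor(best_pos)                 # finish at the sharpest position found
    return best_pos, best_avg
```

Averaging several frames per step suppresses sensor noise, which is why the patent captures multiple images at each motor position rather than one.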
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (5)
1. A method for automatic lens focusing, characterised in that it comprises the following steps:
(1) the focus motor moves step by step within its movable range; at each step the lens captures multiple images, the definition value of each image is calculated, and the values are averaged;
(2) all the resulting averages are compared to find the maximum image definition value, and the focus motor is moved to the position corresponding to that maximum, completing autofocus;
and the definition value of each image is calculated as follows:
(11) the Y component of the image is divided into 3x3 = 9 sub-images;
(12) each sub-image is divided into blocks of 8x8 pixels; for each block, the corresponding 8x8 matrix I is built from its pixel values, an integer transform is applied to each matrix I to obtain the 8x8 transform coefficients, and each coefficient is squared to obtain the block's transform energy matrix BE;
(13) for each sub-image, the coefficients of the BE matrices of its blocks are accumulated position by position, each sub-image yielding a final sub-image transform energy matrix PE(i), an 8x8 matrix, with i ranging over 0~8;
(14) the 8x8 coefficients of the nine sub-image matrices PE(i) are accumulated position by position with weights to obtain the full-image transform energy matrix E, also an 8x8 matrix;
(15) the 8x8 coefficients of E are scanned in Zig-Zag order into an array SE(x), x = 0~63, where SE(0) is called the DC energy and SE(1)~SE(63) are called the AC data; SE(1), SE(2), ..., SE(63) are multiplied by 1, 2, ..., 63 respectively to obtain FSE(1), FSE(2), ..., FSE(63), which are accumulated as FX = FSE(1) + FSE(2) + ... + FSE(63); FX is called the AC energy, and dividing the AC energy FX by the DC energy SE(0) gives the definition value of the image.
2. The method for automatic lens focusing according to claim 1, characterised in that, of the 9 sub-images divided in step (11), the centre sub-image is larger and the surrounding sub-images are smaller.
3. The method for automatic lens focusing according to claim 1, characterised in that the transform formula applied to each matrix I in step (12) is F = I*T8x8, where F is the transformed 8x8 matrix and T is the following fixed matrix:
T8x8 =
  [  8    8    8    8    8    8    8    8 ]
  [ 12   10    6    3   -3   -6  -10  -12 ]
  [  8    4   -4   -8   -8   -4    4    8 ]
  [ 10   -3  -12   -6    6   12    3  -10 ]
  [  8   -8   -8    8    8   -8   -8    8 ]
  [  6  -12    3   10  -10   -3   12   -6 ]
  [  4   -8    8   -4   -4    8   -8    4 ]
  [  3   -6   10  -12   12  -10    6   -3 ]
4. The method for automatic lens focusing according to claim 1, characterised in that in the weighted accumulation of step (14), the weight coefficient of the surrounding sub-images is smaller than that of the centre sub-image.
5. The method for automatic lens focusing according to claim 4, characterised in that the weight coefficient of the surrounding sub-images is 0.5~1 times that of the centre sub-image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510657845.2A CN105163036B (en) | 2015-10-13 | 2015-10-13 | Method for automatic lens focusing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105163036A CN105163036A (en) | 2015-12-16 |
CN105163036B true CN105163036B (en) | 2018-03-09 |
Family
ID=54803778
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101762232A (en) * | 2008-12-23 | 2010-06-30 | 鸿富锦精密工业(深圳)有限公司 | Multi-surface focusing system and method |
CN101782369A (en) * | 2009-01-16 | 2010-07-21 | 鸿富锦精密工业(深圳)有限公司 | Image measurement focusing system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |