CN108600638B - Automatic focusing system and method for camera - Google Patents

Automatic focusing system and method for camera

Info

Publication number
CN108600638B
CN108600638B
Authority
CN
China
Prior art keywords
focusing
camera
image
microprocessor
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810652650.2A
Other languages
Chinese (zh)
Other versions
CN108600638A (en)
Inventor
吴羽峰
金怀洲
金尚忠
唐莹
黄河
袁骁霖
王杰
王赟
张益溢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201810652650.2A priority Critical patent/CN108600638B/en
Publication of CN108600638A publication Critical patent/CN108600638A/en
Application granted granted Critical
Publication of CN108600638B publication Critical patent/CN108600638B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an automatic focusing system and method for a camera, and relates to the technical field of cameras. The method comprises the following steps: the microprocessor receives a picture of the current scene taken by the camera; the picture focus estimation module obtains the in-focus positions of the objects and of the background in the picture; the microprocessor processes this information to obtain relative focus position values and transmits them to the focusing control module, which controls the focusing motor; a picture is taken each time the focusing motor moves to a position; after pictures have been taken at all of the input focus positions, they are all transmitted to the image fusion processing module; and a high-resolution image is finally obtained and either previewed on the camera display screen or stored on the memory chip. The method of the invention overcomes the defects of the prior art, such as inaccurate manual focusing, slow and inefficient automatic focusing, and the inability of existing cameras to obtain an ideal high-resolution image immediately.

Description

Automatic focusing system and method for camera
Technical Field
The invention relates to the technical field of cameras, in particular to an automatic focusing system and method for a camera.
Background
Camera focusing was originally performed manually and has gradually been replaced by automatic focusing systems or modules; with the development of detectors, automatic focusing technology continues to improve. Nowadays more and more cameras and digital cameras adopt automatic focusing technology. As a result, the problems of automatic focusing have become prominent: the focusing precision is insufficient, so the focus position cannot be found accurately; focusing takes a long time, so the user must wait before obtaining a satisfactory image; and even after focusing, some objects remain unclear, requiring post-processing by a technician to obtain a high-resolution image. These problems directly affect the user experience.
An existing all-in-focus imaging system typically comprises three parts: a high-speed imaging section, a high-speed zoom lens section, and a high-speed image processing circuit. Because these requirements are demanding, the cost of producing such cameras is generally high, and the corresponding sales prices are correspondingly high. Moreover, an all-in-focus imaging system renders unimportant scenery sharp, so the target scene to be photographed is not well highlighted, and when viewing the picture the user sometimes cannot obtain the desired effect.
Disclosure of Invention
The technical problem to be solved by the invention is how to provide an automatic focusing system and method for a camera that have high precision and high efficiency and can automatically generate an ideal high-resolution image.
In order to solve the above technical problem, the technical scheme adopted by the invention is as follows: an automatic focusing system for a camera, characterized in that: the focusing control module is bidirectionally connected with the microprocessor; the control end of the focusing motor is connected with the control signal output end of the focusing control module, and the microprocessor controls the action of the focusing motor through the focusing control module; the signal output end of the position feedback module is connected with the signal input end of the focusing control module; the control input end of the camera shake detection module is connected with the signal output end of the focusing control module, and the output end of the camera shake detection module is connected with the signal input end of the microprocessor; the signal output end of the camera lens is connected with the signal input end of the microprocessor; the camera shutter is bidirectionally connected with the microprocessor; the microprocessor is bidirectionally connected with the picture focus estimation module and the image fusion processing module respectively, the picture focus estimation module being used to estimate the focus of the images collected by the camera and the image fusion processing module being used to fuse the images collected by the camera.
The further technical scheme is as follows: the system also comprises a camera display screen which is bidirectionally connected with the microprocessor and is used for receiving the control of the microprocessor and displaying the data output by the microprocessor.
The further technical scheme is as follows: the system also comprises a memory chip which is bidirectionally connected with the microprocessor and is used for storing programs required by the microprocessor and data processed by the microprocessor.
The invention also discloses an automatic focusing method of the camera, which is characterized by comprising the following steps:
the camera lens collects a picture of the current scene at the initial focus position, and under the control of the microprocessor the picture is sent to the picture focus estimation module;
the picture focus estimation module performs focus estimation on the picture and sends the focus information to the microprocessor for processing;
the microprocessor calculates the relative distance the focusing motor must move each time from the focus distance information, obtains the corresponding relative focus distance values, sorts them according to a set rule, and sends them to the focusing control module;
the focusing control module sequentially sends focusing commands to the focusing motor so that the lens takes a picture at each specified focus position; after each command is sent, the position feedback module detects whether the focusing motor has reached the corresponding position and signals the focusing control module; upon receiving this information, the focusing control module signals the camera shake detection module; once the shake detection module detects that the camera is in a stable state it signals the microprocessor, which then releases the camera shutter, and the picture taken is temporarily stored in the microprocessor; this cycle repeats until the focusing control module has issued every relative focus distance value transmitted by the microprocessor;
the microprocessor transmits all the pictures taken to the image fusion processing module for processing, and the image fusion processing module sends the fused image back to the microprocessor;
the microprocessor displays the image on the camera display screen; if the photographer is satisfied with the image and presses the OK key, the microprocessor automatically stores the generated image on the memory chip; the data in the microprocessor are then cleared, and the focusing motor returns to the initial position in preparation for the next shot.
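For illustration only, the following minimal sketch shows the order of operations described in these steps. Python is an assumption, and the camera object with its methods estimate_focus_positions, move_focus_motor, is_stable, capture_picture and fuse_images is a hypothetical hardware interface that the invention does not define.

def autofocus_capture(camera, initial_picture):
    """Run one autofocus-and-fuse cycle on a hypothetical camera interface."""
    # 1. Estimate the relative focus position of each object and of the background
    positions = sorted(camera.estimate_focus_positions(initial_picture))

    # 2. Take one picture at every estimated focus position
    captured = []
    for position in positions:
        camera.move_focus_motor(position)   # focusing control module drives the motor
        while not camera.is_stable():       # shake detection: wait for a stable state
            pass
        captured.append(camera.capture_picture())  # release the shutter

    # 3. Fuse all captured pictures into a single sharp image for preview or storage
    return camera.fuse_images(captured)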
A further technical scheme is that the picture focus estimation module performs focus estimation on the picture as follows:
1) using a camera capable of manual focusing, adjust the focus to the initial origin position and photograph the current scene; focus in turn on each target object, find its optimal imaging position, and record the focus distance each target scene requires to form a sharp picture;
2) train the picture data set using a VGG-16 neural network;
3) finally, use the trained model to classify the picture to be detected and output the focus distance of each target scene in that picture.
The further technical scheme is that the method for calculating each parameter of the VGG-16 neural network comprises the following steps:
1) the formula for the convolutional layer is as follows:
$x_j^l = f\big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\big)$
wherein $x_j^l$ denotes the j-th feature map of the l-th convolutional layer, $x_i^{l-1}$ denotes the i-th feature map of the (l-1)-th convolutional layer, $M_j$ denotes the set of input feature maps, $k_{ij}^l$ denotes the convolution kernel, $b_j^l$ denotes the bias term, and $f(\cdot)$ is the activation function;
2) The mode adopted by the pooling layer is maximum pooling, and the calculation formula is as follows:
Figure BDA0001704530750000036
3) the sizes of the three fully connected layers are 4096, 4096 and 1000 respectively, where the last fully connected layer is changed from 1000 outputs to the number of categories of the input data set, namely 120, for classification;
4) the parameters are initialized directly with the Gaussian-distributed random weight method proposed by the VGG team;
5) the L2 penalty coefficient: since the data set is small, the value used in the invention is 0.0001;
6) momentum is a parameter introduced to accelerate stochastic gradient descent; the invention adopts the improved Nesterov momentum method and sets the momentum to 0.9.
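By way of illustration only, the following minimal sketch shows one way such a classifier could be configured with the parameters listed above. Python with the PyTorch/torchvision libraries is an assumption of this sketch and is not specified by the invention; NUM_CLASSES, train_step and the learning rate of 0.01 are likewise illustrative.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 120  # number of focus-distance categories in the data set

# VGG-16: 13 convolutional layers + 3 fully connected layers (4096, 4096, 1000)
model = models.vgg16()

# Replace the last fully connected layer: 1000 outputs -> 120 classes
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Gaussian (normal) random initialization of the replaced layer (illustrative std)
nn.init.normal_(model.classifier[6].weight, mean=0.0, std=0.01)
nn.init.zeros_(model.classifier[6].bias)

# SGD with Nesterov momentum 0.9 and L2 penalty (weight decay) 0.0001
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True,
                            weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One training step on a batch of (pictures, focus-distance labels)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()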
A further technical scheme is that the microprocessor transmits all the pictures taken to the image fusion processing module, which processes them as follows:
divide the original images into blocks of size b×b, and use A_i, B_i, C_i, … to denote the i-th block of images A, B, C, … respectively;
calculate a quality evaluation parameter λ for each block of each image, recorded respectively as λ_{A_i}, λ_{B_i}, λ_{C_i}, …;
compare λ_{A_i}, λ_{B_i}, λ_{C_i}, … and mark the largest value within the same region as 1;
apply morphological operations to the regions marked 1 and then fuse the images using different fusion rules.
The beneficial effects produced by the above technical scheme are as follows: the invention overcomes the defects of the prior art, such as inaccurate manual focusing, slow and inefficient automatic focusing, and the inability to obtain an ideal high-resolution image. The system and method have high focusing precision, high speed and high efficiency, and can immediately produce an ideal high-resolution image.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a functional block diagram of a system according to an embodiment of the present invention;
FIG. 2 is a flow chart of the processing of the focus estimation module in the method according to an embodiment of the invention;
FIG. 3 is a network architecture diagram of VGG-16 in the method of the embodiment of the invention;
FIG. 4 is a flow chart of multi-focus image fusion for quality assessment and wavelet fusion in the method according to the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
As shown in fig. 1, the embodiment of the present invention discloses an automatic focusing system for a camera, which includes a microprocessor module, preferably implemented with a DSP; the focusing control module is bidirectionally connected with the microprocessor; the control end of the focusing motor is connected with the control signal output end of the focusing control module, and the microprocessor controls the action of the focusing motor through the focusing control module; the signal output end of the position feedback module is connected with the signal input end of the focusing control module; the control input end of the camera shake detection module is connected with the signal output end of the focusing control module, and the output end of the camera shake detection module is connected with the signal input end of the microprocessor; the signal output end of the camera lens is connected with the signal input end of the microprocessor; the camera shutter is bidirectionally connected with the microprocessor; the microprocessor is bidirectionally connected with the picture focus estimation module and the image fusion processing module respectively, the picture focus estimation module being used to estimate the focus of the images collected by the camera and the image fusion processing module being used to fuse the images collected by the camera.
The camera display screen connected to the DSP can preview the resulting high resolution image and the operator can confirm that the high resolution image is satisfactory.
The camera shake detection module connected with the DSP detects whether the camera is in a stable state after the focusing motor has rotated, and if the camera is stable it sends a signal informing the DSP that the shutter can be released. The camera shake detection module comprises a shake detection sensor, and whether the camera is stable is judged from the data given by this sensor.
The focusing control module connected with the DSP is used to receive the relative focus distance values sent by the DSP chip, sequentially control the focusing motor to rotate to the corresponding positions, receive the information fed back by the position feedback module, judge whether the motor has moved to the corresponding position and where the focusing motor currently is, and control the operation of the camera shake detection module.
The invention also discloses an automatic focusing method of the camera, which comprises the following steps:
the method comprises the following steps: the camera lens collects the image of the current scene at the initial position of the focus, and the image is controlled by the DSP and sent to the image focus estimation module.
Step two: the picture focus estimation module performs focus estimation on the picture and sends the focus information to the DSP for processing. Referring to fig. 2, the processing method of the focus estimation module is as follows:
(1) Using a camera capable of manual focusing, adjust the focus to the initial origin position and photograph the current scene; then focus in turn, find the optimal imaging position of each target object, and record the focus distance required by each target scene. In this way, pictures of the same target scene at the same and at different distances, and pictures of different target scenes at the same and at different distances, are collected in sequence. Samples are taken under different environmental conditions such as bright sunshine and overcast skies. There are 12 groups of pictures in total, each group containing about 100 pictures, and the focus distance of the pixel regions of each picture with different degrees of sharpness (at most 10 levels of sharpness) is labeled by category.
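The following is a minimal, illustrative sketch of how such a labeled picture set could be loaded for training. The CSV layout, the class FocusDistanceDataset and its fields are assumptions for illustration; the patent does not prescribe any particular storage format.

import csv
from PIL import Image
from torch.utils.data import Dataset

class FocusDistanceDataset(Dataset):
    """Loads (picture, focus-distance class) pairs from a label file.
    Expected CSV rows: path/to/picture.jpg,<class index 0..119>"""
    def __init__(self, label_csv, transform=None):
        with open(label_csv) as f:
            self.samples = [(path, int(cls)) for path, cls in csv.reader(f)]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return img, label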
(2) Referring to fig. 3, white represents a convolutional layer (convolution) followed by an activation layer (ReLU), and dark gray represents a fully connected layer followed by an activation layer (ReLU). The neural network consists of 13 convolutional layers for feature extraction and 3 fully connected layers, with 5 max-pooling layers interspersed among the convolutional layers.
The step of calculating each parameter of the VGG-16 neural network further comprises the following steps:
the formula for the convolutional layer is as follows:
Figure BDA0001704530750000061
wherein,
Figure BDA0001704530750000062
a jth feature map representing the ith convolution,
Figure BDA0001704530750000063
ith feature map, M, representing the l-1 th convolutionjA set of input feature maps is represented,
Figure BDA0001704530750000064
which represents the kernel of the convolution,
Figure BDA0001704530750000065
a bias term is represented.
The pooling layer adopts maximum pooling, which takes the maximum value over each pooling window $R$ of the previous layer's feature map: $x_j^l = \max_{R}\big(x_j^{l-1}\big)$.
the sizes of the three full connection layers are 4096, 4096 and 1000 respectively. Where the last fully connected layer should change 1000 to the number of categories 120 of the input data set for classification.
The parameters are initialized directly using the method of Gaussian distribution random generation weights proposed by the VGG team.
Penalty factor L2Since the data set is small, the value used in the present invention is 0.0001.
Momentum is a physical parameter set to accelerate random gradient descent, and the present invention employs a modified Nesterov momentum method and sets the momentum to 0.9.
Training with these parameters is greatly improved compared with the training result of the original VGG-16: the best accuracy of the original VGG-16 is about 90%, while the best accuracy after fine-tuning is about 95%.
(3) Finally, the trained model is used to classify the picture to be detected and to output the focus distance of each target scene in the picture to be detected. The final accuracy is maintained at about 93%.
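As a brief, illustrative continuation of the training sketch given earlier, the following shows the inference step. The preprocessing pipeline and the function predict_focus_classes are assumptions, as is the mapping of each class index to a physical focus distance.

import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # VGG-16 input size
    transforms.ToTensor(),
])

@torch.no_grad()
def predict_focus_classes(model, region_crops):
    """Return the predicted focus-distance class index for each image region."""
    model.eval()
    batch = torch.stack([preprocess(crop) for crop in region_crops])
    return model(batch).argmax(dim=1).tolist()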
Step three: the DSP chip calculates the relative distance the focusing motor must move each time from the focus distance information, obtains the corresponding relative focus distance values, sorts them according to a set rule, and sends them to the focusing control module.
Step four: the focusing control module sequentially sends focusing commands to the focusing motor so that the lens takes a picture at each specified focus position. After each command is sent, the position feedback module detects whether the focusing motor has reached the corresponding position and signals the focusing control module; upon receiving this information, the focusing control module signals the camera shake detection module; once the shake detection module detects that the camera is in a stable state it signals the DSP chip, which then releases the camera shutter, and the picture taken is temporarily stored on the DSP chip. This cycle repeats until the focusing control module has issued every relative focus distance value transmitted by the DSP chip.
Step five: the DSP chip transmits all the pictures taken to the image fusion processing module, which sends the fused image back to the DSP. With reference to fig. 4, the steps of quality assessment and wavelet fusion for multi-focus image fusion are further explained as follows:
Divide the original images into blocks of size b×b, and use A_i, B_i, C_i, … to denote the i-th block of images A, B, C, … respectively.
Calculate a quality evaluation parameter λ for each block of each image, recorded respectively as λ_{A_i}, λ_{B_i}, λ_{C_i}, ….
Compare λ_{A_i}, λ_{B_i}, λ_{C_i}, … and mark the largest value within the same region as 1.
Apply morphological operations to the regions marked 1 and then fuse the images using different fusion rules.
Step six: the DSP chip displays the image on the camera display screen, and if the photographer is satisfied and presses the OK key, the DSP chip automatically stores the generated image on the memory chip. The internal data of the DSP are then cleared, and the focusing motor returns to the initial position in preparation for the next shot.

Claims (4)

1. An automatic focusing method of a camera is characterized by comprising the following steps:
the camera lens collects a picture of the current scene at the initial focus position, and under the control of the microprocessor the picture is sent to the picture focus estimation module;
the picture focus estimation module performs focus estimation on the picture and sends the focus information to the microprocessor for processing;
the microprocessor calculates the relative distance the focusing motor must move each time from the focus distance information, obtains the corresponding relative focus distance values, sorts them according to a set rule, and sends them to the focusing control module;
the focusing control module sequentially sends focusing commands to the focusing motor so that the lens takes a picture at each specified focus position; after each command is sent, the position feedback module detects whether the focusing motor has reached the corresponding position and signals the focusing control module; upon receiving this information, the focusing control module signals the camera shake detection module; once the shake detection module detects that the camera is in a stable state it signals the microprocessor, which then releases the camera shutter, and the picture taken is temporarily stored in the microprocessor; this cycle repeats until the focusing control module has issued every relative focus distance value transmitted by the microprocessor;
the microprocessor transmits all the pictures taken to the image fusion processing module for processing, and the image fusion processing module sends the fused image back to the microprocessor;
the microprocessor displays the image on the camera display screen; if the photographer is satisfied with the image and presses the OK key, the microprocessor automatically stores the generated image on the memory chip; the data in the microprocessor are then cleared, and the focusing motor returns to the initial position in preparation for the next shot.
2. The camera auto-focusing method according to claim 1, wherein the picture focus estimation module performs focus estimation on a picture as follows:
1) using a camera capable of manual focusing, adjust the focus to the initial origin position and photograph the current scene; focus in turn on each target object, find its optimal imaging position, and record the focus distance each target scene requires to form a sharp picture;
2) train the picture data set using a VGG-16 neural network;
3) finally, use the trained model to classify the picture to be detected and output the focus distance of each target scene in that picture.
3. The camera auto-focusing method according to claim 2, wherein the VGG-16 neural network parameters are calculated as follows:
1) the formula for the convolutional layer is as follows:
$x_j^l = f\big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\big)$
wherein $x_j^l$ denotes the j-th feature map of the l-th convolutional layer, $x_i^{l-1}$ denotes the i-th feature map of the (l-1)-th convolutional layer, $M_j$ denotes the set of input feature maps, $k_{ij}^l$ denotes the convolution kernel, $b_j^l$ denotes the bias term, and $f(\cdot)$ is the activation function;
2) the pooling layer adopts maximum pooling, which takes the maximum value over each pooling window $R$ of the previous layer's feature map: $x_j^l = \max_{R}\big(x_j^{l-1}\big)$;
3) the sizes of the three fully connected layers are 4096, 4096 and 1000 respectively, where the last fully connected layer is changed from 1000 outputs to the number of categories of the input data set, namely 120, for classification;
4) the parameters are initialized directly with the Gaussian-distributed random weight method proposed by the VGG team;
5) the L2 penalty coefficient: since the data set is small, the value used in the invention is 0.0001;
6) momentum is a parameter introduced to accelerate stochastic gradient descent; the invention adopts the improved Nesterov momentum method and sets the momentum to 0.9.
4. The camera auto-focusing method according to claim 2, wherein the microprocessor transmits all the pictures taken to the image fusion processing module, which processes them as follows:
divide the original images into blocks of size b×b, and use A_i, B_i, C_i, … to denote the i-th block of images A, B, C, … respectively;
calculate a quality evaluation parameter λ for each block of each image, recorded respectively as λ_{A_i}, λ_{B_i}, λ_{C_i}, …;
compare λ_{A_i}, λ_{B_i}, λ_{C_i}, … and mark the largest value within the same region as 1;
apply morphological operations to the regions marked 1 and then fuse the images using different fusion rules.
CN201810652650.2A 2018-06-22 2018-06-22 Automatic focusing system and method for camera Active CN108600638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810652650.2A CN108600638B (en) 2018-06-22 2018-06-22 Automatic focusing system and method for camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810652650.2A CN108600638B (en) 2018-06-22 2018-06-22 Automatic focusing system and method for camera

Publications (2)

Publication Number Publication Date
CN108600638A CN108600638A (en) 2018-09-28
CN108600638B (en) 2020-08-04

Family

ID=63633920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810652650.2A Active CN108600638B (en) 2018-06-22 2018-06-22 Automatic focusing system and method for camera

Country Status (1)

Country Link
CN (1) CN108600638B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109803090B (en) * 2019-01-25 2021-09-28 睿魔智能科技(深圳)有限公司 Automatic zooming method and system for unmanned shooting, unmanned camera and storage medium
CN109788282B (en) * 2019-03-19 2020-07-07 深圳市同为数码科技股份有限公司 Adjusting method of automatic focusing device for camera lens
CN110365971B (en) * 2019-07-17 2021-05-18 上海集成电路研发中心有限公司 Test system and method for automatically positioning optimal fixed focus
JP7395910B2 (en) * 2019-09-27 2023-12-12 ソニーグループ株式会社 Information processing equipment, electronic equipment, terminal devices, information processing systems, information processing methods and programs
CN112469984B (en) * 2019-12-31 2024-04-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
WO2021135393A1 (en) * 2019-12-31 2021-07-08 深圳迈瑞生物医疗电子股份有限公司 Image analysis apparatus and imaging method thereof
CN112363309B (en) * 2020-11-13 2023-02-17 杭州医派智能科技有限公司 Automatic focusing method and system for pathological image under microscope

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101943840A (en) * 2009-07-02 2011-01-12 佳能株式会社 Image pickup apparatus
CN102483508A (en) * 2009-07-23 2012-05-30 株式会社理光 Imaging device and imaging method
CN102812391A (en) * 2010-01-12 2012-12-05 株式会社理光 Auto-focus controlling apparatus, electronic imaging apparatus and digital still camera
CN102946515A (en) * 2012-11-27 2013-02-27 凯迈(洛阳)测控有限公司 Full-automatic focusing device and method for infrared imaging equipment
JP2017504826A (en) * 2014-03-21 2017-02-09 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Image device, method for automatic focusing in an image device, and corresponding computer program
CN107864315A (en) * 2016-09-21 2018-03-30 佳能株式会社 The control method and recording medium of picture pick-up device, picture pick-up device

Also Published As

Publication number Publication date
CN108600638A (en) 2018-09-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant