CN115802161B - Focusing method, system, terminal and medium based on self-learning - Google Patents

Focusing method, system, terminal and medium based on self-learning

Publication number
CN115802161B
Authority
CN (China)
Prior art keywords
focusing, value, trusted, object distance, motor control
Legal status
Active
Application number
CN202310086377.2A
Other languages
Chinese (zh)
Other versions
CN115802161A
Inventors
程景, 葛天杰, 洪志冰
Current Assignee
Hangzhou Xingxi Technology Co ltd
Original Assignee
Hangzhou Xingxi Technology Co ltd
Application filed by Hangzhou Xingxi Technology Co ltd
Priority to CN202310086377.2A; published as CN115802161A; granted and published as CN115802161B

Landscapes

  • Focusing (AREA)

Abstract

The application provides a self-learning-based focusing method, system, terminal and medium. The focusing method comprises the following steps: acquiring object distance value data of the currently photographed subject; searching an updated focusing curve for a corresponding trusted value according to the object distance value data of the currently photographed subject; and, if a trusted value exists, using the found trusted value as the lens motor control value to drive the lens motor to a specified position for focusing. According to the invention, an updated focusing curve can be obtained by learning and calibration through a self-learning algorithm, so that in subsequent focusing the object distance value obtained through TOF alone is enough to focus to an accurate position in one step. Repeated CAF fine focusing is no longer required, which removes the picture-blurring stage produced by CAF focusing, saves the computing power CAF focusing would consume, and greatly shortens the focusing time. The invention has low implementation cost and good system compatibility, and the parameters involved in the algorithm can be adjusted dynamically, giving the algorithm excellent extensibility and adjustability.

Description

Focusing method, system, terminal and medium based on self-learning
Technical Field
The present disclosure relates to the field of camera focusing algorithms, and in particular, to a self-learning-based focusing method, system, terminal, and medium.
Background
The current mainstream focusing algorithms fall into three types: the CAF focusing algorithm, the PD focusing algorithm, and the TOF focusing algorithm. Because TOF focusing is subject to errors from various sources, such as calibration error, focus-curve fitting error, motor control error, lens consistency error, camera lens assembly consistency error and temperature drift, the TOF focusing algorithm is usually used in combination with the CAF algorithm.
The conventional focusing strategy combines TOF coarse focusing with CAF fine focusing. In each focusing operation, TOF is used to calculate a focusing value, and a motor control value is then read from a pre-calibrated focusing fitting curve to control the focusing motor, completing the coarse focusing stage. After coarse focusing, the CAF algorithm performs fine focusing (focus stabilization): the motor is controlled step by step, sharpness indexes are calculated repeatedly, and the motor is finally moved to the optimal focusing position. However, this focusing strategy suffers from the following disadvantages:
(1) The TOF coarse focusing stage may contain large errors, so the subsequent CAF search takes more time;
(2) Every focusing operation requires CAF fine focusing, so the focusing time is long;
(3) Because the focusing motor is driven back and forth, the CAF focusing process makes the image repeatedly cycle between sharp and blurred.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide a focusing method, system, terminal and medium based on self-learning, which solve the technical problems that the existing focusing algorithm has a long focusing time and a repeated sharp-to-blurred transition.
To achieve the above and other related objects, a first aspect of the present application provides a self-learning-based focusing method, including: acquiring object distance value data of the currently photographed subject; searching an updated focusing curve for a corresponding trusted value according to the object distance value data of the currently photographed subject; if a trusted value exists, using the found trusted value as the lens motor control value to drive the lens motor to a designated position for focusing; otherwise, performing focusing based on the TOF focusing algorithm and the CAF focusing algorithm.
In some embodiments of the first aspect of the present application, the generating manner of the updated focusing curve includes: performing preliminary focusing on a target object under an object distance value based on a TOF focusing algorithm, and obtaining a lens motor control value corresponding to the object distance value after performing stable focusing on the target object based on a CAF focusing algorithm; judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between the object distance value before updating and the lens motor control value.
In some embodiments of the first aspect of the present application, determining, according to the focus self-learning strategy, whether to calibrate the initial focusing curve with the obtained lens motor control value includes: searching a trusted set for a trusted value for the object distance value of the target object, wherein the trusted set comprises a plurality of data sets, each data set consists of an object distance value and its corresponding statistics set, each statistics set comprises a plurality of lens motor control values, and the trusted value is the optimal lens motor control value corresponding to an object distance value; if no trusted value exists, adding the lens motor control value corresponding to the object distance value into the statistics set corresponding to that object distance value in the trusted set; otherwise, ending; judging whether the number of lens motor control values in the current statistics set exceeds a sample-number threshold; if the sample-number threshold is exceeded, calculating a characteristic value of the statistics set based on a data set characteristic algorithm to serve as the trusted value corresponding to the object distance value; otherwise, ending; judging whether the current trusted value lies within a preset interval so as to satisfy the construction condition of a trusted segment; if so, calibrating the initial focusing curve according to the trusted segment; otherwise, caching the current trusted value.
In some embodiments of the first aspect of the present application, the dataset characterization algorithm includes, but is not limited to, any one or more of an average calculation, a median calculation, and a K-MEANS clustering algorithm.
In some embodiments of the first aspect of the present application, a step of guaranteeing fault tolerance based on a data set checking strategy is further performed before calculating the trusted value, the performing process including: judging whether the sample mean deviation of the current statistics set exceeds a preset threshold; if so, clearing the statistics set or raising the sample-number threshold of the statistics set so that the statistics set is skipped.
In some embodiments of the first aspect of the present application, the construction condition of the trusted segment includes:
Q = (u0 - u1) < a × n; where u0 and u1 represent the object distance start point and the object distance end point of the trusted segment; n represents the number of trusted values within this interval; and a represents a scaling factor. If (u0 - u1) < a × n holds, Q = 1, meaning the interval constitutes a trusted segment; if (u0 - u1) < a × n does not hold, Q = 0, meaning the interval does not constitute a trusted segment.
In some embodiments of the first aspect of the present application, the method further comprises: calibrating and updating the initial focusing curve at lower-frequency focusing object distances point by point, using individual scattered trusted points; and/or calibrating and updating the initial focusing curve at higher-frequency focusing object distances segment by segment, using trusted segments formed by paragraph-like aggregation.
To achieve the above and other related objects, a second aspect of the present application provides a self-learning-based focusing system, including: a data acquisition module for acquiring object distance value data of the currently photographed subject; a focusing logic judgment module for searching the updated focusing curve for a corresponding trusted value according to the object distance value data of the currently photographed subject; and a focusing control module for, when a trusted value exists, using the found trusted value as the lens motor control value to drive the lens motor to a designated position for focusing, and otherwise performing focusing based on the TOF focusing algorithm and the CAF focusing algorithm.
In some embodiments of the second aspect of the present application, the focusing system further includes a curve generating module configured to perform the following: performing preliminary focusing on a target object under an object distance value based on a TOF focusing algorithm, and obtaining a lens motor control value corresponding to the object distance value after performing stable focusing on the target object based on a CAF focusing algorithm; judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between the object distance value before updating and the lens motor control value.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a first computer program which, when executed by a processor, implements the self-learning-based focusing method of the first aspect of the present application.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal, including: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the terminal executes the focusing method based on self-learning provided by the first aspect of the application.
As described above, the self-learning-based focusing method, system, terminal and medium have the following beneficial effects: through a self-learning algorithm the invention can learn and calibrate a trusted focusing curve, so that in subsequent focusing the object distance value obtained through TOF alone is enough to focus to an accurate position in one step; repeated CAF fine focusing is no longer required, the picture-blurring stage produced by CAF focusing is eliminated, the computing power CAF focusing would consume is saved, and the focusing time is greatly shortened. Meanwhile, the invention has low implementation cost and good system compatibility, and the parameters involved in the algorithm can be adjusted dynamically, giving the algorithm excellent extensibility and adjustability.
Drawings
Fig. 1 is a schematic flow chart of a focusing method based on self-learning in an embodiment of the application.
Fig. 2 is a schematic flow chart of generating an updated focusing curve according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a method for determining whether to calibrate an initial focus curve according to a lens motor control value according to a focus self-learning strategy according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a data structure of a trusted set in an embodiment of the present application.
FIG. 5 is a schematic diagram of an updated focus curve according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a focusing system based on self-learning according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which describe several embodiments of the present application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," and the like, may be used herein to facilitate a description of one element or feature as illustrated in the figures in relation to another element or feature.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," "held," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
In order to solve the problems described in the background art, the invention provides a self-learning focusing method, system, terminal and medium based on the TOF and CAF focusing algorithms, adopting the technical idea of self-learning: complete, effective focusing results are recorded, and the initial focusing calibration curve is calibrated on the basis of a certain learning algorithm. If, in a subsequent focusing operation, the target object distance is found to have already been re-learned and calibrated, the new focusing value is used directly, focusing straight to the sharpest position, so that the subsequent CAF stage is omitted. With this technical scheme the focusing time can be greatly shortened, and the effect of the image repeatedly cycling between sharp and blurred during the CAF stage can be reduced.
In order to make the objects, technical solutions and advantages of the present invention more apparent, further detailed description of the technical solutions in the embodiments of the present invention will be given by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Before explaining the present invention in further detail, terms and terminology involved in the embodiments of the present invention will be explained, and the terms and terminology involved in the embodiments of the present invention are applicable to the following explanation:
(1) TOF (Time of Flight) ranging refers to time-of-flight technology, which measures distance by the time an object, particle or wave takes to fly through a fixed medium. TOF ranging is a two-way ranging technique that mainly uses the round-trip flight time of a signal between two asynchronous transceivers to measure the distance between nodes;
(2) CAF (Continuous Auto Focus), which is a continuous auto-focus mode that continuously tracks as the subject moves.
The embodiment of the invention provides a focusing method based on self-learning, a system based on the focusing method based on self-learning, a storage medium storing an executable program for realizing the focusing method based on self-learning and a focusing electronic terminal based on self-learning. With respect to implementation of the self-learning based focusing method, an exemplary implementation scenario of self-learning based focusing will be described.
Fig. 1 is a schematic flow chart of a focusing method based on self-learning in an embodiment of the invention. The focusing method based on self-learning in the embodiment is used for generating a trusted focusing curve and mainly comprises the following steps.
Step S1: and acquiring the object distance value data of the current shot object.
It should be understood that the object distance value refers to the distance between the subject and the optical center of the lens, and is generally indicated by the letter U. The conjugate relation exists between the object distance U and the image distance V; the farther the object distance U is, the closer the image distance V is; conversely, the closer the object distance U, the farther the image distance V.
Step S2: and searching whether a corresponding trusted value exists in the updated focusing curve according to the object distance value data of the current shot main body.
In this embodiment, the updated focus curve is relative to an initial focus curve, which is an un-updated map describing the relationship between the object distance value and the lens motor control value. According to the embodiment of the invention, the initial focusing curve is directly used after being calibrated and updated, and after the object distance value of the shot main body is obtained, CAF secondary focusing is not needed, and the accurate lens motor control value can be obtained by searching and updating the focusing curve.
In this embodiment, the generation process of the updated focusing curve is shown in fig. 2, and includes steps S21 and S22.
Step S21: and after the target object is focused stably based on the CAF focusing algorithm, obtaining a lens motor control value corresponding to the object distance value.
The specific process of performing preliminary focusing on the target object based on the TOF focusing algorithm and performing stable focusing on the target object based on the CAF focusing algorithm is as follows.
Step S21A: image data and depth data of a subject are acquired.
It should be understood that the "subject" in this step and the "subject" in step S1 described above each refer to a subject photographed by a lens, but at different processing stages. The 'shot subject' in step S1 is an application stage after the generation of the updated focus curve; the "subject" in step S211 is data for generating a calibration initial focus curve in the generation stage of the updated focus curve.
In this embodiment, the method for acquiring the image data and the depth data of the subject includes: acquiring image data of a subject acquired by an image acquisition unit; and acquiring depth data of the shot subject acquired by the depth acquisition unit, and performing coordinate conversion on the depth data according to a coordinate system in which the image data is positioned, so that the depth data under the world coordinate system is converted into a pixel coordinate system to unify the coordinate system with the image data.
Optionally, the image acquisition unit may be a camera module, where the camera module includes a camera device, a storage device, and a processing device; the camera device includes, but is not limited to: cameras, video cameras, camera modules integrated with an optical system or a CCD chip, camera modules integrated with an optical system and a CMOS chip, HDMI image sources, USB camera image capturing devices, and the like.
Further, image data formats suitable for use in embodiments of the present invention include, but are not limited to, the Bayer picture format, the RGB picture format, the YUV picture format, and the like. The RGB picture format obtains various colors by varying and superimposing the three color channels red (R), green (G) and blue (B); the Bayer picture format is a raw array digital image (file suffix .raw); the YUV picture format is mainly used in the analog video field and separates the brightness information (Y) from the color information (UV), so that three independent video signals do not need to be transmitted simultaneously and very little bandwidth is occupied.
Alternatively, the depth acquisition unit may be a TOF depth acquisition unit, where the acquired depth data mainly refers to distance data from the subject to the TOF transceiver screen. The TOF depth acquisition unit may be composed of ITOF or DTOF, and may be single-point TOF or dot-matrix TOF, which is not limited by the embodiment of the present invention.
DTOF means Direct TOF, i.e. measuring the time of flight directly: the time interval between transmitted and received pulses is measured, N optical signals are transmitted and received within a single frame of measurement time, histogram statistics are made over the N recorded flight times, and the flight time with the highest frequency of occurrence is used to calculate the target distance. ITOF means Indirect TOF, i.e. indirectly measuring the time of flight, typically by measuring a phase shift, such as the phase difference between the transmitted sine/square wave and the received sine/square wave.
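As a small illustration of the DTOF histogram idea described above, the sketch below bins a set of recorded flight times and converts the most frequent bin into a distance; the bin width and the sample values are hypothetical and only serve to show the calculation.

```python
from collections import Counter

C = 299_792_458.0   # speed of light, m/s

def dtof_distance(times_of_flight_s, bin_s=1e-10):
    """DTOF sketch: bin the N recorded flight times into a histogram and use the
    most frequent bin to compute the target distance (round trip, hence the /2)."""
    bins = Counter(round(t / bin_s) for t in times_of_flight_s)
    t_mode = bins.most_common(1)[0][0] * bin_s
    return C * t_mode / 2.0

# illustrative pulses around 6.7 ns, i.e. a target about 1 m away
print(dtof_distance([6.7e-9, 6.7e-9, 6.6e-9, 6.7e-9]))
```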
In this embodiment, the coordinate conversion is performed on the depth data according to a coordinate system in which the image data is located, and the conversion process includes: the depth data is converted from the world coordinate system to the corresponding camera coordinate system, from the camera coordinate system to the corresponding image coordinate system, and from the image coordinate system to the corresponding pixel coordinate system.
Specifically, the conversion from the world coordinate system to the camera coordinate system requires a rotation matrix R and a translation matrix t. The specific conversion is as follows:

[Xc, Yc, Zc]^T = R [Xw, Yw, Zw]^T + t (Equation 1)

where (Xc, Yc, Zc) represents the camera coordinate system; (Xw, Yw, Zw) represents the world coordinate system; R represents an orthogonal unit rotation matrix; and t represents a three-dimensional translation vector.
The conversion from the camera coordinate system to the image coordinate system is a central projection, i.e. the perspective relation from the camera coordinate system to the image coordinate system is computed with similar triangles, in the following way:

Zc [x, y, 1]^T = diag(f, f, 1) [Xc, Yc, Zc]^T, i.e. x = f·Xc/Zc and y = f·Yc/Zc (Equation 2)

where (x, y) represents a point in the image coordinate system; diag(f, f, 1) represents the camera focal length matrix with focal length f; and (Xc, Yc, Zc) represents a point in the camera coordinate system.
The conversion from the image coordinate system to the pixel coordinate system is a discretization process. In fact, the pixel coordinate system and the image coordinate system both lie on the imaging plane, but their origins and measurement units differ: the origin of the image coordinate system is usually the midpoint of the imaging plane and its unit is mm, a physical unit, whereas the unit of the pixel coordinate system is the pixel, a pixel point typically being described by which row and which column it occupies. The conversion from the image coordinate system to the pixel coordinate system is therefore:

u = x/dx + u0 (Equation 3)

v = y/dy + v0 (Equation 4)

[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] [x, y, 1]^T (Equation 5)

where (u, v) refers to a point in the pixel coordinate system; (x, y) refers to a point in the image coordinate system; (u0, v0) refers to the pixel coordinates of the image coordinate system origin; and dx and dy represent how many mm each column and each row represent, respectively, i.e. 1 pixel = dx (or dy) mm.
Combining the above Equations (1) to (5), the complete formula for the one-step conversion from the TOF world coordinate system to the pixel coordinate system is obtained as follows:

Zc [u, v, 1]^T = K [R | t] [Xw, Yw, Zw, 1]^T = M [Xw, Yw, Zw, 1]^T (Equation 6)

where fx = f/dx, fy = f/dy, u0 and v0 represent the camera internal parameters; K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] represents the camera internal parameter matrix; [R | t] represents the external parameter matrix of the camera; R and t represent the external parameters of the camera; and M = K [R | t] represents the projection matrix.
It should be understood that the purpose of performing the one-step conversion from the world coordinate system of the TOF to the pixel coordinate system is to map the world coordinate system in which the TOF is located with the pixel coordinate system of the image plane in a unified manner, and after the coordinates are unified, the subsequent processing procedure can be performed in the pixel coordinate system in a unified manner, so as to facilitate the calculation processing.
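By way of illustration only, a minimal numerical sketch of the one-step conversion of Equation (6) is given below; the rotation, translation and intrinsic values as well as the sample point are hypothetical stand-ins for a calibrated camera and are not taken from the embodiment.

```python
import numpy as np

def world_to_pixel(p_world, R, t, K):
    """Project a 3D point from the TOF world coordinate system to the pixel
    coordinate system per Equation (6): Zc*[u, v, 1]^T = K [R|t] [Xw, Yw, Zw, 1]^T."""
    p_cam = R @ p_world + t          # Equation (1): world -> camera
    uvw = K @ p_cam                  # Equations (2)-(5) folded into the intrinsic matrix K
    return uvw[:2] / uvw[2]          # divide by the depth Zc to obtain (u, v)

# Hypothetical calibration: identity rotation, small translation, intrinsics fx, fy, u0, v0.
R = np.eye(3)
t = np.array([0.01, 0.0, 0.05])                  # metres
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])            # fx, fy in pixels; (u0, v0) principal point

p_world = np.array([0.2, -0.1, 1.0])             # a TOF-measured point about 1 m away
print(world_to_pixel(p_world, R, t, K))          # pixel coordinates (u, v)
```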
Step S21B: and determining focusing trigger time and a focusing target object according to the image data and the depth data of the shot main body, so as to perform preliminary focusing control on the focusing target object based on a TOF focusing algorithm after triggering focusing, and driving a focusing lens to move to a preset position.
In this embodiment, a focusing target object is selected from the subject based on an image focusing algorithm, wherein the image focusing algorithm completes a complete focusing process from triggering to focusing, and determines the triggering timing of the focusing algorithm and the selection of the focusing target object. It should be noted that, the image focusing algorithm is not limited in this embodiment of the present invention, and existing algorithms (for example, a focusing algorithm based on a portrait mode, a focusing algorithm based on an image center area, etc.) capable of implementing focusing triggering may be all applied to the technical scheme of the present invention.
In this embodiment, the performing focus control on the focus target object based on the TOF focusing algorithm to drive the focus lens to move to a preset position includes: acquiring the object distance of a focusing target object according to the depth data; obtaining a corresponding image distance value according to the object distance value and the lens focal length value based on an imaging formula; and obtaining the association relation between the object distance value and the lens motor control value according to the association relation between the image distance value and the lens motor control value so as to correspondingly drive the focusing lens to move to a preset position according to the object distance value.
It should be understood that the association relationship between the image distance value and the lens motor control value means: when the control value of the lens motor (for example, the driving current value of the lens motor) is changed, the lens motor rotates correspondingly, the rotation of the lens motor drives the photographing lens of the image collecting device (such as a camera) to move, and the image distance value is correspondingly changed because the image distance refers to the distance between the photographing lens and the sensor. Therefore, the corresponding relation exists between the lens motor control value and the image distance value, and the image distance value is correspondingly changed when the lens motor control value is changed.
The imaging formula here is:

1/u + 1/v = 1/f (Equation 7)

where u represents the object distance, v represents the image distance, and f represents the focal length. From the currently known lens focal length and the object distance of the focusing target object obtained after TOF depth data processing, the corresponding image distance can be obtained.
Optionally, the object distance-image distance relation curve (i.e. the image distance that gives the highest imaging sharpness, measured at different object distances for the given focal length f) can be calibrated in advance and then stored as an array record or fitted into a mapping curve between the object distance u and the image distance v; during focusing, the mapping curve is looked up directly with the current object distance value to obtain the corresponding image distance value.
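A small sketch of the two options just described is given below, assuming a hypothetical 4 mm lens and a hypothetical pre-calibrated object-distance/image-distance table; it first solves Equation (7) for the image distance and then shows the table-lookup (interpolation) alternative.

```python
def image_distance_from_formula(u_m, f_m):
    """Equation (7): 1/u + 1/v = 1/f, solved for the image distance v."""
    return 1.0 / (1.0 / f_m - 1.0 / u_m)

def image_distance_from_table(u_m, table):
    """Linear interpolation in a pre-calibrated (object distance -> image distance) curve."""
    pts = sorted(table.items())
    for (ua, va), (ub, vb) in zip(pts, pts[1:]):
        if ua <= u_m <= ub:
            w = (u_m - ua) / (ub - ua)
            return va + w * (vb - va)
    raise ValueError("object distance outside the calibrated range")

f = 0.004                                   # hypothetical 4 mm focal length, in metres
print(image_distance_from_formula(1.0, f))  # image distance for a 1 m object distance

# hypothetical calibrated curve (object distance in m -> image distance in m)
calib = {0.5: 0.004032, 1.0: 0.004016, 2.0: 0.004008}
print(image_distance_from_table(1.5, calib))
```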
Step S21C: and performing stable focusing control on the focusing target object based on a CAF focusing algorithm to acquire a lens motor control value corresponding to the current object distance value.
Since TOF focusing may have some error, fine focusing with the CAF algorithm is required to achieve stable image sharpness. Preferably, this embodiment adopts a CAF contrast focusing algorithm, whose specific process is as follows: starting from the lens position given by the TOF algorithm, within a range of 10 motor steps forward and backward, the lens is moved from back to front with a large step size (for example, 3); the image characteristic value f is calculated after each movement; a preferred-point (optimization) method is used to find the motor position that maximizes f during the movement; and the motor is moved to that position as the best focusing position.
Because the motor moves in steps, the maximum of the image characteristic value f may be stepped over: when the step is large, one step can span the maximum, and if f is observed to increase and then fall back during stepping, it can be judged that the maximum has been crossed. At this point the motor can be stepped back with a small step size (for example, 1), calculating f after each movement, until the best focus position is reached. Reaching the best focus position indicates that the current focus state has stabilized. For example: the image characteristic value at step 1 is 20, at step 4 it is 45, and at step 7 it falls back to 35, so steps 4 to 7 span the maximum; the step size can then be reduced from 3 to 1 and the motor stepped back from step 7 until it reaches the best focus position (i.e. the maximum image characteristic value f).
The above-mentioned preferred method is to arrange reasonable test points according to different problems in production and scientific research so as to reduce test times and find the optimal point rapidly. The preferred methods include, but are not limited to, golden section methods, step-up methods (also known as hill climbing methods), batch experimental methods, fractional methods, contrast methods, parabolic methods, and the like.
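The coarse-to-fine search described above can be sketched as follows; sharpness(pos) stands for the image characteristic value f measured after the motor is moved to pos, the ±10-step window and the step sizes 3 and 1 follow the example in the text, and all function and parameter names are illustrative assumptions rather than the embodiment's actual interface.

```python
def caf_fine_focus(tof_pos, sharpness, window=10, coarse_step=3, fine_step=1):
    """Search the best focus motor position around the TOF-given position.

    Coarse pass: step from back to front through [tof_pos - window, tof_pos + window]
    with the large step, measuring f at each stop; when f rises and then falls back,
    the maximum has been crossed. Fine pass: back up with the small step to locate it.
    """
    best_pos, best_f = None, float("-inf")
    prev_f = float("-inf")
    pos = tof_pos - window
    while pos <= tof_pos + window:                 # coarse pass
        f = sharpness(pos)
        if f > best_f:
            best_pos, best_f = pos, f
        if f < prev_f:                             # f fell back: maximum was crossed
            break
        prev_f = f
        pos += coarse_step

    for p in range(pos - fine_step, best_pos - coarse_step, -fine_step):  # fine pass
        f = sharpness(p)
        if f > best_f:
            best_pos, best_f = p, f
    return best_pos                                # best focus position

# toy sharpness curve peaking at motor position 105 (purely illustrative)
print(caf_fine_focus(100, lambda p: -abs(p - 105)))
```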
Step S22: judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between the object distance value before updating and the lens motor control value.
In this embodiment, the process of determining whether to calibrate the initial focus curve according to the obtained lens motor control value according to the focus self-learning strategy includes the steps as shown in fig. 3:
step S22A: searching whether the object distance value of the target object has a trusted value in the trusted set.
In this embodiment, the trusted set is composed of a plurality of data sets, each data set is composed of an object distance value and its corresponding statistics set, and each statistics set includes a plurality of lens motor control values. The trusted value of an object distance value refers to the optimal lens motor control value corresponding to that object distance value.
For ease of understanding, the trusted-set structure diagram shown in fig. 4 is taken as an example: the trusted set S stores object distance values U1, U2, …, Un, the statistics set T1 corresponding to object distance value U1, the statistics set T2 corresponding to object distance value U2, …, and the statistics set Tn corresponding to object distance value Un. Statistics set T1 contains x lens motor control values (t11, t12, …, t1x), statistics set T2 contains y lens motor control values (t21, t22, …, t2y), and Tn contains z lens motor control values (tn1, tn2, …, tnz).
It should be noted that the trusted set S is initially empty, and the data in the trusted set S is generated shot by shot. Taking a 1 meter object distance as an example, a lens motor control value of 0.60 is obtained after the first TOF preliminary focusing and CAF stable focusing, a lens motor control value of 0.59 after the second, and a lens motor control value of 0.62 after the third. These lens motor control values, all corresponding to the same object distance value, constitute the statistics set of the 1 meter object distance.
It should be appreciated that, corresponding to the structure of the trusted set S illustrated in fig. 4, the trusted set may be stored in computer memory as an array, for example S = {[U1, t11, t12, …, t1x], [U2, t21, t22, …, t2y], …, [Un, tn1, tn2, …, tnz]}. The above example is provided for illustrative purposes and should not be construed as limiting; in practical applications, any data format other than an array that can be stored in a computer device may also be applied to the technical solution of the embodiment of the invention.
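A minimal in-memory sketch of the trusted set S and its statistics sets follows, using a Python dictionary purely for illustration (any storable format works, as noted above); the function name is an assumption.

```python
from collections import defaultdict

# trusted_set: object distance value -> statistics set (list of learned lens motor control values)
trusted_set = defaultdict(list)
# trusted_values: object distance value -> trusted value (best lens motor control value), once learned
trusted_values = {}

def record_focus_result(object_distance, motor_value):
    """Add one TOF+CAF focusing result to the statistics set of its object distance,
    unless a trusted value already exists for that object distance (steps S22A-S22C)."""
    if object_distance in trusted_values:
        return                                   # already learned; nothing to record
    trusted_set[object_distance].append(motor_value)

# e.g. three shots at a 1 m object distance (values from the example in the text)
for v in (0.60, 0.59, 0.62):
    record_focus_result(1.0, v)
print(dict(trusted_set))   # {1.0: [0.6, 0.59, 0.62]}
```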
Step S22B: if not, adding the lens motor control value corresponding to the object distance value into the statistical set corresponding to the object distance value in the trusted set.
In this embodiment, for an object distance value for which there is no trusted value, the current collected lens motor control value needs to be added to the statistics set corresponding to the object distance value to update the statistics set.
Step S22C: if so, ending.
In this embodiment, for an object distance value with a trusted value, calculation is not required, and in a subsequent focusing process, the trusted value corresponding to the object distance value can be directly used for focusing control.
Step S22D: whether the number of the lens motor control values in the current statistical set exceeds a sample number threshold is judged.
In this embodiment, the sample number threshold is set to determine whether to perform the trusted value calculation on a statistics set, which has the advantage that if the sample number threshold is not set, the trusted value calculation needs to be performed once every new lens motor control value is added in the statistics set, which generates many redundant calculations in practical application.
It should be noted that, in practical application, a sample number threshold with a moderate size should be selected, setting an excessive sample number threshold can improve the accuracy of the finally calculated trusted value Y, but correspondingly reduce the learning speed of the algorithm and prolong the time for learning the trusted value. Conversely, setting a smaller sample number threshold may increase the algorithm learning speed, but may decrease the accuracy of the confidence value.
Step S22E: and if the sample number threshold value is exceeded, calculating the characteristic value of the statistical set based on a data set characteristic algorithm to serve as a trusted value corresponding to the object distance value.
The data set feature algorithm is used for performing feature calculation on the data set, extracting feature values of the data set, wherein the feature values can be used for representing the whole data set. The dataset characterization algorithm in this embodiment includes, but is not limited to, any one or a combination of a number of mean calculation, median calculation, K-MEANS clustering algorithms.
Further, the step of guaranteeing fault tolerance based on the data set checking strategy is also executed before the trusted value is calculated, and the executing process includes: judging whether the sample mean deviation of the current statistics set exceeds a preset threshold; if so, clearing the statistics set or raising the sample-number threshold of the statistics set so that the statistics set is skipped.
Some statistics sets are skipped because, although they contain a certain number of lens motor control values, the deviation between those values is too large; even if a trusted value were calculated by mean calculation, median calculation, the K-MEANS clustering algorithm or the like, it would be a distorted value and could not be used to control the lens motor precisely. Therefore, the embodiment of the invention also executes the step of guaranteeing fault tolerance based on the data set checking strategy before calculating the trusted value, so as to ensure that a trusted value with a better control effect is obtained.
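A sketch of steps S22D/S22E together with the checking strategy just described follows; the sample-number threshold, the deviation threshold and the use of the mean as the characteristic value are illustrative choices rather than values fixed by the method.

```python
import statistics

def try_compute_trusted_value(samples, sample_threshold=5, max_mean_dev=0.05):
    """Return a trusted value for one statistics set, or None if it cannot be computed yet.

    - fewer samples than the threshold: keep collecting (steps S22D/S22F);
    - mean absolute deviation too large: distorted set, clear it (checking strategy);
    - otherwise: use the mean as the data set characteristic value (step S22E).
    """
    if len(samples) < sample_threshold:
        return None                              # not enough samples yet
    mean = statistics.fmean(samples)
    mean_dev = statistics.fmean(abs(s - mean) for s in samples)
    if mean_dev > max_mean_dev:
        samples.clear()                          # or raise the sample threshold instead
        return None
    return mean                                  # median or a K-MEANS centre could be used instead

print(try_compute_trusted_value([0.60, 0.59, 0.62, 0.61, 0.60]))   # about 0.604
```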
Step S22F: and if the sample number threshold is not exceeded, ending.
Step S22G: judging whether the current trusted value is in a preset interval or not to meet the construction condition of the trusted section.
The credible value corresponding to a certain object distance is expressed as a single credible point in the object distance-motor control value mapping relation chart, and a plurality of credible points can form a credible section.
In this embodiment, the conditions for forming the trusted segment include:
Q = (u0 - u1) < a × n; (Equation 8)
where u0 and u1 represent the object distance start point and the object distance end point of the trusted segment; n represents the number of trusted values within this interval; and a represents a scaling factor. If (u0 - u1) < a × n holds, Q = 1, meaning the interval constitutes a trusted segment; if (u0 - u1) < a × n does not hold, Q = 0, meaning the interval does not constitute a trusted segment.
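In code, the construction condition of Equation (8) can be checked as follows; the scaling factor a = 0.05 and the example values are illustrative assumptions.

```python
def is_trusted_segment(u0, u1, n, a=0.05):
    """Construction condition of Equation (8): the interval forms a trusted segment (Q = 1)
    when its object-distance span is small relative to the number n of trusted values it
    contains, i.e. (u0 - u1) < a * n with scaling factor a."""
    return (u0 - u1) < a * n

# e.g. 11 trusted values between object distances 1.0 m and 1.2 m (see the example below)
print(is_trusted_segment(1.2, 1.0, 11))   # 0.2 < 0.55, so the segment holds (Q = 1)
```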
Step S22H: and if so, calibrating the initial focusing curve according to the trusted segment.
In this embodiment, the initial focusing curve is calibrated and updated at lower-frequency focusing object distances point by point, using individual scattered trusted points; and/or it is calibrated and updated at higher-frequency focusing object distances segment by segment, using trusted segments formed by paragraph-like aggregation. It should be understood that lower-frequency and higher-frequency focusing object distances are relative concepts; the dividing line between them may be defined as needed in practical applications and is not limited by the embodiments of the present invention.
A further explanation: after learning, the trusted points may be scattered, or may appear as paragraph-like clusters at high-frequency focusing object distances. The focusing curve can be updated point by point, and when the number of trusted points within a certain trusted segment reaches the threshold, it can also be updated per trusted segment; subsequent focusing then directly uses the focus values of that trusted segment. As the learning time increases and more and more trusted segments are added, the trusted segments eventually combine into a complete updated focusing curve.
It will be appreciated that one trusted value constitutes a single trusted point, and that multiple trusted points may be fitted to form a trusted segment. The trusted points are discrete and the trusted segments are continuous. Although the initial focusing curve can be calibrated and updated according to the trusted points, the discrete points are difficult to exhaust, so that the calibration efficiency of the focusing curve is hindered, and the practicability of the focusing curve is seriously affected.
For example, eleven fitted points between 1.0 m and 1.2 m are stored in the trusted set S: object distance values 1.00, 1.02, 1.04, …, 1.20 m with corresponding trusted values a1 to a11. Updating the curve with discrete points has the advantage of introducing no error, but points that have not been updated gain no calibration, for example the point at an object distance of 1.11 m in this example cannot be calibrated, so the algorithm learning time becomes very long. Curve fitting solves this problem well: the point at an object distance of 1.11 m falls inside the fitted section from 1.0 m to 1.2 m and can therefore be calibrated. Specific fitting means include, but are not limited to, fitting a polynomial, fitting a four-parameter equation, and the like.
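As a sketch of this fitting idea, the eleven trusted points of the example can be fitted with a low-order polynomial, after which any object distance inside the segment (for example 1.11 m) can be evaluated; the motor control values below are hypothetical stand-ins for a1 to a11, and the polynomial degree is an illustrative choice.

```python
import numpy as np

# hypothetical trusted points: object distance (m) -> trusted lens motor control value
u = np.linspace(1.00, 1.20, 11)                      # 1.00, 1.02, ..., 1.20 m
y = np.array([0.600, 0.598, 0.596, 0.594, 0.592,
              0.590, 0.589, 0.587, 0.586, 0.584, 0.583])

coeffs = np.polyfit(u, y, deg=2)                     # fit the trusted segment as a polynomial
segment = np.poly1d(coeffs)

print(segment(1.11))   # motor control value for 1.11 m, which has no discrete trusted point
```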
It should be noted that the step from trusted point to trusted segment appears to be merely an extension from points to a curve, but behind it lies the technical problem of how to make the focusing curve more practical and how to improve calibration and update efficiency. The calibration-update mode of the trusted segment is essentially different from that of the trusted point: before a trusted segment is calibrated, it must be judged whether the construction condition of the trusted segment is met, and the next trusted-segment calibration can be performed only on the premise that a trusted segment has been formed, which is why the embodiment of the invention performs the trusted-segment judgment before the calibration update.
Further, the embodiment of the invention may adopt an array recording mode and update the focusing curve by updating the values of the array; the focusing curve may also be updated by adopting a function-curve recording mode and updating the original data map. The motor control value corresponding to the current object distance value U in the updated curve is equal to, or very close to, the trusted value Y. It should be understood that for an object distance value calibrated from a trusted point, the corresponding lens motor control value is equal to the trusted value; for an object distance value calibrated by fitting a plurality of trusted points into a trusted segment, the corresponding lens motor control value is not necessarily equal to the trusted value but is very close to it. Taking the fitted section from 1.0 m to 1.2 m as an example, on the fitted trusted segment the lens motor control value corresponding to the 1.0 m object distance value equals its trusted value, whereas the lens motor control value corresponding to the 1.11 m object distance value is obtained by fitting and carries a certain error, yet is very close to the trusted value that the discrete-point approach would give.
For ease of understanding, the updated focus curve will be described with reference to FIG. 5: the focusing curve is a mapping relation diagram between the object distance U and the lens motor control value V, in the diagram, a hollow circle represents a calibrated trusted point, a solid circle represents a real point before calibration, and the trusted value between the object distance starting point U0 and the object distance end point U1 is fitted to form a trusted segment D.
Step S22I: if not, the current trusted value is cached and ended.
The purpose of caching the current trusted value is to accumulate trusted values; once the number of accumulated trusted values satisfies Equation 8, a trusted segment can be formed and trusted-segment fitting can be carried out.
Step S3: if the credible value exists, the searched credible value is used as a lens motor control value to drive the motor to move to a designated position for focusing.
For an object distance value with a trusted value, the direct focusing can be realized by directly searching a lens motor control value from an updated focusing curve without performing preliminary focusing control by a TOF focusing algorithm and performing stable focusing control by a CAF focusing algorithm, so that the influence of a picture blurring stage generated by CAF focusing is eliminated, the system calculation force required by CAF focusing is also eliminated, and the focusing time is greatly shortened.
Step S4: and if the trusted value does not exist, focusing is performed based on the TOF focusing algorithm and the CAF focusing algorithm.
If the trusted value does not exist, focusing still needs to be realized in a mode of preliminary focusing control by a TOF focusing algorithm and stable focusing control by a CAF focusing algorithm.
Taking fig. 5 as an example, if the object distance value of the currently photographed subject falls within the range u0 to u1, the lens motor control value can be read directly from the updated focusing curve; if it falls outside the range u0 to u1 and no corresponding trusted point exists for that object distance value, focusing is still realized by preliminary focusing control with the TOF focusing algorithm followed by stable focusing control with the CAF focusing algorithm.
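Putting steps S1 to S4 together, the runtime focusing decision can be sketched as follows; lookup_updated_curve, tof_caf_focus, drive_motor and learn are assumed callables standing for the updated-curve lookup, the conventional TOF+CAF focusing path, the lens motor driver and the self-learning strategy, used here only to show the control flow.

```python
def focus_once(object_distance, lookup_updated_curve, tof_caf_focus, drive_motor, learn):
    """One focusing operation of the self-learning method (steps S1-S4)."""
    trusted = lookup_updated_curve(object_distance)      # step S2: search the updated curve
    if trusted is not None:
        drive_motor(trusted)                             # step S3: one-step focus, no CAF needed
        return trusted
    motor_value = tof_caf_focus(object_distance)         # step S4: TOF coarse + CAF fine focus
    learn(object_distance, motor_value)                  # feed the result back to the learning strategy
    return motor_value

# illustrative stubs
curve = {1.0: 0.604}
result = focus_once(
    1.0,
    lookup_updated_curve=curve.get,
    tof_caf_focus=lambda u: 0.60,
    drive_motor=lambda v: print("drive motor to", v),
    learn=lambda u, v: None,
)
print(result)   # 0.604, found directly in the updated curve
```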
As shown in fig. 6, a schematic structural diagram of a focusing system based on self-learning in an embodiment of the present invention is shown. The focusing system 600 includes a data acquisition module 601, a focusing logic determination module 602, and a focusing control module 603. The data acquisition module 601 is configured to acquire object distance value data of a current subject; the focusing logic judging module 602 is configured to search whether a corresponding trusted value exists in an updated focusing curve according to the object distance value data of the current subject; the focusing control module 603 is configured to, when it is determined that there is a trusted value, use the found trusted value as a lens motor control value to drive the motor to move to a specified position for focusing; otherwise, focusing is performed based on the TOF focusing algorithm and the CAF focusing algorithm.
In this embodiment, the focusing system 600 further includes a curve generating module 601 for executing the following steps: performing preliminary focusing on a target object under an object distance value based on a TOF focusing algorithm, and obtaining a lens motor control value corresponding to the object distance value after performing stable focusing on the target object based on a CAF focusing algorithm; judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between the object distance value before updating and the lens motor control value.
It should be noted that: in the self-learning-based focusing system provided in the above embodiment, only the division of the program modules is used for illustration, and in practical application, the processing allocation may be performed by different program modules according to needs, i.e. the internal structure of the system is divided into different program modules to complete all or part of the processing described above. In addition, the focusing system based on self-learning provided in the above embodiment and the focusing method based on self-learning are the same conception, and detailed implementation process of the focusing system based on self-learning is shown in the method embodiment, and will not be repeated here.
Referring to fig. 7, which shows an optional hardware structure diagram of a self-learning-based focusing electronic terminal 700 according to an embodiment of the present invention, the terminal 700 may be a live-streaming device, a mobile phone, a computer device, a tablet device, a personal digital processing device, a factory background processing device, or the like that integrates a photographing/camera function. The self-learning based focusing terminal 700 includes: at least one processor 701, a memory 702, at least one network interface 704, and a user interface 706. The various components in the device are coupled together by a bus system 705. It is to be appreciated that the bus system 705 is used to enable connection communication between these components. In addition to the data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus; but for clarity of illustration the various buses are labeled as the bus system 705 in fig. 7.
The user interface 706 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch pad, or a touch screen.
It is to be appreciated that the memory 702 can be either volatile or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be, among others, a read-only memory (ROM) or a programmable read-only memory (PROM). The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM) and synchronous static random access memory (SSRAM). The memory described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 702 in the embodiment of the present invention is used to store various kinds of data to support the operation of the self-learning based focus terminal 700. Examples of such data include: any executable program for operating on the self-learning based focus terminal 700, such as the operating system 7021 and the application programs 7022; the operating system 7021 contains various system programs, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks. The application programs 7022 may include various application programs such as a media player (MediaPlayer), a Browser (Browser), and the like for implementing various application services. The focusing method based on self-learning provided by the embodiment of the invention can be contained in the application 7022.
The method disclosed in the above embodiment of the present invention may be applied to the processor 701 or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 701 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor. The steps of the method provided by the embodiment of the invention may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium; the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
In an exemplary embodiment, the self-learning based focusing terminal 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), or Complex Programmable Logic Devices (CPLDs) for performing the aforementioned methods.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method embodiments described above may be performed by hardware related to a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. The program, when executed, performs the steps of the method embodiments described above; and the aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
In the embodiments provided herein, the computer-readable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the present application provides a focusing method, system, terminal, and medium based on self-learning. A reliable, updated focusing curve is obtained through learning and calibration by a self-learning algorithm, so that in the subsequent focusing process the lens can be driven to the accurate position in one step using only the object distance value obtained by TOF, without repeated CAF fine focusing. This eliminates the picture-blurring stage produced by CAF focusing, saves the system computing power required by CAF focusing, and greatly shortens the focusing time. Meanwhile, the invention has low implementation cost and good system compatibility, and the parameters involved in the algorithm can be dynamically adjusted, giving the algorithm excellent extensibility and adjustability. Therefore, the present application effectively overcomes various defects in the prior art and has high industrial utilization value.
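As a non-limiting illustration only, and not as the claimed implementation, one possible runtime flow corresponding to the above summary is sketched below in Python. The class and function names (UpdatedFocusCurve, focus_once, measure_distance_tof, initial_curve, drive_lens_motor, caf_fine_focus) and the use of integer object distance keys are hypothetical assumptions introduced for this sketch:

from typing import Callable, Dict, Optional


class UpdatedFocusCurve:
    """Maps object distance values to trusted lens motor control values."""

    def __init__(self) -> None:
        self._trusted: Dict[int, int] = {}  # object distance -> trusted motor control value

    def lookup(self, object_distance: int) -> Optional[int]:
        return self._trusted.get(object_distance)

    def calibrate(self, object_distance: int, motor_value: int) -> None:
        self._trusted[object_distance] = motor_value


def focus_once(
    curve: UpdatedFocusCurve,
    measure_distance_tof: Callable[[], int],
    initial_curve: Callable[[int], int],
    drive_lens_motor: Callable[[int], None],
    caf_fine_focus: Callable[[int], int],
) -> int:
    """One focusing attempt: use the trusted value when it exists,
    otherwise fall back to TOF coarse focusing plus CAF fine focusing."""
    distance = measure_distance_tof()    # object distance of the current subject
    trusted = curve.lookup(distance)
    if trusted is not None:
        drive_lens_motor(trusted)        # one-step focus, no CAF sweep needed
        return trusted
    coarse = initial_curve(distance)     # TOF coarse position from the initial curve
    drive_lens_motor(coarse)
    final = caf_fine_focus(coarse)       # CAF refines around the coarse position
    drive_lens_motor(final)
    return final                         # may later be fed to the self-learning statistics

In this sketch the updated focusing curve is a simple dictionary keyed by object distance; an actual implementation would typically quantize or interpolate object distance values before the lookup.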
The foregoing embodiments merely illustrate the principles and effects of the present application and are not intended to limit the application. Modifications and variations may be made to the above-described embodiments by those of ordinary skill in the art without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical concept disclosed herein shall be covered by the claims of this application.

Claims (7)

1. A self-learning based focusing method, comprising:
acquiring object distance value data of a current shot main body;
searching whether a corresponding trusted value exists in an updated focusing curve according to the object distance value data of the current shot main body;
if the trusted value exists, the searched trusted value is used as a lens motor control value to drive the lens motor to move to a designated position for focusing; otherwise, focusing is carried out based on a TOF focusing algorithm and a CAF focusing algorithm;
the generation mode of the updated focusing curve comprises the following steps: performing preliminary focusing on a target object under an object distance value based on a TOF focusing algorithm, and obtaining a lens motor control value corresponding to the object distance value after performing stable focusing on the target object based on a CAF focusing algorithm; judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between an object distance value before updating and a lens motor control value;
the process of judging whether to calibrate the initial focusing curve according to the obtained lens motor control value according to the focusing self-learning strategy comprises the following steps:
searching whether the object distance value of the target object has a trusted value in a trusted set; wherein the trusted set comprises a plurality of data sets, each data set consists of an object distance value and a corresponding statistical set, each said statistical set comprises a plurality of lens motor control values, and the trusted value is an optimal lens motor control value corresponding to the object distance value;
if not, adding the lens motor control value corresponding to the object distance value into a statistical set corresponding to the object distance value in the trusted set; otherwise, ending;
judging whether the number of the lens motor control values in the current statistical set exceeds a sample number threshold value;
if the sample number threshold value is exceeded, calculating a characteristic value of the statistical set based on a data set characteristic algorithm to serve as a trusted value corresponding to the object distance value; otherwise, ending;
judging whether the preset interval in which the current trusted value is located satisfies the formation condition of a trusted segment; wherein the formation condition of the trusted segment is: Q = (u0 - u1) < a × n, where u0 and u1 represent the object distance start point and the object distance end point of the trusted segment, n represents the number of trusted values within the interval, and a represents a scaling factor; if (u0 - u1) < a × n holds, Q = 1, representing that the interval forms a trusted segment; if (u0 - u1) < a × n does not hold, Q = 0, representing that the interval does not form a trusted segment;
if yes, calibrating the initial focusing curve according to the trusted segment;
if not, caching the current trusted value and ending.
2. The self-learning based focusing method of claim 1, wherein the data set characteristic algorithm comprises any one or a combination of a mean value calculation method, a median calculation method, and a K-MEANS clustering algorithm.
3. The self-learning based focusing method according to claim 1, wherein a step of guaranteeing fault tolerance based on a data set checking policy is further performed before calculating the trusted value, the step comprising: judging whether the average sample deviation of the current statistical set exceeds a preset threshold value; if yes, clearing the statistical set or raising the sample number threshold value of the statistical set so that the statistical set is skipped.
4. The self-learning based focusing method of claim 1, further comprising: calibrating and updating the initial focusing curve for object distances focused at a lower frequency in a scattered, point-by-point manner by updating individual trusted values; and/or calibrating and updating the initial focusing curve for object distances focused at a higher frequency in a segment-aggregated manner by updating each trusted segment.
5. A self-learning based focusing system, comprising:
the data acquisition module is used for acquiring the object distance value data of the current shot main body;
the focusing logic judging module is used for searching whether a corresponding trusted value exists in the updated focusing curve according to the object distance value data of the current shot main body;
the focusing control module is used for, if the trusted value exists, taking the searched trusted value as a lens motor control value to drive the lens motor to move to a designated position for focusing; otherwise, focusing based on a TOF focusing algorithm and a CAF focusing algorithm;
the generation mode of the updated focusing curve comprises the following steps: performing preliminary focusing on a target object under an object distance value based on a TOF focusing algorithm, and obtaining a lens motor control value corresponding to the object distance value after performing stable focusing on the target object based on a CAF focusing algorithm; judging whether to calibrate an initial focusing curve according to the obtained lens motor control value according to a focusing self-learning strategy so as to generate the updated focusing curve; the initial focusing curve is a mapping relation between an object distance value before updating and a lens motor control value;
The process of judging whether to calibrate the initial focusing curve according to the obtained lens motor control value according to the focusing self-learning strategy comprises the following steps:
searching whether the object distance value of the target object has a trusted value in a trusted set; wherein the trusted set comprises a plurality of data sets, each data set consists of an object distance value and a corresponding statistical set, each said statistical set comprises a plurality of lens motor control values, and the trusted value is an optimal lens motor control value corresponding to the object distance value;
if not, adding the lens motor control value corresponding to the object distance value into a statistical set corresponding to the object distance value in the trusted set; otherwise, ending;
judging whether the number of the lens motor control values in the current statistical set exceeds a sample number threshold value;
if the sample number threshold value is exceeded, calculating a characteristic value of the statistical set based on a data set characteristic algorithm to serve as a trusted value corresponding to the object distance value; otherwise, ending;
judging whether the preset interval in which the current trusted value is located satisfies the formation condition of a trusted segment; wherein the formation condition of the trusted segment is: Q = (u0 - u1) < a × n, where u0 and u1 represent the object distance start point and the object distance end point of the trusted segment, n represents the number of trusted values within the interval, and a represents a scaling factor; if (u0 - u1) < a × n holds, Q = 1, representing that the interval forms a trusted segment; if (u0 - u1) < a × n does not hold, Q = 0, representing that the interval does not form a trusted segment;
if yes, calibrating the initial focusing curve according to the trusted segment;
if not, caching the current trusted value and ending.
6. A computer-readable storage medium, on which a first computer program is stored, characterized in that the first computer program, when executed by a processor, implements the self-learning based focusing method of any one of claims 1 to 4.
7. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, to cause the terminal to execute the self-learning-based focusing method according to any one of claims 1 to 4.
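For illustration only, and without limiting the scope of claims 1 to 4, the self-learning calibration strategy described above is sketched below in Python. The class name SelfLearningCalibrator, the default thresholds, the choice of the mean as the characteristic value, and the clearing of an over-scattered statistical set are assumptions made for this sketch rather than requirements of the claims:

from statistics import mean, pstdev
from typing import Dict, List


class SelfLearningCalibrator:
    """Illustrative sketch: collect CAF-confirmed lens motor control values per
    object distance, promote them to trusted values once enough consistent
    samples exist, and calibrate the initial focusing curve only inside
    intervals that satisfy the trusted-segment condition."""

    def __init__(self, sample_threshold: int = 5, max_spread: float = 8.0,
                 scale_a: float = 20.0) -> None:
        self.sample_threshold = sample_threshold   # minimum samples per object distance
        self.max_spread = max_spread               # fault tolerance: maximum sample deviation
        self.scale_a = scale_a                     # scaling factor 'a' in the segment condition
        self.stats: Dict[int, List[int]] = {}      # object distance -> statistical set
        self.trusted: Dict[int, float] = {}        # object distance -> trusted value

    def add_sample(self, object_distance: int, motor_value: int) -> None:
        """Record one CAF-stabilized motor control value, unless the object
        distance already has a trusted value."""
        if object_distance in self.trusted:
            return
        samples = self.stats.setdefault(object_distance, [])
        samples.append(motor_value)
        if len(samples) < self.sample_threshold:
            return
        # Fault tolerance: a widely scattered statistical set is cleared and skipped.
        if pstdev(samples) > self.max_spread:
            samples.clear()
            return
        # Characteristic value of the set (mean here; median or k-means also fit the claims).
        self.trusted[object_distance] = mean(samples)

    def trusted_segment(self, u0: int, u1: int) -> bool:
        """Segment condition from the claims, Q = (u0 - u1) < a * n, evaluated on
        the absolute span of the interval versus the trusted values inside it."""
        n = sum(1 for d in self.trusted if min(u0, u1) <= d <= max(u0, u1))
        return abs(u0 - u1) < self.scale_a * n

    def calibrate_curve(self, initial_curve: Dict[int, float],
                        u0: int, u1: int) -> Dict[int, float]:
        """Overwrite the initial curve with trusted values inside a qualifying
        trusted segment; otherwise leave the curve unchanged."""
        if not self.trusted_segment(u0, u1):
            return initial_curve
        updated = dict(initial_curve)
        for d, v in self.trusted.items():
            if min(u0, u1) <= d <= max(u0, u1):
                updated[d] = v
        return updated

In this sketch the calibrated dictionary returned by calibrate_curve plays the role of the updated focusing curve that the runtime lookup consults; point-by-point calibration of rarely focused object distances (claim 4) would simply copy individual trusted values without the segment check.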
CN202310086377.2A 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning Active CN115802161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086377.2A CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086377.2A CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Publications (2)

Publication Number Publication Date
CN115802161A CN115802161A (en) 2023-03-14
CN115802161B true CN115802161B (en) 2023-05-09

Family

ID=85430627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086377.2A Active CN115802161B (en) 2023-02-09 2023-02-09 Focusing method, system, terminal and medium based on self-learning

Country Status (1)

Country Link
CN (1) CN115802161B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020015754A1 (en) * 2018-07-19 2020-01-23 杭州海康威视数字技术股份有限公司 Image capture method and image capture device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2901116B2 (en) * 1992-11-30 1999-06-07 株式会社日立製作所 Control method of autofocus device
CN105611159A (en) * 2015-12-23 2016-05-25 北京奇虎科技有限公司 Calibration method and device of zooming tracking curve
CN107071243B (en) * 2017-03-09 2019-12-27 成都西纬科技有限公司 Camera focusing calibration system and focusing calibration method
CN110913129B (en) * 2019-11-15 2021-05-11 浙江大华技术股份有限公司 Focusing method, device, terminal and storage device based on BP neural network
US11726392B2 (en) * 2020-09-01 2023-08-15 Sorenson Ip Holdings, Llc System, method, and computer-readable medium for autofocusing a videophone camera
CN112565591B (en) * 2020-11-20 2022-08-02 广州朗国电子科技股份有限公司 Automatic focusing lens calibration method, electronic equipment and storage medium
CN115701123A (en) * 2021-07-29 2023-02-07 华为技术有限公司 Focusing method and device
CN114727023A (en) * 2022-06-07 2022-07-08 杭州星犀科技有限公司 Method and system for adjusting camera parameters

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020015754A1 (en) * 2018-07-19 2020-01-23 杭州海康威视数字技术股份有限公司 Image capture method and image capture device

Also Published As

Publication number Publication date
CN115802161A (en) 2023-03-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant