CN116233605A - Focusing implementation method and device, storage medium and image pickup equipment


Info

Publication number: CN116233605A (application CN202310505127.8A)
Granted publication: CN116233605B
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 熊大军
Original and current assignee: Cix Technology Wuhan Co ltd
Priority to: CN202310505127.8A
Prior art keywords: focusing, definition value, camera, preset, value
Legal status: Active (granted)

Landscapes

  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a focusing implementation method and device, a storage medium, and an image pickup apparatus. The method includes: controlling a first camera to move a first preset step length from a focusing starting point in a preset focusing direction, and controlling a second camera to move a second preset step length from the focusing starting point in the same direction; and determining a focusing target position based on a first initial definition value, a first current definition value, a second initial definition value and a second current definition value, where the first initial definition value is the image definition value of the first camera at the focusing starting point and the second initial definition value is the image definition value of the second camera at the focusing starting point. The first camera and the second camera are then focused based on the focusing target position. By rapidly determining the focusing target position, the overall focusing speed is improved. No repeated adjustment and confirmation are needed, the picture does not repeatedly hunt back and forth, and the user experience is preserved.

Description

Focusing implementation method and device, storage medium and image pickup equipment
Technical Field
The present invention relates to the field of images, and in particular, to a focusing implementation method, a device, a storage medium, and an image capturing apparatus.
Background
With the development of society and the progress of science, living standards have gradually risen, image acquisition is used ever more frequently, and its application fields keep multiplying. For example, images may be captured by a camera during travel, security monitoring, driving recording, and everyday snapshots. Meanwhile, people's demands on image definition keep increasing.
The definition of an image is closely tied to the focusing result: the quality of focusing directly determines how sharp the image is, and how to complete focusing quickly and accurately has become a key problem for those skilled in the art.
Disclosure of Invention
An object of the present application is to provide a focusing implementation method, device, storage medium, and image pickup device so as to at least partially address the above problems.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a focusing implementation method, where the method includes:
controlling a first camera to move a first preset step length from a focusing starting point in a preset focusing direction, and controlling a second camera to move a second preset step length from the focusing starting point in the same preset focusing direction, wherein the first preset step length is smaller than the second preset step length;
determining a focusing target position based on a first initial definition value, a first current definition value, a second initial definition value and a second current definition value;
the first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position;
and focusing the first camera and the second camera based on the focusing target position.
In a second aspect, an embodiment of the present application provides a focusing implementation device, including:
the processing unit is used for controlling the first camera to move a first preset step length from a focusing starting point in a preset focusing direction, and for controlling the second camera to move a second preset step length from the focusing starting point in the same preset focusing direction, wherein the first preset step length is smaller than the second preset step length;
the processing unit is further used for determining a focusing target position based on the first initial definition value, the first current definition value, the second initial definition value and the second current definition value;
the first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position;
and the execution unit is used for focusing the first camera and the second camera based on the focusing target position.
In a third aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory for storing one or more programs; the above-described method is implemented when the one or more programs are executed by the processor.
Compared with the prior art, the focusing implementation method, device, storage medium and image capturing apparatus provided by the embodiments of the application include: controlling the first camera to move a first preset step length from a focusing starting point in a preset focusing direction, and controlling the second camera to move a second preset step length from the focusing starting point in the same direction, wherein the first preset step length is smaller than the second preset step length; determining a focusing target position based on a first initial definition value, a first current definition value, a second initial definition value and a second current definition value, where the first initial definition value is the image definition value of the first camera at the focusing starting point, the first current definition value is the image definition value of the first camera at its current position, the second initial definition value is the image definition value of the second camera at the focusing starting point, and the second current definition value is the image definition value of the second camera at its current position; and focusing the first camera and the second camera based on the focusing target position. By rapidly determining the focusing target position, the overall focusing speed is improved. No repeated adjustment and confirmation are needed, the picture does not repeatedly hunt back and forth, and the user experience is preserved.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting in scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a relationship between a sharpness value of an image and a focus lens position according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image capturing apparatus provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of a focusing implementation method according to an embodiment of the present application;
FIG. 4 is one of the sub-step flow diagrams of S107 provided in the embodiment of the present application;
FIG. 5 is a second schematic flow chart of the substep of S107 according to the embodiment of the present application;
FIG. 6 is a second flowchart of a focusing implementation method according to an embodiment of the present disclosure;
FIG. 7 is a third flow chart of a focusing implementation method according to the embodiment of the present application;
FIG. 8 is one of the sub-step flow diagrams of S103 provided in the embodiments of the present application;
FIG. 9 is a second schematic flow chart of the substep of S103 according to the embodiment of the present application;
fig. 10 is a schematic unit diagram of a focusing implementation device provided in an embodiment of the present application.
In the figure: 10-a processor; 11-a first camera; 12-a second camera; 13-memory; 201-a processing unit; 202-execution unit.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that, the terms "upper," "lower," "inner," "outer," and the like indicate an orientation or a positional relationship based on the orientation or the positional relationship shown in the drawings, or an orientation or a positional relationship conventionally put in use of the product of the application, merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element to be referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Focusing methods include the phase focusing method and the contrast focusing method. The phase focusing method requires a phase-detection chip or a camera sensor that supports phase detection, so its hardware cost is higher; the contrast focusing method, by contrast, requires neither.
The contrast focusing method searches for the most accurate focusing position by stepping the motor: based on the change of the definition value (also called the evaluation function value) of the picture at the focus area, it finds the lens position at which the definition value is maximal.
Optionally, the image sharpness value may be obtained by several alternative implementations. For example, one may compute the sum of squared gray-level differences between adjacent pixels over the whole image, or multiply two gray-level differences within each pixel neighborhood and accumulate the products pixel by pixel, and so on. The calculation functions of these evaluation methods are collectively referred to as evaluation functions; the image is sharpest exactly when the evaluation function value (sharpness value) is largest.
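As an illustration only (the patent gives no formulas), a minimal sketch of the first evaluation function mentioned above, the sum of squared gray-level differences between adjacent pixels, might look like this:

```python
def sharpness_sum_sq_diff(img):
    """Sum of squared gray-level differences between each pixel and its
    right/bottom neighbours over the whole image; larger means sharper."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal neighbour
                total += (img[y][x] - img[y][x + 1]) ** 2
            if y + 1 < h:  # vertical neighbour
                total += (img[y][x] - img[y + 1][x]) ** 2
    return total

# A high-contrast (sharp) patch scores far above a near-uniform (blurred) one.
blurred = [[10, 11, 10], [11, 10, 11], [10, 11, 10]]
sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
assert sharpness_sum_sq_diff(sharp) > sharpness_sum_sq_diff(blurred)
```

In practice such a measure is computed on the gray channel of the live preview frame; the nested-loop form here is purely for clarity.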
The evaluation function of the image has unimodality, i.e. only one maximum exists within the focusing range; unbiasedness, i.e. the evaluation function is maximal only when the system is in the best focus state; and it also reflects the polarity of defocus (whether the lens is in front of or behind the focus position).
Referring to fig. 1, fig. 1 is a schematic diagram of the relationship between the sharpness value of an image and the focusing lens position according to an embodiment of the present application. As shown in fig. 1, the sharpness value of the image is maximal at the in-focus position and gradually decreases toward the near-focus and far-focus ends.
The contrast focusing method comprises the following implementation processes:
In the unfocused state: the whole picture is out of focus, similar to a Gaussian-blur effect, so the pixel colors are relatively uniform.
Focusing is started: the lens starts to move. If the picture gradually becomes clear and the evaluation function value (sharpness value) starts to rise, the lens is moving in the correct direction; if the picture gradually blurs and the evaluation function value (sharpness value) falls, the moving direction is wrong and the lens starts to move in the opposite direction. The lens position and the evaluation function value (sharpness value) are recorded throughout the movement.
In-focus state: as the lens continues to move, the frame becomes its sharpest and the evaluation function value (sharpness value) reaches its highest point; the lens position and value are recorded, but the camera does not yet know that this is the peak, so it keeps moving the lens.
Moving the lens further, the evaluation function value (sharpness value) starts to fall, and moving it still further makes the value fall again; at this point the camera knows it has passed the focus position.
The lens is then retracted to the position with the maximum evaluation function value (sharpness value), completing focusing.
It should be appreciated that lens movement may be controlled by a motor.
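The stages above amount to a hill-climb search over motor positions. The following is an illustrative sketch only, not code from the patent; `sharpness_at` is a hypothetical callback returning the evaluation function value at a given motor position:

```python
def contrast_af(sharpness_at, start, step, max_pos):
    """Hill-climb contrast autofocus: step the lens while sharpness rises,
    reverse once if the first step lowers it, and return the recorded peak."""
    pos = start
    best_pos, best_val = pos, sharpness_at(pos)
    direction = 1
    # If the first trial step makes the picture blurrier, the direction is wrong.
    if sharpness_at(min(max_pos, pos + step)) < best_val:
        direction = -1
    while 0 <= pos + direction * step <= max_pos:
        pos += direction * step
        val = sharpness_at(pos)
        if val >= best_val:
            best_pos, best_val = pos, val  # record position and value
        else:
            break  # value fell: the peak has been passed
    return best_pos  # retract the lens to the recorded maximum
```

This sketch also exhibits the drawbacks discussed next: the lens may traverse most of its range, and it always overshoots the peak by at least one step before retracting.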
In the implementation of the contrast focusing method, the lens travel is long: the focusing lens may have to be pushed all the way from the near-focus end to the far-focus end to find the maximum, so focusing takes too long and is too slow for scenes demanding fast focusing. Moreover, the lens keeps being pushed after the in-focus position has been passed, and the picture is pulled back and forth during this process, giving a poor user experience.
In order to overcome the above problems, the embodiments of the present application provide a focusing implementation method, which can efficiently implement focusing, and the focusing implementation method is applied to the image capturing apparatus provided in the embodiments of the present application. Referring to fig. 2, fig. 2 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present application. As shown in fig. 2, the image pickup apparatus includes a processor 10, a first camera 11, and a second camera 12; the processor 10 is connected to the first camera 11 and the second camera 12, respectively, and can control the motors in the first camera 11 and the second camera 12 to shift the lenses so as to complete focusing.
The processor 10 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the focus implementation method may be accomplished by integrated logic circuitry of hardware in the processor 10 or instructions in the form of software. The processor 10 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In an alternative implementation, the image capturing apparatus further comprises a memory 13, where the memory 13 may comprise a high-speed Random Access Memory (RAM) and may further comprise a non-volatile memory, such as at least one disk storage.
The memory 13 is used to store programs such as programs corresponding to the focus implementation means. The focus implementation means include at least one software function module that can be stored in the memory 13 in the form of software or firmware (firmware) or cured in an Operating System (OS) of the image pickup apparatus. The processor 10 executes the program to implement the focus implementation method after receiving the execution instruction.
In an alternative implementation, the image capturing apparatus may further include other control systems.
It should be understood that the structure shown in fig. 2 is only a schematic structural diagram of a part of the image capturing apparatus, and the image capturing apparatus may further include more or fewer components than those shown in fig. 2, or have a different configuration from that shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
The focusing implementation method provided in the embodiment of the present application may be applied to, but is not limited to, the image capturing apparatus shown in fig. 2. Referring to fig. 3, the focusing implementation method includes S104, S105, S107, and S108, which are specifically described below.
S104, controlling the first camera to move a first preset step length from a focusing starting point in a preset focusing direction.
S105, controlling the second camera to move a second preset step length from the focusing starting point in the same preset focusing direction.
The first preset step length is smaller than the second preset step length. By keeping the step difference, the focusing target position can be more conveniently positioned.
It should be noted that, in the embodiment of the present application, the execution sequence of S104 and S105 is not limited, and the two may be executed synchronously or may be executed separately.
It should be understood that the moving step length in the present application may be a rotating step length of a motor in the camera, and the motor is controlled to rotate in a direction to drive the lens therein to move toward the far focus end or the near focus end.
It is to be understood that after S104 and S105, the image sharpness corresponding to the first camera 11 and the second camera 12 may change.
And S107, determining a focusing target position based on the first initial definition value, the first current definition value, the second initial definition value and the second current definition value.
The first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position.
It should be appreciated that the trend of the image sharpness change may be analyzed by the first starting sharpness value, the first current sharpness value, the second starting sharpness value, and the second current sharpness value, and thus the focus target position may be determined. The focusing target position is the position with the maximum corresponding image definition value. And the overall focusing speed is improved by rapidly determining the focusing target position.
S108, focusing is carried out on the first camera and the second camera based on the focusing target position.
Optionally, both the first camera 11 and the second camera 12 are adjusted to the focusing target position. Or, whichever of the first camera 11 and the second camera 12 is the main camera is adjusted to the focusing target position, and the auxiliary camera is adjusted to a preset position. It will be appreciated that a camera already at the focusing target position requires no adjustment. No repeated adjustment and confirmation are needed, the picture does not repeatedly hunt back and forth, and the user experience is preserved.
In summary, the embodiments of the present application provide a focusing implementation method, including: controlling the first camera to move a first preset step length from a focusing starting point in a preset focusing direction, and controlling the second camera to move a second preset step length from the focusing starting point in the same direction, wherein the first preset step length is smaller than the second preset step length; determining a focusing target position based on a first initial definition value, a first current definition value, a second initial definition value and a second current definition value (the initial values being the image definition values of the two cameras at the focusing starting point, and the current values being their image definition values at their current positions); and focusing the first camera and the second camera based on the focusing target position. By rapidly determining the focusing target position, the overall focusing speed is improved. No repeated adjustment and confirmation are needed, the picture does not repeatedly hunt back and forth, and the user experience is preserved.
With respect to the content of S107, on the basis of fig. 3, the embodiment of the present application further provides an alternative implementation. Referring to fig. 4, S107 includes S107-1, which is specifically described below.
S107-1, when the first current definition value is smaller than the first initial definition value and the second current definition value is smaller than the second initial definition value, the focusing starting point is used as the focusing target position.
It should be appreciated that the first starting sharpness value and the second starting sharpness value at the focus start point are the largest values observed so far for the first camera 11 and the second camera 12. When the first current sharpness value is smaller than the first starting sharpness value and the second current sharpness value is smaller than the second starting sharpness value, the peak has been crossed, and this is unlikely to be a misjudgment caused by fluctuation, so the focus start point can be taken directly as the focus target position.
With respect to the content of S107, on the basis of fig. 3, the embodiment of the present application further provides an alternative implementation. Referring to fig. 5, S107 includes S107-2 and S107-3, which are specifically described below.
And S107-2, when the first current definition value is larger than the first initial definition value and the second current definition value is smaller than the second initial definition value, controlling the second camera to move a first preset step length towards the focusing direction again, and obtaining the check definition value after the second camera moves.
Optionally, when the first current sharpness value is greater than the first starting sharpness value while the second current sharpness value is smaller than the second starting sharpness value, the drop may be caused by fluctuation noise; in that case the focus start point may not correspond to the in-focus position, and taking it directly as the focus target position would make focusing inaccurate, so S107-3 must be executed next.
And S107-3, when the check definition value is smaller than the second current definition value, taking the current position of the first camera as a focusing target position.
Optionally, when the check definition value is smaller than the second current definition value, it indicates that the peak value has been crossed, and the current position of the first camera is used as the focusing target position, so as to avoid misjudgment due to fluctuation.
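Combining S107-1 through S107-3, the branch logic can be sketched as follows. This is an illustrative reconstruction, not code from the patent; `move_and_measure`, `pos_cam1` and `start_pos` are assumed names:

```python
def focus_target(start1, cur1, start2, cur2, move_and_measure, pos_cam1, start_pos):
    """Decide the focusing target position from the four sharpness values.

    move_and_measure(): moves the second camera one more first-preset step
    in the focusing direction and returns the check sharpness value (S107-2).
    Returns the target position, or None when the excerpt's branches do not
    apply and the search must continue.
    """
    if cur1 < start1 and cur2 < start2:
        # S107-1: both values dropped, so the peak lay at the start point.
        return start_pos
    if cur1 > start1 and cur2 < start2:
        # S107-2: the drop may be fluctuation noise, so verify it first.
        check = move_and_measure()
        if check < cur2:
            # S107-3: peak confirmed crossed at camera 1's current position.
            return pos_cam1
    return None  # other outcomes are not covered by this excerpt
```

Note how the asymmetric step lengths pay off here: because the second camera is always ahead of the first, its sharpness values reveal whether the peak lies behind, between, or ahead of the two lens positions.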
On the basis of fig. 3, regarding how focusing proceeds when the peak has not been crossed, the embodiment of the present application further provides an alternative implementation. As shown in fig. 6, the focusing implementation method further includes S106 and S109, which are specifically described below.
S106, determining whether the first current definition value is larger than the first initial definition value and the second current definition value is larger than the second initial definition value. If not, executing S107; if yes, S109 is executed.
After S104 and S105 have been executed, S106 may be executed first. If the determination is negative (at least one current sharpness value is not greater than its starting value), the execution condition of S107 is satisfied and S107 is executed. If it is positive, the peak has not yet been reached and the focusing direction stays unchanged; the first camera 11 and the second camera 12 then need to be brought to the same position, and S109 is executed.
S109, moving the first camera a third preset step length in the focusing direction, and determining the positions of the first camera and the second camera at this time as the new focusing starting point.
The third preset step length is the difference between the second preset step length and the first preset step length.
Optionally, after S109 is performed, the first camera 11 and the second camera 12 are at the same position. It should be appreciated that "same position" is meant relatively, e.g. the same number of steps from the far-focus end, or the same number of steps from the near-focus end.
It should be understood that after S109, S104 and S105 may be repeatedly performed.
In an alternative embodiment, the first preset step length when S104 is executed for the (i+1)-th time is smaller than the first preset step length when S104 is executed for the i-th time, the second preset step length when S105 is executed for the (i+1)-th time is smaller than the second preset step length when S105 is executed for the i-th time, and the first preset step length when S104 is executed for the i-th time is smaller than the second preset step length when S105 is executed for the i-th time.
It will be appreciated that as the adjustment approaches the sharpness maximum, the first and second preset step lengths are reduced, which lowers the risk of stepping across the maximum.
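The shrinking step lengths across iterations can be sketched as follows; halving is an assumed shrink rule, since the patent only requires each iteration's step lengths to be smaller than the previous iteration's, with the first step length smaller than the second within every iteration:

```python
def next_steps(step1, step2, min_step=1):
    """Shrink both preset step lengths for the next iteration (halving here),
    keeping step1 < step2 as the method requires."""
    new1 = max(min_step, step1 // 2)
    new2 = max(new1 + 1, step2 // 2)  # preserve the required step difference
    return new1, new2

# Successive iterations approach the sharpness maximum with ever smaller steps,
# e.g. (8, 12) -> (4, 6) -> (2, 3) -> (1, 2).
```

Any shrink schedule works as long as the two invariants hold; halving is merely a common choice for this kind of coarse-to-fine search.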
Optionally, on the basis of fig. 3, regarding how to determine the focus start point and the focus direction to shorten the duration of the whole focusing process, the embodiment of the present application further provides a possible implementation manner, please refer to fig. 7, and before S104 and S105, the focusing implementation method further includes: s101, S102, and S103 are specifically described below.
S101, controlling the first camera to move a first preset step length from a preset position to a first direction.
S102, controlling the second camera to move a first preset step length from a preset position to a second direction.
The first direction is the far-focus direction or the near-focus direction, and the second direction is the direction opposite to the first direction. The preset position may be, but is not limited to, the midpoint between the far-focus end and the near-focus end.
It should be understood that the execution order of S101 and S102 is not limited.
S103, determining a focusing starting point and a focusing direction based on the first preset definition value, the first verification definition value, the second preset definition value and the second verification definition value.
The first preset definition value is an image definition value of the first camera at a preset position, the first verification definition value is an image definition value of the first camera at the first verification position, the first verification position is a position corresponding to a first preset step length of movement of the first camera from the preset position to a first direction, the second preset definition value is an image definition value of the second camera at the preset position, the second verification definition value is an image definition value of the second camera at the second verification position, and the second verification position is a position corresponding to the first preset step length of movement of the second camera from the preset position to a second direction.
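The probe of S101/S102 and the four values fed into S103 can be sketched as follows, modeling both cameras with a single hypothetical position-to-sharpness function (in practice each camera evaluates its own image; all names here are illustrative, not from the patent):

```python
def probe_direction(sharpness, preset_pos, step, direction=1):
    """S101: move the first camera one first-preset step in the first
    direction; S102: move the second camera one step in the opposite
    (second) direction. Returns the four definition values compared in
    S103, plus the first verification position."""
    first_check_pos = preset_pos + direction * step    # S101
    second_check_pos = preset_pos - direction * step   # S102
    return {
        "first_preset": sharpness(preset_pos),      # value at preset position
        "first_check": sharpness(first_check_pos),  # first verification value
        "second_preset": sharpness(preset_pos),
        "second_check": sharpness(second_check_pos),
        "first_check_pos": first_check_pos,
    }

# with a sharpness peak at position 70, probing from 50 shows the first
# direction (toward the peak) improving and the second direction worsening
r = probe_direction(lambda x: -(x - 70) ** 2, 50, 5)
```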
With respect to the content of S103, on the basis of fig. 7, the embodiment of the present application further provides an alternative implementation. Referring to fig. 8, S103 includes S103-1, which is specifically described below.
S103-1, when the first preset definition value is smaller than the first verification definition value and the second preset definition value is larger than the second verification definition value, determining the first verification position as the focusing starting point and determining the first direction as the focusing direction.
It should be understood that the first preset definition value being smaller than the first verification definition value, together with the second preset definition value being larger than the second verification definition value, means that the moving direction of the first camera 11 is the correct focusing direction and the first verification position is closer to the focusing target position, so S103-1 may be performed.
In order to further shorten the focusing time, on the basis of fig. 7, an alternative implementation is further provided in the embodiment of the present application. Referring to fig. 9, S103 includes S103-2, S103-3, S103-4, and S103-5, which are specifically described below.
S103-2, when the first preset definition value is smaller than the first verification definition value and the second preset definition value is larger than the second verification definition value, moving the second camera to the farthest end in the first direction, and obtaining a third verification definition value.
At this point, it cannot yet be determined which of the first verification position and the farthest end of the first direction (the near-focus end or the far-focus end) is closer to the focusing target position. Therefore, S103-2 and S103-3 need to be performed.
S103-3, determining whether the first verification definition value is greater than the third verification definition value. If yes, executing S103-4; if not, S103-5 is performed.
The third verification definition value is an image definition value of the second camera at the farthest end of the first direction.
If so, it means that the distance from the first verification position to the focusing target position is shorter, then S103-4 is performed. If not, the distance from the farthest end in the first direction to the focusing target position is shorter, and S103-5 is executed.
S103-4, when the first verification definition value is larger than the third verification definition value, the first verification position is determined to be a focusing starting point, and the first direction is determined to be a focusing direction.
S103-5, when the first verification definition value is smaller than the third verification definition value, determining the farthest end of the first direction as a focusing starting point, and determining the second direction as a focusing direction.
It should be understood that the focusing path can be shortened through the steps shown in fig. 9, so as to reduce the focusing time and improve the focusing efficiency.
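The branch logic of fig. 9 (S103-2 through S103-5) can be sketched as below, again using one hypothetical shared sharpness curve for both cameras; the other S103 branches (e.g. fig. 8's S103-1 on its own) are omitted, and all names are illustrative:

```python
def choose_start_and_direction(sharpness, preset_pos, step, far_end):
    """Pick the focusing starting point and direction per S103-2..S103-5.
    The first direction is taken here as the direction toward far_end."""
    d = 1 if far_end > preset_pos else -1
    first_check = preset_pos + d * step    # first verification position
    second_check = preset_pos - d * step   # second verification position
    if (sharpness(preset_pos) < sharpness(first_check)
            and sharpness(preset_pos) > sharpness(second_check)):
        # S103-2: the second camera moves to the farthest end of the first
        # direction, yielding the third verification definition value
        if sharpness(first_check) > sharpness(far_end):   # S103-3
            return first_check, d    # S103-4: start at verification position
        return far_end, -d           # S103-5: start at the farthest end,
                                     # searching back in the second direction
    # other cases are handled by the fig. 8 implementation (not shown)
    return preset_pos, d
```

With a sharpness peak just short of the far end, the farthest end wins the S103-3 comparison and the search proceeds back in the second direction, shortening the focusing path as described above.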
In the method, when the two cameras start focusing, the direction of the sharpest point is searched in both directions simultaneously, so the maximum search range for the sharpest point is halved; after the focusing direction is determined, the point closer to the sharpest point is chosen for further focusing, which narrows the focusing range still further. During focusing, the two cameras calculate the value of the evaluation function simultaneously, so focusing is faster, the back-and-forth hunting of the camera lens is reduced, and the user experience is improved.
Referring to fig. 10, fig. 10 is a schematic diagram of a focusing implementation device according to an embodiment, which is optionally applied to the electronic apparatus described above.
The focusing implementation device includes: a processing unit 201 and an execution unit 202.
The processing unit 201 is configured to control the first camera to move a first preset step length from a focusing start point to a preset determined focusing direction, and the second camera to move a second preset step length from the focusing start point to the preset determined focusing direction, where the first preset step length is smaller than the second preset step length;
the processing unit 201 is further configured to determine a focusing target position based on the first starting definition value, the first current definition value, the second starting definition value, and the second current definition value;
the first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position;
the execution unit 202 is configured to focus the first camera and the second camera based on the focus target position.
Optionally, the processing unit 201 may execute S101 to S107 and S109 described above, and the execution unit 202 may execute S108 described above.
It should be noted that the focusing implementation device provided in this embodiment may execute the method flow shown in the method flow embodiment to achieve the corresponding technical effect. For brevity, for matters not mentioned in this embodiment, reference is made to the corresponding parts of the above embodiments.
The present application also provides a storage medium storing a computer program which, when read and executed, performs the focusing implementation method of the above embodiments. The storage medium may include a memory, a flash memory, a register, a combination thereof, or the like.
An image pickup apparatus, which may be a video camera, a still camera, a mobile phone, a computer, etc., is provided below; as shown in fig. 2, it may implement the above-described focusing implementation method. Specifically, the image pickup apparatus includes a processor 10, a first camera 11, a second camera 12, and a memory 13. The processor 10 is connected to the first camera 11 and the second camera 12, respectively, and can control the motors in the first camera 11 and the second camera 12 to shift the lenses, thereby completing focusing. The processor 10 may be a CPU. The memory 13 is used to store one or more programs that, when executed by the processor 10, perform the focusing implementation method of the above-described embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A method of achieving focusing, the method comprising:
controlling a first camera to move a first preset step length from a focusing starting point to a preset determined focusing direction, and controlling a second camera to move a second preset step length from the focusing starting point to the preset determined focusing direction, wherein the first preset step length is smaller than the second preset step length;
determining a focusing target position based on the first starting definition value, the first current definition value, the second starting definition value and the second current definition value;
the first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position;
and focusing the first camera and the second camera based on the focusing target position.
2. The focus implementation method according to claim 1, wherein the step of determining the focus target position based on the first starting sharpness value, the first current sharpness value, the second starting sharpness value, and the second current sharpness value includes:
and when the first current definition value is smaller than the first starting definition value and the second current definition value is smaller than the second starting definition value, the focusing starting point is used as a focusing target position.
3. The focus implementation method according to claim 1, wherein the step of determining the focus target position based on the first starting sharpness value, the first current sharpness value, the second starting sharpness value, and the second current sharpness value includes:
when the first current definition value is larger than the first initial definition value and the second current definition value is smaller than the second initial definition value, controlling the second camera to move a first preset step length towards the focusing direction again, and obtaining a check definition value after the second camera moves;
and when the checked definition value is smaller than the second current definition value, taking the current position of the first camera as a focusing target position.
4. The focus implementation method according to claim 1, wherein before said determining a focus target position based on the first starting sharpness value, the first current sharpness value, the second starting sharpness value, and the second current sharpness value, the method further comprises:
when the first current definition value is larger than the first initial definition value and the second current definition value is larger than the second initial definition value, moving the first camera towards the focusing direction by a third preset step length, determining the positions of the first camera and the second camera at the moment as new focusing starting points, repeatedly controlling the first camera to move a first preset step length from the focusing starting points towards the preset determined focusing directions, and controlling the second camera to move a second preset step length from the focusing starting points towards the preset determined focusing directions;
wherein the third preset step length is the difference between the second preset step length and the first preset step length.
5. The focus implementation method according to claim 1, wherein before the controlling the first camera to move a first preset step from the focus start point to a preset determined focus direction and the second camera to move a second preset step from the focus start point to the preset determined focus direction, the method further comprises:
controlling the first camera to move a first preset step length from a preset position to a first direction, and controlling the second camera to move a first preset step length from the preset position to a second direction;
the first direction is a far-focus direction or a near-focus direction, and the second direction is a direction opposite to the first direction;
determining the focusing starting point and the focusing direction based on a first preset definition value, a first verification definition value, a second preset definition value and a second verification definition value;
the first preset definition value is an image definition value of the first camera at the preset position, the first verification definition value is an image definition value of the first camera at the first verification position, the first verification position is a position corresponding to a first preset step length of movement of the first camera from the preset position to a first direction, the second preset definition value is an image definition value of the second camera at the preset position, the second verification definition value is an image definition value of the second camera at the second verification position, and the second verification position is a position corresponding to a first preset step length of movement of the second camera from the preset position to a second direction.
6. The method of claim 5, wherein the determining the focus start point and the focus direction based on the first preset definition value, the first verification definition value, the second preset definition value, and the second verification definition value comprises:
and when the first preset definition is smaller than the first verification definition and the second preset definition is larger than the second verification definition, determining the first verification position as the focusing starting point and determining the first direction as the focusing direction.
7. The method of claim 5, wherein the determining the focus start point and the focus direction based on the first preset definition value, the first verification definition value, the second preset definition value, and the second verification definition value comprises:
when the first preset definition is smaller than the first verification definition and the second preset definition is larger than the second verification definition, the second camera is moved to the farthest end in the first direction, and a third verification definition value is obtained;
the third verification definition value is an image definition value of the second camera at the farthest end of the first direction;
when the first verification definition value is larger than the third verification definition value, determining the first verification position as the focusing starting point, and determining the first direction as the focusing direction;
and when the first verification definition value is smaller than the third verification definition value, determining the farthest end of the first direction as the focusing starting point, and determining the second direction as the focusing direction.
8. A focus achieving apparatus, characterized in that the apparatus comprises:
the processing unit is used for controlling the first camera to move a first preset step length from a focusing starting point to a preset determined focusing direction, and the second camera to move a second preset step length from the focusing starting point to the preset determined focusing direction, wherein the first preset step length is smaller than the second preset step length;
the processing unit is further used for determining a focusing target position based on the first initial definition value, the first current definition value, the second initial definition value and the second current definition value;
the first initial definition value is an image definition value of the first camera at the focusing starting point, the first current definition value is an image definition value of the first camera at the current position, the second initial definition value is an image definition value of the second camera at the focusing starting point, and the second current definition value is an image definition value of the second camera at the current position;
and the execution unit is used for focusing the first camera and the second camera based on the focusing target position.
9. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-7.
10. An electronic device, comprising: a processor and a memory for storing one or more programs; the method of any of claims 1-7 is implemented when the one or more programs are executed by the processor.
CN202310505127.8A 2023-05-08 2023-05-08 Focusing implementation method and device, storage medium and image pickup equipment Active CN116233605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310505127.8A CN116233605B (en) 2023-05-08 2023-05-08 Focusing implementation method and device, storage medium and image pickup equipment


Publications (2)

Publication Number Publication Date
CN116233605A (en) 2023-06-06
CN116233605B (en) 2023-07-25

Family

ID=86580907


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181595A1 (en) * 2007-01-29 2008-07-31 Ayelet Pnueli Method and apparatus for calculating a focus metric
CN106713718A (en) * 2017-02-27 2017-05-24 努比亚技术有限公司 Dual camera-based focusing method and mobile terminal
CN107566735A (en) * 2017-09-30 2018-01-09 努比亚技术有限公司 A kind of dual camera focusing method, mobile terminal and computer-readable recording medium
WO2018103299A1 (en) * 2016-12-09 2018-06-14 中兴通讯股份有限公司 Focusing method, and focusing device
WO2018228479A1 (en) * 2017-06-16 2018-12-20 Oppo广东移动通信有限公司 Automatic focusing method and apparatus, storage medium and electronic device
US10247910B1 (en) * 2018-03-14 2019-04-02 Nanotronics Imaging, Inc. Systems, devices and methods for automatic microscopic focus
CN109981965A (en) * 2017-12-27 2019-07-05 华为技术有限公司 The method and electronic equipment of focusing
CN110618578A (en) * 2018-06-19 2019-12-27 广景视睿科技(深圳)有限公司 Projector and projection method
CN111711759A (en) * 2020-06-29 2020-09-25 重庆紫光华山智安科技有限公司 Focusing method, device, storage medium and electronic equipment
CN112492210A (en) * 2020-12-01 2021-03-12 维沃移动通信有限公司 Photographing method and device, electronic equipment and storage medium
US20220311945A1 (en) * 2021-03-29 2022-09-29 Advanced Micro Devices, Inc. Adaptive lens step control with multiple filters for camera fast auto focus




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant