CN116471480A - Focusing method and device, electronic equipment and storage medium - Google Patents

Focusing method and device, electronic equipment and storage medium

Info

Publication number
CN116471480A
CN116471480A (application CN202310393248.8A)
Authority
CN
China
Prior art keywords
focus position
depth
background
field compensation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310393248.8A
Other languages
Chinese (zh)
Inventor
Fan Rongrong (樊荣荣)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wingtech Electronic Technology Co Ltd
Original Assignee
Xian Wingtech Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wingtech Electronic Technology Co Ltd filed Critical Xian Wingtech Electronic Technology Co Ltd
Priority to CN202310393248.8A priority Critical patent/CN116471480A/en
Publication of CN116471480A publication Critical patent/CN116471480A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/675: Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/671: Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a focusing method and device, electronic equipment and a storage medium. The method comprises the following steps: determining an initial focus position corresponding to an image to be shot; performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to the shooting distance between a shooting object and a lens of the electronic equipment, and the shooting object corresponds to a region of interest; and focusing the image to be shot according to the target focus position. Because the focus position is adjusted according to the shooting distance between the shooting object and the lens, the method preserves the sharpness of the shooting object during focusing while significantly improving the sharpness of the background, thereby improving user satisfaction.

Description

Focusing method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of photographing focusing, and relates to a focusing method and device, electronic equipment and a storage medium.
Background
The camera is one of the most popular applications on current smart terminals, and many application scenarios are derived from it; focused photographing and focused snapshots are among the most common scenarios in daily life. For image quality, auto-focus (AF) tuning is particularly important, as it determines whether a scene can be rendered sharply.
At present, when AF focuses on a shooting object, the region of interest (ROI) is locked onto that object so that it is focused more sharply. However, such focusing locks the quasi-focus on the subject: while the subject's sharpness is preserved, the background sharpness is generally poor, and both cannot be achieved at once. Especially when a user wants to capture both a scene and a shooting object within that scene, a satisfactory result is difficult to achieve.
Disclosure of Invention
In view of this, the focusing method and device, electronic device, and storage medium provided in the embodiments of the present application can adjust the focus position according to the shooting distance between the shooting object and the lens of the electronic device, so that during focusing the sharpness of the shooting object is preserved, the sharpness of the background is significantly improved, and user satisfaction is increased.
In a first aspect, a focusing method provided by an embodiment of the present application is applied to an electronic device, where the method includes:
determining an initial focus position corresponding to an image to be shot, wherein the initial focus position is determined according to an interested region in the image to be shot;
performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic equipment, and the shooting object corresponds to the region of interest;
And focusing the image to be shot according to the target focus position.
In a second aspect, an embodiment of the present application provides a focusing device applied to an electronic device, where the device includes:
the initial determining module is used for determining an initial focus position corresponding to the image to be shot, wherein the initial focus position is determined according to the region of interest in the image to be shot;
the target acquisition module is used for performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic equipment, and the shooting object corresponds to the region of interest;
and the first processing module is used for focusing the image to be shot according to the target focus position.
In a third aspect, an electronic device provided in an embodiment of the present application includes a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the focusing method provided in the first aspect when executing the program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the focusing method provided in the first aspect of embodiments of the present application.
According to the focusing method and device, the electronic device, and the computer-readable storage medium provided above, the focus position can be adjusted according to the shooting distance between the shooting object and the lens of the electronic device. Thus, the sharpness of the shooting object is preserved during focusing while the sharpness of the background is significantly improved, increasing user satisfaction and solving the technical problem described in the Background section.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
Fig. 1 is a schematic implementation flow chart of a focusing method according to an embodiment of the present application;
fig. 2 is a schematic implementation flow chart of another focusing method according to an embodiment of the present application;
fig. 3 is a schematic implementation flow chart of another focusing method according to an embodiment of the present application;
fig. 4 is a schematic implementation flow chart of another focusing method according to an embodiment of the present application;
fig. 5 is a general flow chart of a face focusing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of effects before and after depth of field compensation for face focusing according to an embodiment of the present application;
Fig. 7 is a schematic diagram illustrating a configuration of a depth of field compensation value according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating another configuration of depth of field compensation values according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a focusing device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application more apparent, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings. The following examples are illustrative of the present application and are not intended to limit its scope.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application are used to distinguish similar or different objects and do not denote a specific ordering. Where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments described herein can be implemented in an order other than that illustrated or described.
Technical terms related to the present application are described below in order to facilitate understanding of technical solutions of the present application by those skilled in the art.
Camera: a camera or webcam, also called a computer camera, computer eye, or electronic eye, is a video input device widely used in video conferencing, telemedicine, real-time monitoring, and the like. Ordinary users can also hold video and audio conversations over the network through a camera, and it can be used for various popular digital imaging and video processing applications.
AF: automatic Focus is an algorithm that automatically adjusts the lens position to Focus the camera to the clearest position. The algorithm is widely applied to terminal communication equipment such as mobile phones, notebooks, vehicles and the like.
ROI: region of interest, a region of interest. In machine vision and image processing, a region to be processed, called a region of interest, ROI, is outlined from a processed image in the form of a square, a circle, an ellipse, an irregular polygon, or the like. Various operators and functions are commonly used in machine vision software such as Halcon, openCV, matlab to calculate the ROI and process the image in the next step.
The camera is one of the most popular applications on current smart terminals, and many application scenarios are derived from it; focused photographing and focused snapshots are among the most common scenarios in daily life. For image quality, auto-focus (AF) tuning is particularly important, as it determines whether a scene can be rendered sharply.
At present, when AF focuses on a shooting object, the region of interest (ROI) is locked onto that object so that it is focused more sharply. However, such focusing locks the quasi-focus on the subject: while the subject's sharpness is preserved, the background sharpness is generally poor, and both cannot be achieved at once. Especially when a user wants to capture both a scene and a shooting object within that scene, a satisfactory result is difficult to achieve.
In view of this, an embodiment of the present application provides a focusing method: determining an initial focus position corresponding to an image to be shot; performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to the shooting distance between a shooting object and a lens of the electronic device, and the shooting object corresponds to the region of interest; and focusing the image to be shot according to the target focus position. With this method, the focus position can be adjusted according to the shooting distance between the shooting object and the lens, so that the sharpness of the shooting object is preserved during focusing, the sharpness of the background is significantly improved, and user satisfaction is increased.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a schematic implementation flow chart of a focusing method provided in an embodiment of the present application. The method may be applied to an electronic device, which in implementation may be any of various types of devices with information processing capabilities. For example, the electronic device may include a personal computer, a notebook computer, a palmtop computer, a server, or the like; the electronic device may also be a mobile terminal, which may include, for example, a mobile phone, a car computer, a tablet computer, a projector, or the like. As shown in fig. 1, the method may include the following steps 101 to 103:
Step 101: and determining an initial focus position corresponding to the image to be shot, wherein the initial focus position is determined according to the region of interest in the image to be shot.
It should be noted that the initial focus position corresponding to the image to be shot is determined according to the region of interest in that image. The region of interest is the shooting region of central interest in the image to be shot, i.e., a shooting region requiring higher sharpness; its position and size may be set as required. For example, the region of interest may be a preset fixed region, such as a circular region of a certain area in the middle of the image to be shot. It may also be determined automatically according to the shooting scene; for example, when a face shooting scene is detected, the region of interest may be the region corresponding to the face and may move along with the face. It may also be determined according to a shooting object set by the user; for example, when the shooting object is determined to be a tree, the region of interest may be the region corresponding to the tree. The position and size of the region of interest and the corresponding shooting objects are not limited here.
In the embodiment of the present application, after the region of interest is determined, the initial focus position may be obtained according to the region of interest, where an existing method for determining the focus position may be adopted, and the method for determining the initial focus position corresponding to the image to be captured in the embodiment of the present application is not limited.
Illustratively, the electronic device detects whether a face exists in the current shooting scene; when a face exists, the electronic device determines that the region of interest is the region corresponding to the face and determines the initial focus position according to the image within the region of interest.
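The face-ROI step above can be sketched as follows: a bounding box from any face detector is clamped to the frame and used as the ROI from which the initial focus position is computed. The function name and tuple layout are illustrative assumptions, not part of the patent.

```python
def roi_from_face(face_box, frame_size):
    """Clamp a detected face bounding box to the frame and use it as the ROI.

    face_box:   (x, y, w, h) from any face detector
    frame_size: (width, height) of the preview frame
    Returns the clamped (x, y, w, h) region of interest.
    """
    x, y, w, h = face_box
    fw, fh = frame_size
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(fw, x + w), min(fh, y + h)
    return (x0, y0, x1 - x0, y1 - y0)
```

The AF statistics (e.g., a contrast metric) would then be evaluated only inside this region to obtain the initial focus position.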
Step 102: and performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic equipment, and the shooting object corresponds to the region of interest.
After the initial focus position is obtained, depth of field compensation can be performed according to the shooting distance between the shooting object corresponding to the region of interest and the lens of the electronic device, so as to obtain the target focus position. Because the depth of field is related to the shooting distance (the larger the shooting distance, the larger the depth of field; the smaller the shooting distance, the smaller the depth of field), performing depth of field compensation according to the shooting distance between the shooting object and the lens can improve the sharpness of the background after focusing.
For example, when an electronic device such as a mobile phone takes a photo, a face frame is automatically identified and AF focuses with the face frame as the focusing point. After focusing stabilizes, the shooting distance can be calculated, a depth of field compensation value can be determined from the shooting distance, and the target focus position can be obtained from the compensation value. The shooting distance may be calculated by a conventional method, for example, via a distance sensor or an algorithm.
In some embodiments, different depth of field compensation values may be given for different shooting distance values when the target focus position is acquired.
In this embodiment of the present application, performing depth of field compensation on the initial focal position to obtain a target focal position may include: determining a first depth of field compensation value according to the shooting distance and a preset first corresponding relation, wherein the preset first corresponding relation comprises a corresponding relation between the shooting distance and the depth of field compensation value; and performing depth of field compensation on the initial focus position according to the first depth of field compensation value to obtain a target focus position.
It should be noted that the electronic device may pre-store a first correspondence between the shooting distance and the depth of field compensation value, and after the shooting distance between the shooting object corresponding to the region of interest and the lens of the electronic device is obtained, the first depth of field compensation value may be determined according to the shooting distance and the first correspondence.
After the first depth of field compensation value is obtained, the target focus position can be obtained according to the first depth of field compensation value and the initial focus position. For example, the corresponding target focus position may be calculated in such a way that the initial focus position is moved away from the lens by the first depth of field compensation value.
Further, the determining the first depth of field compensation value according to the shooting distance and the preset first correspondence may include: and determining a first depth of field compensation value according to the shooting distance range corresponding to the shooting distance and a preset first corresponding relation.
For example, because a larger shooting distance yields a larger depth of field and a smaller shooting distance yields a smaller depth of field, the electronic device sets a correspondence between shooting distance ranges and depth of field compensation values, i.e., different shooting distance ranges correspond to different depth of field compensation values. When the obtained shooting distance is determined to fall within a first shooting distance range, and the first shooting distance range corresponds to the first depth of field compensation value according to the preset correspondence, the depth of field compensation value corresponding to the shooting distance is the first depth of field compensation value.
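The first correspondence can be sketched as a range lookup table. All distance ranges and compensation values below are invented placeholders, since the patent does not publish concrete numbers, and the sign convention (adding the compensation moves the quasi-focus away from the lens) is an assumption.

```python
# Hypothetical first correspondence: (low_mm, high_mm, compensation_in_dac_codes)
FIRST_CORRESPONDENCE = [
    (0,    300,  0),   # very close: shallow depth of field, no compensation
    (300,  1000, 4),
    (1000, 3000, 8),
    (3000, float("inf"), 12),  # far subject: large depth of field, push furthest
]

def first_compensation(distance_mm):
    # look up the depth of field compensation value for the measured distance
    for low, high, comp in FIRST_CORRESPONDENCE:
        if low <= distance_mm < high:
            return comp
    return 0

def target_focus(initial_dac, distance_mm):
    # shift the initial focus position away from the lens by the compensation
    return initial_dac + first_compensation(distance_mm)
```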
In some embodiments, although the subject is generally considered sharp within the depth of field range, depth of field compensation does in practice have some influence on the subject's sharpness. When the area of the region corresponding to the shooting object in the image to be shot is large, the shooting distance is short; the user is then likely focused on the detail features of the shooting object, and the requirement on the rear depth of field is not high. Therefore, the shooting distance and the area of the region corresponding to the shooting object can be considered together when determining the depth of field compensation value.
In this embodiment of the present application, performing depth of field compensation on the initial focal position to obtain a target focal position may include: acquiring the area of a region corresponding to a shooting object in the region of interest; determining a second depth of field compensation value according to the shooting distance, the area and the preset second corresponding relation, wherein the preset second corresponding relation comprises a corresponding relation among the shooting distance, the area and the depth of field compensation value; and performing depth of field compensation on the initial focus position according to the second depth of field compensation value to obtain a target focus position.
It should be noted that the electronic device may pre-store a second correspondence between the shooting distance, the area of the region, and the depth of field compensation value, and after obtaining the shooting distance between the shooting object corresponding to the region of interest and the lens of the electronic device, and the area of the region corresponding to the shooting object in the region of interest, determine the second depth of field compensation value according to the shooting distance, the area of the region, and the second correspondence. The method for obtaining the area corresponding to the shooting object in the region of interest may adopt an existing method, and the method for obtaining the area corresponding to the shooting object in the region of interest is not limited in this embodiment of the present application.
After the second depth of field compensation value is obtained, the target focus position can be obtained according to the second depth of field compensation value and the initial focus position. For example, the corresponding target focus position may be calculated in such a way that the initial focus position is moved away from the lens by the second depth of field compensation value.
Further, the determining a second depth of field compensation value according to the shooting distance, the area, and the preset second correspondence may include: and determining a second depth of field compensation value according to the distance range corresponding to the shooting distance, the area range corresponding to the area and the preset second corresponding relation.
The larger the shooting object appears in the image, the closer it is to the lens, and the more attention is paid to its details, i.e., the requirement on the rear depth of field is not high. In this case, the depth of field compensation value can be determined by the size of the subject in the image together with the shooting distance. The electronic device sets a correspondence between shooting distance ranges, area ranges, and depth of field compensation values, i.e., different combinations of shooting distance range and area range correspond to different depth of field compensation values. When the acquired shooting distance is determined to fall within a first shooting distance range and the acquired area falls within a first area range, and this combination corresponds to the second depth of field compensation value according to the preset correspondence, the depth of field compensation value corresponding to the shooting distance and the area is the second depth of field compensation value.
For example, when performing depth of field compensation in a face-focusing scene, the size of the face and the shooting distance may be detected first. Shooting distance detection is a mature technology; for example, the shooting distance can be determined by face feature extraction, a distance sensor, or the like, and the face size, i.e., the size of the face in the image to be shot, can be obtained from the diameter of the smallest circle that encloses the eyes and mouth (a biometric technique). After the shooting distance and face size are obtained, the depth of field compensation value can be determined through the preset correspondence.
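A rough sketch of the face-size measurement and the second (distance plus area) correspondence follows. The landmark-based size uses the longest pairwise distance as a simple stand-in for the smallest enclosing circle, and every threshold and compensation value is an illustrative assumption rather than a value from the patent.

```python
import math

def face_size(eye_left, eye_right, mouth):
    """Approximate face size: the longest pairwise distance between the three
    landmarks, used as a simple proxy for the diameter of the smallest circle
    enclosing the eyes and mouth."""
    pts = [eye_left, eye_right, mouth]
    return max(math.dist(a, b) for a in pts for b in pts)

def second_compensation(distance_mm, face_area_ratio):
    """Illustrative second correspondence: a large face close to the lens gets
    little or no background compensation; a small, distant face gets the most."""
    near = distance_mm < 1000          # hypothetical distance range boundary
    large = face_area_ratio > 0.2      # hypothetical area range boundary
    table = {
        (True,  True):  0,   # close-up portrait: detail matters, skip compensation
        (True,  False): 4,
        (False, True):  6,
        (False, False): 10,  # small face far away: push focus toward background
    }
    return table[(near, large)]
```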
It can be understood that when the preset first correspondence and the preset second correspondence are generated, the depth of field value can be calculated according to the depth of field formula; the distance the lens needs to move is then back-calculated from the depth of field value and converted into a compensation value for the digital-to-analog converter (DAC). During shooting, the focus of the lens corresponds to the position to which the lens is moved, which is controlled by the DAC value in the camera.
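The back-calculation described here can be sketched with the standard hyperfocal and thin-lens formulas. The choice of shifting the quasi-focus toward the rear depth of field limit, and the `dac_per_mm` conversion factor, are one plausible reading of the text rather than the patent's exact procedure.

```python
def hyperfocal_mm(focal_mm, aperture_n, coc_mm):
    # standard hyperfocal distance: H = f^2 / (N * c) + f
    return focal_mm * focal_mm / (aperture_n * coc_mm) + focal_mm

def far_dof_limit_mm(focal_mm, aperture_n, coc_mm, subject_mm):
    # rear depth of field limit; infinite once the subject passes hyperfocal
    h = hyperfocal_mm(focal_mm, aperture_n, coc_mm)
    if subject_mm >= h:
        return float("inf")
    return h * subject_mm / (h - subject_mm)

def lens_extension_mm(focal_mm, subject_mm):
    # thin lens: image distance v = f*s/(s-f); extension beyond f is f^2/(s-f)
    return focal_mm * focal_mm / (subject_mm - focal_mm)

def dac_compensation(focal_mm, aperture_n, coc_mm, subject_mm, dac_per_mm):
    """Back-calculate the lens travel needed to shift the quasi-focus from the
    subject toward the rear depth of field limit, then convert it into DAC codes
    (dac_per_mm is a module-specific calibration constant)."""
    target_mm = far_dof_limit_mm(focal_mm, aperture_n, coc_mm, subject_mm)
    if target_mm == float("inf"):
        target_mm = hyperfocal_mm(focal_mm, aperture_n, coc_mm)
    travel = (lens_extension_mm(focal_mm, subject_mm)
              - lens_extension_mm(focal_mm, target_mm))
    return round(travel * dac_per_mm)
```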
Step 103: and focusing the image to be shot according to the target focus position.
After the target focus position is determined, focusing may be performed according to it. Illustratively, the focusing principle of current auto-focus (AF) is to adjust the position of the lens by changing the magnitude of the direct current through the coil in the motor, so that the lens produces a sharp image. After the target focus position is obtained, the electronic device adjusts the motor accordingly, and the motor pushes the lens to the sharpest position.
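A minimal sketch of this final drive step, assuming a voice-coil motor whose driver accepts a 10-bit DAC code (a typical register width, not stated in the patent): the compensated target must be clamped to the driver's range before the write.

```python
def drive_lens(target_dac, dac_min=0, dac_max=1023):
    """Clamp the compensated target position to the VCM driver's DAC range
    before writing it to the motor driver register."""
    return max(dac_min, min(dac_max, target_dac))
```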
The focusing method provided by the embodiments of the present application effectively exploits the depth of field characteristic of the camera module, i.e., applies the front and rear depth of field of the lens to actual photographing, so that the sharpness of the shooting object is preserved during focusing while the sharpness of the background is significantly improved.
In addition, when the distance between the shooting object and the lens changes, the depth of field also changes. To address this, a method is provided for flexibly configuring the depth of field compensation displacement according to the distance between the shooting object and the lens, so that the method can better adapt to various focusing scenes; by effectively using the principle that the greater the shooting distance, the larger the depth of field, the background sharpness is further optimized.
In some embodiments, to avoid meaningless depth of field compensation operations, it may be determined whether the current scene is suitable for depth of field compensation before the depth of field compensation is performed, if not, the depth of field compensation is not performed, and if so, the depth of field compensation is performed.
Fig. 2 is a schematic implementation flow chart of another focusing method provided in the embodiment of the present application, as shown in fig. 2, where the method may further include, before step 102, on the basis of the embodiment shown in fig. 1:
Step 201: determining that a current environmental parameter meets a preset environmental parameter requirement, wherein the current environmental parameter comprises at least one of environmental temperature, environmental humidity and environmental brightness;
the method further comprises the steps of: step 210: and determining that the current environmental parameter does not meet the preset environmental parameter requirement, and focusing according to the initial focus position.
It should be noted that when the current environmental parameters are determined to meet the preset environmental parameter requirements, the target focus position is obtained and focusing is performed according to it; when they do not meet the requirements, focusing is performed directly according to the initial focus position. The current environmental parameters may include at least one of ambient temperature, ambient humidity, and ambient brightness, and the preset environmental parameter requirements may be set as needed. For example, when the ambient temperature is too high or too low, the ambient humidity is too high, or the ambient brightness is too low, better background sharpness cannot be obtained, so depth of field compensation may be configured not to be performed.
For example, the electronic device detects the current scene brightness to determine whether to enable face depth of field compensation: when the ambient brightness is higher than a brightness gain threshold, the face depth of field compensation mechanism is enabled; when the ambient brightness is lower than the threshold, it is not enabled, because in a dark environment special attention to background sharpness is unnecessary. The gain threshold can be flexibly configured.
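The environment gating of steps 201/210 can be sketched as below; the parameter names, units, and thresholds are illustrative assumptions, and only the parameters the device actually reports are checked.

```python
def should_compensate(env, thresholds):
    """Return True only when every reported environment parameter is inside
    its allowed range; missing parameters are skipped (step 201 vs 210)."""
    checks = {
        "temperature": lambda v: thresholds["temp_min"] <= v <= thresholds["temp_max"],
        "humidity":    lambda v: v <= thresholds["humidity_max"],
        "brightness":  lambda v: v >= thresholds["gain_threshold"],
    }
    return all(check(env[name]) for name, check in checks.items() if name in env)
```

When this returns False, the device falls back to focusing at the initial focus position.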
In some embodiments, to avoid meaningless depth of field compensation operations, it may be determined whether the background parameters are suitable for depth of field compensation before the depth of field compensation is performed, if not, the depth of field compensation is not performed, and if so, the depth of field compensation is performed.
Fig. 3 is a schematic implementation flow chart of another focusing method according to the embodiment of the present application, as shown in fig. 3, where the method may further include, before step 102, on the basis of the embodiment shown in fig. 1:
step 301: determining that a background state parameter meets a preset background parameter requirement, wherein the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
the method may further comprise: step 310: determining that the background state parameter does not meet the preset background parameter requirement, and focusing according to the initial focus position.
It should be noted that, when the background state parameter meets the preset background parameter requirement, the target focus position is obtained and focusing is performed according to the target focus position; when it does not, focusing is performed directly according to the initial focus position. The background state parameter may include at least one of a background distance, a background area and a background proportion, and the preset background parameter requirement may be set as needed. For example, when the background distance is too large, the background area is too small, or the background proportion is too small, it is not necessary to obtain better background sharpness, so depth of field compensation may be configured not to be performed.
Illustratively, the electronic device employs background distance detection to determine whether to enable the face depth of field compensation mechanism: a distance threshold is first set, and when the distance between the background and the face is too large, i.e., greater than or equal to the distance threshold, the depth of field compensation mechanism is disabled. This avoids performing depth of field compensation when there is effectively no background behind the face, or when the background is too far from the face for compensation to be meaningful.
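The background-distance gate can be sketched as below. The threshold value and units are hedged assumptions; the application only states that compensation is disabled when the background-to-face distance meets or exceeds a configured threshold.

```python
# Illustrative threshold; the patent leaves the actual value configurable.
BACKGROUND_DISTANCE_THRESHOLD_M = 3.0

def background_allows_compensation(has_background: bool,
                                   background_distance_m: float) -> bool:
    if not has_background:
        return False  # no background at all: compensation is pointless
    # Disabled when the distance is >= the threshold, per the rule above.
    return background_distance_m < BACKGROUND_DISTANCE_THRESHOLD_M
```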
In some embodiments, the current environmental parameters and the background state parameters may be used together to determine whether to perform depth of field compensation, so as to avoid meaningless depth of field compensation operations.
Fig. 4 is a schematic implementation flow chart of another focusing method according to the embodiment of the present application, as shown in fig. 4, where the method may further include, before step 102, on the basis of the embodiment shown in fig. 1:
step 401: determining that a current environmental parameter meets a preset environmental parameter requirement and that a background state parameter meets a preset background parameter requirement, wherein the current environmental parameter comprises at least one of ambient temperature, ambient humidity and ambient brightness, the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
The method may further comprise: step 410: determining that the current environmental parameter does not meet the preset environmental parameter requirement and/or the background state parameter does not meet the preset background parameter requirement, and focusing according to the initial focus position.
It should be noted that the current environmental parameter and the background state parameter are used together to determine whether to perform depth of field compensation: if at least one of them does not meet its preset requirement, depth of field compensation is not performed. This approach determines more accurately whether the current scene is suitable for enabling the depth of field compensation mechanism, avoiding meaningless depth of field compensation that would waste device resources and degrade user experience.
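The combined gate of steps 401/410 can be sketched as below. All thresholds and field names are illustrative assumptions; the application only requires that every configured requirement be met before compensation is performed.

```python
# Illustrative requirement sets; the patent leaves actual values configurable.
ENV_REQUIREMENTS = {"brightness_min": 400.0, "temp_min": -10.0, "temp_max": 45.0}
BG_REQUIREMENTS = {"distance_max_m": 3.0, "proportion_min": 0.1}

def compensation_enabled(env: dict, bg: dict) -> bool:
    env_ok = (env["brightness"] > ENV_REQUIREMENTS["brightness_min"]
              and ENV_REQUIREMENTS["temp_min"] <= env["temperature"]
              <= ENV_REQUIREMENTS["temp_max"])
    bg_ok = (bg["distance_m"] < BG_REQUIREMENTS["distance_max_m"]
             and bg["proportion"] >= BG_REQUIREMENTS["proportion_min"])
    # If either gate fails, fall back to the initial focus position (step 410).
    return env_ok and bg_ok
```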
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
Fig. 5 is a general flow chart of a face focusing method according to an embodiment of the present application. As shown in fig. 5, the method includes the following steps 501 to 508:
step 501: determining whether a face scene is entered; if so, proceeding to step 502, and if not, proceeding to step 507;
step 502: determining a face region of interest (ROI);
step 503: acquiring face focusing data according to a face region of interest;
Step 504: calculating a focusing face position according to the face focusing data;
step 505: starting the depth of field compensation of the face scene;
step 506: calculating a final focus position, and proceeding to step 508;
step 507: adopting another focusing method, and proceeding to step 508;
step 508: and (5) finishing focusing.
In this focusing method, whether a face scene exists is first detected; if a face exists, the AF algorithm selects a suitable ROI frame to obtain face focusing data, obtains the quasi-focus position of the face from that data, and then performs depth of field compensation on the quasi-focus position to obtain the final focus position.
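The flow of steps 501 to 508 can be condensed into the sketch below. The compensation curve and all numeric values are illustrative placeholders, not values from this application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Face:
    distance_m: float  # face-to-lens distance

def depth_compensation_dac(distance_m: float) -> int:
    # Placeholder curve: farther faces get smaller compensation,
    # matching the rule described later in this application.
    return max(0, int(30 - 10 * distance_m))

def final_focus_position(quasi_focus_dac: int, face: Optional[Face]) -> int:
    """Steps 501-508: compensate the face quasi-focus; otherwise fall back."""
    if face is None:
        return quasi_focus_dac  # step 507: another focusing method, no compensation
    # Steps 505/506: shift the quasi-focus backwards by the compensation value.
    return quasi_focus_dac + depth_compensation_dac(face.distance_m)
```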
Fig. 6 is a schematic diagram of the effect before and after depth of field compensation for face focusing according to an embodiment of the present application. As can be seen from fig. 6, before the depth of field compensation mechanism acts, the quasi-focus position falls exactly on the face, and there is a sharp zone in front of the face, namely the front depth of field. After the mechanism acts, the quasi-focus position moves backwards and the object distance increases, so that the face falls within the front depth of field, ensuring face sharpness while optimizing background sharpness. Moreover, as the quasi-focus distance increases, the depth of field range also increases, further improving background sharpness. The underlying principle is that the farther the shooting distance, the greater the depth of field; the closer the shooting distance, the smaller the depth of field.
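The principle stated above can be checked numerically with the standard thin-lens approximation of total depth of field; this formula is textbook optics, not one quoted from this application, and the lens parameters below are illustrative.

```python
def total_depth_of_field(u: float, f: float, n: float, c: float) -> float:
    """Approximate total depth of field for subject distance u >> focal
    length f, f-number n and circle of confusion c (all in mm):
    DOF ~= 2*n*c*u^2 / f^2 (standard thin-lens approximation)."""
    return 2.0 * n * c * u * u / (f * f)

near = total_depth_of_field(u=500.0, f=26.0, n=1.8, c=0.005)   # subject 0.5 m away
far = total_depth_of_field(u=1000.0, f=26.0, n=1.8, c=0.005)   # subject 1.0 m away
# Doubling the shooting distance roughly quadruples the depth of field,
# which is why shifting the quasi-focus backwards widens the sharp zone.
```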
By way of example, the depth of field compensation mechanism can flexibly configure an appropriate compensation size according to the distance between the face and the lens: the closer the quasi-focus object is to the lens, the smaller the depth of field range; the farther it is, the greater the range. Therefore, in order to adapt to different face distances and prevent overcompensation, the depth of field compensation mechanism can be configured through a parameter structure, i.e., a preset correspondence for depth of field compensation.
Fig. 7 is a schematic diagram of a depth of field compensation value configuration according to an embodiment of the present application. As shown in fig. 7, the parameter structure mainly sets different index segments (Index) according to different face distances, and then configures a different compensation threshold (compensate threshold) for each distance segment. Index segments can be flexibly added or removed as needed, and the start position (position start) and end position (position end) of each segment can also be flexibly configured, so that the depth of field compensation mechanism, i.e., the correspondence used for depth of field compensation, can be configured as desired.
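A hypothetical layout of the parameter structure of fig. 7 is sketched below: each index segment covers a face-distance range [position start, position end) and carries its own compensate threshold. All numeric values are illustrative assumptions, not values from this application.

```python
COMPENSATION_TABLE = [
    # (position_start_cm, position_end_cm, compensate_threshold)
    (0, 30, 40),
    (30, 80, 30),
    (80, 150, 20),
    (150, 10_000, 10),  # far faces get the smallest compensation
]

def lookup_compensation(face_distance_cm: float) -> int:
    """Return the compensate threshold for the segment covering the distance."""
    for start, end, threshold in COMPENSATION_TABLE:
        if start <= face_distance_cm < end:
            return threshold
    return 0  # outside all configured segments: no compensation
```

Segments can be added, removed, or re-bounded simply by editing the table, matching the flexibility described above.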
Fig. 8 is a schematic diagram of another depth of field compensation value configuration according to an embodiment of the present application. As shown in fig. 8, sub-segments based on face size may be set within each distance segment index, and the final depth of field compensation value is obtained by multiplying the compensation value from the distance segment by the scaling factor corresponding to the face size. The focus of the lens corresponds to a lens position, and the camera is controlled through a DAC value, so the depth of field compensation value is ultimately applied as a DAC offset to adjust the lens position.
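The face-size scaling of fig. 8 can be sketched as below: the base compensation from the distance segment is multiplied by a size-dependent factor, and the result is applied as a DAC offset to the lens position. The size thresholds and scale factors are illustrative assumptions.

```python
SIZE_SCALES = [
    # (min_face_size_px, scale_factor); checked from largest to smallest
    (400, 1.0),
    (200, 0.8),
    (0, 0.5),
]

def scaled_compensation(base_compensation: int, face_size_px: int) -> int:
    for min_px, scale in SIZE_SCALES:
        if face_size_px >= min_px:
            return int(base_compensation * scale)
    return base_compensation

def compensated_dac(quasi_focus_dac: int, base_compensation: int,
                    face_size_px: int) -> int:
    # The final lens position is the quasi-focus DAC plus the scaled offset.
    return quasi_focus_dac + scaled_compensation(base_compensation, face_size_px)
```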
Different depth of field compensation sizes (compensation threshold, compensate threshold, i.e., Δpos) are thus configured according to the face size and the shooting distance: the farther the face, the smaller the depth of field compensation (i.e., the smaller the compensate threshold (Δpos)), and vice versa. Comparing the states before and after the depth of field compensation mechanism acts shows that the quasi-focus shifts backwards, the shooting distance increases, and the depth of field increases; the face moves into the front depth of field zone, so that the front depth of field is effectively utilized. With this method, the sharpness of the shot object is ensured during focusing while the background sharpness is significantly improved, improving user satisfaction.
It should be understood that, although the steps in the flowcharts of figs. 1-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their execution order and may be performed in other orders. Moreover, at least some of the steps in figs. 1-5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages need not be performed sequentially, but may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
Based on the foregoing embodiments, the embodiments of the present application provide a focusing device, where each module included in the device and each unit included in each module may be implemented by a processor; of course, the method can also be realized by a specific logic circuit; in an implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic structural diagram of a focusing device provided in the embodiment of the present application, as shown in fig. 9, the device 600 includes an initial determining module 601, a target obtaining module 602, and a first processing module 603, where:
an initial determining module 601, configured to determine an initial focus position corresponding to an image to be shot, where the initial focus position is determined according to a region of interest in the image to be shot;
the target obtaining module 602 is configured to perform depth of field compensation on the initial focus position to obtain a target focus position, where the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic device, and the shooting object corresponds to the region of interest;
the first processing module 603 is configured to focus the image to be photographed according to the target focus position.
In some embodiments, the target acquisition module 602 includes a first determination unit and a first focusing unit, wherein,
the first determining unit is configured to determine a first depth of field compensation value according to the shooting distance and a preset first correspondence, where the preset first correspondence includes a correspondence between the shooting distance and the depth of field compensation value;
and the first focusing unit is used for performing depth of field compensation on the initial focus position according to the first depth of field compensation value to obtain a target focus position.
In some embodiments, the target acquisition module 602 includes an area acquisition unit, a second determination unit, and a second focusing unit, wherein,
the area acquisition unit is used for acquiring the area of the region corresponding to the shooting object in the region of interest;
the second determining unit is configured to determine a second depth of field compensation value according to the shooting distance, the area, and the preset second correspondence, where the preset second correspondence includes a correspondence between the shooting distance, the area, and the depth of field compensation value;
and the second focusing unit is used for performing depth of field compensation on the initial focus position according to the second depth of field compensation value to obtain a target focus position.
In some embodiments, the second focusing unit is further specifically configured to: and determining a second depth of field compensation value according to the distance range corresponding to the shooting distance, the area range corresponding to the area and the preset second corresponding relation.
In some embodiments, the target acquisition module 602 is further specifically configured to: determining that a current environmental parameter meets a preset environmental parameter requirement, wherein the current environmental parameter comprises at least one of environmental temperature, environmental humidity and environmental brightness;
the apparatus further comprises: and the second processing module is used for determining that the current environmental parameter does not meet the preset environmental parameter requirement and focusing according to the initial focus position.
In some embodiments, the target acquisition module 602 is further specifically configured to: determine that a background state parameter meets a preset background parameter requirement, wherein the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
The apparatus further comprises: and the second processing module is used for determining that the background state parameter does not meet the preset background parameter requirement and focusing according to the initial focus position.
In some embodiments, the target acquisition module 602 is further specifically configured to: determine that a current environmental parameter meets a preset environmental parameter requirement and that a background state parameter meets a preset background parameter requirement, wherein the current environmental parameter comprises at least one of ambient temperature, ambient humidity and ambient brightness, the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
the apparatus further comprises: and the second processing module is used for determining that the current environment parameter does not meet the preset environment parameter requirement and/or the background state parameter does not meet the preset background parameter requirement, and focusing according to the initial focus position.
In the embodiment of the application, the focus position can be adjusted according to the shooting distance between the shooting object and the lens of the electronic equipment, so that the definition of the shooting object can be ensured in the focusing process, the definition of the background can be obviously improved, and the user satisfaction is improved.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that the division of the focusing device shown in fig. 9 into modules is schematic and is merely a logical function division; there may be other division manners in practical implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware, in software functional units, or in a combination of software and hardware.
It should be noted that, in the embodiment of the present application, if the method is implemented in the form of a software functional module, and sold or used as a separate product, the method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or part contributing to the related art, and the computer software product may be stored in a storage medium, including several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device 100 may include a processor 110, a memory 120, a wireless communication module 130, a display screen 140, a camera 150, a usb interface 160, and the like.
Processor 110 may include one or more processing units. For example, processor 110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more microprocessors (digital signal processors, DSPs) or one or more field programmable gate arrays (FPGAs). The different processing units may be separate devices or may be integrated in one or more processors.
Memory 120 may be used to store computer-executable program code that includes instructions. The internal memory 120 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, video data, etc.), and so on. In addition, the memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The wireless communication module 130 may provide solutions for wireless communication including WLAN, such as Wi-Fi network, bluetooth, NFC, IR, etc., as applied on the electronic device 100. The wireless communication module 130 may be one or more devices integrating at least one communication processing module. In some embodiments of the present application, the electronic device 100 may establish a wireless communication connection with other electronic devices through the wireless communication module 130.
The display screen 140 is used to display images, videos, and the like. The display screen 140 includes a display panel. The display panel may be a liquid crystal display, an organic light-emitting diode, an active-matrix organic light-emitting diode, a flexible light-emitting diode, a Mini LED, a Micro OLED, a quantum dot light-emitting diode, or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 140, N being a positive integer greater than 1.
The camera 150 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the processor 110 for conversion into a digital image signal, which is then converted into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device 100 may include 1 or N cameras 150, N being a positive integer greater than 1.
The USB interface 160 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 160 may be used to connect other electronic devices. In still other embodiments, the electronic device 100 may also be connected to a camera through the USB interface 160 for capturing images.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the focusing method provided in the above-described embodiment.
Any combination of one or more computer readable media may be utilized as the above-described computer readable storage media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (erasable programmable read only memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present specification may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
The present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the focusing method provided by the above method embodiments.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
It should be noted here that: the description of the storage medium, program product, and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium, the program product and the apparatus of the present application, please refer to the description of the method embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments are merely illustrative, and the division of modules is merely a logical function division; other divisions may be used in practice. For example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; can be located in one place or distributed to a plurality of network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each module may be separately used as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the integrated units described above may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or part contributing to the related art, and the computer software product may be stored in a storage medium, including several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application are intended to fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A focusing method, applied to an electronic device, comprising:
determining an initial focus position corresponding to an image to be shot, wherein the initial focus position is determined according to a region of interest in the image to be shot;
performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic device, and the shooting object corresponds to the region of interest;
and focusing the image to be shot according to the target focus position.
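For illustration only (not part of the claim language), the three steps of claim 1 can be sketched as follows; the function names, the pass-through initial-focus stage, and the simple additive compensation model are all assumptions, not taken from the patent text:

```python
# Illustrative sketch of the three steps of claim 1. All names and the simple
# additive compensation model are assumptions, not taken from the patent text.

def focus_position_from_roi(roi_peak_position):
    # Step 1: the initial focus position is determined from the region of
    # interest (here simply passed through from an upstream AF stage).
    return roi_peak_position

def depth_of_field_compensation(shooting_distance_mm):
    # Step 2: the compensation is determined from the subject-to-lens
    # distance; this toy model gives closer subjects a larger offset.
    return 10.0 / max(shooting_distance_mm, 1.0)

def target_focus_position(roi_peak_position, shooting_distance_mm):
    # Step 3: focus the image using the compensated (target) position.
    initial = focus_position_from_roi(roi_peak_position)
    return initial + depth_of_field_compensation(shooting_distance_mm)

print(target_focus_position(100.0, 500.0))  # ≈ 100.02
```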
2. The method of claim 1, wherein performing depth of field compensation on the initial focus position to obtain a target focus position comprises:
determining a first depth of field compensation value according to the shooting distance and a preset first corresponding relation, wherein the preset first corresponding relation comprises a corresponding relation between the shooting distance and the depth of field compensation value;
and performing depth of field compensation on the initial focus position according to the first depth of field compensation value to obtain the target focus position.
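A hedged sketch of the first corresponding relation in claim 2: a banded lookup from shooting distance to a compensation value. The distance bands and values below are invented for illustration; a real table would come from module calibration.

```python
import bisect

# Hypothetical first corresponding relation (claim 2): upper bounds of
# distance bands (mm) and one compensation value per band. Numbers invented.
DISTANCE_BREAKS_MM = [100, 300, 1000]
COMPENSATION_VALUES = [8, 4, 1, 0]  # near ... far (no compensation)

def first_compensation(shooting_distance_mm):
    # Find which band the distance falls into, then return its value.
    band = bisect.bisect_left(DISTANCE_BREAKS_MM, shooting_distance_mm)
    return COMPENSATION_VALUES[band]

print(first_compensation(50))    # 8  (very close subject)
print(first_compensation(2000))  # 0  (distant subject)
```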
3. The method of claim 1, wherein performing depth of field compensation on the initial focus position to obtain a target focus position comprises:
acquiring the area of a region corresponding to the shooting object in the region of interest;
determining a second depth of field compensation value according to the shooting distance, the area and a preset second corresponding relation, wherein the preset second corresponding relation comprises a corresponding relation among the shooting distance, the area and the depth of field compensation value;
and performing depth of field compensation on the initial focus position according to the second depth of field compensation value to obtain the target focus position.
4. The method of claim 3, wherein determining the second depth of field compensation value according to the shooting distance, the area and the preset second corresponding relation comprises:
and determining the second depth-of-field compensation value according to the distance range corresponding to the shooting distance, the area range corresponding to the area and the preset second corresponding relation.
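Claims 3 and 4 can be read as a two-key lookup: the shooting distance is mapped to a distance range and the subject area to an area range, and that pair indexes the second corresponding relation. A sketch with invented bands and values:

```python
# Hypothetical second corresponding relation (claims 3-4). Bands and values
# are illustrative; area_ratio is the fraction of the ROI the subject covers.

def distance_band(d_mm):
    if d_mm < 100:
        return "near"
    if d_mm < 1000:
        return "mid"
    return "far"

def area_band(area_ratio):
    return "large" if area_ratio >= 0.5 else "small"

SECOND_TABLE = {
    ("near", "large"): 10, ("near", "small"): 6,
    ("mid", "large"): 4,   ("mid", "small"): 2,
    ("far", "large"): 1,   ("far", "small"): 0,
}

def second_compensation(d_mm, area_ratio):
    # Claim 4: the value is selected by the ranges the inputs fall into.
    return SECOND_TABLE[(distance_band(d_mm), area_band(area_ratio))]

print(second_compensation(80, 0.7))    # 10
print(second_compensation(5000, 0.1))  # 0
```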
5. The method of claim 1, further comprising, before performing depth of field compensation on the initial focus position to obtain the target focus position:
determining that a current environmental parameter meets a preset environmental parameter requirement, wherein the current environmental parameter comprises at least one of an environmental temperature, an environmental humidity and an environmental brightness;
the method further comprises:
determining that the current environmental parameter does not meet the preset environmental parameter requirement, and focusing according to the initial focus position.
6. The method of claim 1, further comprising, before performing depth of field compensation on the initial focus position to obtain the target focus position:
determining that a background state parameter meets a preset background parameter requirement, wherein the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
the method further comprises:
determining that the background state parameter does not meet the preset background parameter requirement, and focusing according to the initial focus position.
7. The method of claim 1, further comprising, before performing depth of field compensation on the initial focus position to obtain the target focus position:
determining that a current environmental parameter meets a preset environmental parameter requirement and a background state parameter meets a preset background parameter requirement, wherein the current environmental parameter comprises at least one of an environmental temperature, an environmental humidity and an environmental brightness, the background state parameter comprises at least one of a background distance, a background area and a background proportion, the background distance is used for indicating the distance between a background in the image to be shot and the region of interest, the background area is used for indicating the area of the background in the image to be shot, and the background proportion is used for indicating the ratio of the area of the background in the image to be shot to the total area of the image to be shot;
the method further comprises:
determining that the current environmental parameter does not meet the preset environmental parameter requirement and/or the background state parameter does not meet the preset background parameter requirement, and focusing according to the initial focus position.
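The gating in claims 5-7 amounts to: apply the compensation only when the environment and background checks pass, and otherwise fall back to the initial focus position. A sketch with invented thresholds:

```python
# Sketch of the gating in claims 5-7. All thresholds are invented for
# illustration; a real device would use calibrated limits.

def environment_ok(temp_c, humidity_pct, brightness_lux):
    # Claim 5: the current environmental parameters must meet preset limits.
    return -10 <= temp_c <= 50 and humidity_pct <= 90 and brightness_lux >= 10

def background_ok(bg_distance_mm, bg_area_px, bg_ratio):
    # Claim 6: bg_ratio is background area / total image area.
    return bg_distance_mm >= 200 and bg_area_px >= 1000 and bg_ratio <= 0.8

def choose_focus(initial, compensation, env, bg):
    # Claim 7: compensate only when BOTH requirements are met; otherwise
    # focus at the initial focus position.
    if environment_ok(*env) and background_ok(*bg):
        return initial + compensation
    return initial

print(choose_focus(100, 5, (25, 40, 300), (500, 2000, 0.3)))  # 105
print(choose_focus(100, 5, (25, 40, 300), (100, 2000, 0.3)))  # 100
```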
8. A focusing device, characterized in that it is applied to an electronic device, the device comprising:
the initial determining module is used for determining an initial focus position corresponding to an image to be shot, wherein the initial focus position is determined according to an interested region in the image to be shot;
the target acquisition module is used for performing depth of field compensation on the initial focus position to obtain a target focus position, wherein the depth of field compensation is determined according to a shooting distance between a shooting object and a lens of the electronic device, and the shooting object corresponds to the region of interest;
and the first processing module is used for focusing the image to be shot according to the target focus position.
9. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the focusing method of any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the focusing method as claimed in any one of claims 1 to 7.
CN202310393248.8A 2023-04-13 2023-04-13 Focusing method and device, electronic equipment and storage medium Pending CN116471480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310393248.8A CN116471480A (en) 2023-04-13 2023-04-13 Focusing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310393248.8A CN116471480A (en) 2023-04-13 2023-04-13 Focusing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116471480A true CN116471480A (en) 2023-07-21

Family

ID=87176503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310393248.8A Pending CN116471480A (en) 2023-04-13 2023-04-13 Focusing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116471480A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117676331A (en) * 2024-02-01 2024-03-08 荣耀终端有限公司 Automatic focusing method and electronic equipment
CN117915200A (en) * 2024-03-19 2024-04-19 成都唐米科技有限公司 Fast focus-following shooting method and device based on binocular camera and binocular equipment
CN118158529A (en) * 2024-05-09 2024-06-07 深圳市众鑫创展科技有限公司 Focusing method and device of image pickup equipment, terminal equipment and storage medium


Similar Documents

Publication Publication Date Title
US20220076006A1 (en) Method and device for image processing, electronic device and storage medium
CN116471480A (en) Focusing method and device, electronic equipment and storage medium
US9300858B2 (en) Control device and storage medium for controlling capture of images
US20200314331A1 (en) Image capturing apparatus, method for controlling the same, and storage medium
CN106031147B (en) For using the method and system of corneal reflection adjustment camera setting
CN108040204B (en) Image shooting method and device based on multiple cameras and storage medium
CN104811609A (en) Photographing parameter adjustment method and device
CN105959543A (en) Shooting device and method of removing reflection
US20110128401A1 (en) Digital photographing apparatus and method of controlling the same
US10778903B2 (en) Imaging apparatus, imaging method, and program
US11729488B2 (en) Image capturing apparatus, method for controlling the same, and storage medium
JP2024023712A (en) Imaging apparatus, method for imaging, imaging program, and recording medium
US20200204722A1 (en) Imaging apparatus, imaging method, and program
EP3211879B1 (en) Method and device for automatically capturing photograph, electronic device
CN111586280A (en) Shooting method, shooting device, terminal and readable storage medium
CN111277754B (en) Mobile terminal shooting method and device
CN114338956B (en) Image processing method, image processing apparatus, and storage medium
CN107707819B (en) Image shooting method, device and storage medium
KR20230091097A (en) Mechanisms to improve image capture operations
CN114339017B (en) Distant view focusing method, device and storage medium
US12041337B2 (en) Imaging control apparatus, imaging control method, program, and imaging device
CN109862252B (en) Image shooting method and device
CN117177055A (en) Focusing method, focusing device and storage medium
CN117395504A (en) Image acquisition method, device and storage medium
CN107317977B (en) Shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination