CN116801101A - Focusing method, focusing device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN116801101A
Authority
CN
China
Prior art keywords
image
focusing
focus
value
display area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310943947.5A
Other languages
Chinese (zh)
Inventor
邓智桂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310943947.5A priority Critical patent/CN116801101A/en
Publication of CN116801101A publication Critical patent/CN116801101A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Abstract

The application discloses a focusing method, a focusing device, an electronic device, and a readable storage medium, belonging to the technical field of imaging. The method comprises the following steps: generating a third image based on a first image and a second image continuously acquired by an image sensor in the electronic device; acquiring a first focus value of the first image, a second focus value of the second image, and a third focus value of the third image; and controlling, based on the first focus value, the second focus value, and the third focus value, the image sensor to focus at a first focus position.

Description

Focusing method, focusing device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image pickup, and particularly relates to a focusing method, a focusing device, electronic equipment and a readable storage medium.
Background
Currently, in the Automatic Focus (AF) process of an electronic device, AF statistics (AF stats) may be acquired by an AF stats module, which then transmits the acquired AF stats to an AF control algorithm module. After receiving the AF stats, the AF control algorithm module may perform phase focusing according to the AF stats to determine a focus position, then perform contrast focusing multiple times according to the AF stats to correct the focus position determined by phase focusing, and finally focus at the corrected focus position.
However, with the above method, automatic focusing requires both phase focusing and multiple rounds of contrast focusing before the device can focus at the corrected position, and both steps involve a large amount of computation by the AF control algorithm, so the electronic device consumes considerable power during automatic focusing.
Disclosure of Invention
The embodiments of the application aim to provide a focusing method, a focusing device, an electronic device, and a readable storage medium, which can solve the problem of relatively high power consumption during automatic focusing of an electronic device.
In a first aspect, an embodiment of the present application provides a focusing method, including: generating a third image based on a first image and a second image continuously acquired by an image sensor in the electronic device; acquiring a first Focus Value (FV) of the first image, a second focus value of the second image, and a third focus value of the third image; and controlling, based on the first focus value, the second focus value, and the third focus value, the image sensor to focus at a first focus position.
In a second aspect, an embodiment of the present application provides a focusing device, where the device includes a generating module, an acquiring module, and a control module; the generation module is used for generating a third image based on the first image and the second image which are continuously acquired by the image sensor in the electronic equipment; the acquisition module is used for acquiring a first focusing value of the first image, a second focusing value of the second image and a third focusing value of the third image; and the control module is used for controlling the image sensor to focus based on the first focusing position based on the first focusing value, the second focusing value and the third focusing value acquired by the acquisition module.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a third image can be generated based on a first image and a second image continuously acquired by an image sensor in the electronic device; a first focus value of the first image, a second focus value of the second image, and a third focus value of the third image are acquired; and the image sensor is controlled to focus at a first focus position based on the three focus values. With this scheme, automatic focusing uses the focus values of the two continuously acquired images and of the image generated from them, instead of phase focusing followed by multiple rounds of contrast focusing; the computation performed by the electronic device is therefore reduced, and so is its power consumption during automatic focusing.
Drawings
FIG. 1 is a first flowchart of a focusing method according to an embodiment of the present application;
FIG. 2 is a second flowchart of a focusing method according to an embodiment of the present application;
FIG. 3 is a third flowchart of a focusing method according to an embodiment of the present application;
FIG. 4 is a fourth flowchart of a focusing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of determining a first focus position in a focusing method according to an embodiment of the present application;
FIG. 6 is a first schematic diagram of a data transmission path in a focusing method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of phase focusing in a focusing method according to an embodiment of the present application;
FIG. 8 is a fifth flowchart of a focusing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a first display area in a focusing method according to an embodiment of the present application;
FIG. 10 is a second schematic diagram of a data transmission path in a focusing method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a second display area in a focusing method according to an embodiment of the present application;
FIG. 12 is a sixth flowchart of a focusing method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a first identifier in a focusing method according to an embodiment of the present application;
FIG. 14 is a seventh flowchart of a focusing method according to an embodiment of the present application;
FIG. 15 is a first schematic diagram of determining a third display area in a focusing method according to an embodiment of the present application;
FIG. 16 is a second schematic diagram of determining a third display area in a focusing method according to an embodiment of the present application;
FIG. 17 is a schematic diagram of overlap between a first display area and a third display area in a focusing method according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a focusing device according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 20 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application; the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one object or a plurality of objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The terms "at least one" and the like in the description and claims cover any one, any two, or any combination of two or more of the listed objects. For example, "at least one of a, b, and c" may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", where a, b, and c may each be singular or plural. Similarly, "at least two" means two or more, with a meaning analogous to "at least one".
The terms or terms used in the description and claims of the present application will be explained first.
Integrated Circuit (IC) chip: an integrated circuit formed by placing a large number of microelectronic components (such as transistors, resistors, and capacitors) on a substrate to form a chip; an IC chip may be a wafer-level (bare-die) chip or a packaged chip.
RAW: the format in which an image sensor outputs an image. A RAW image, i.e., an original image (also called a digital negative), cannot be used directly as a finished image but contains all of the captured image information; RAW images typically have a wide internal color gamut, can be precisely adjusted, and can be modified before being converted to images of other formats.
Region of Interest (ROI): in machine vision and image processing, a region to be processed that is outlined from the image being processed as a square, circle, ellipse, irregular polygon, or the like.
AF stats: auto-focus statistics, i.e., information such as the focus position, confidence, phase difference, average brightness, and focus value of the ROI of the current RAW image, calculated and counted from the input RAW image data; AF stats are used by the auto-focus control algorithm for decision-making and for driving the motor.
Micro Controller Unit (MCU): a chip-level computer formed by appropriately reducing the frequency and specification of a Central Processing Unit (CPU) and integrating peripheral interfaces such as memory, timers/counters, Universal Serial Bus (USB), A/D conversion, Universal Asynchronous Receiver/Transmitter (UART), Programmable Logic Controller (PLC), Direct Memory Access (DMA), and even a Liquid Crystal Display (LCD) driver circuit on a single chip.
Motion Estimation and Motion Compensation (MEMC): by calculating the motion trajectory of picture objects between frames, motion-compensated frames are synthesized and inserted, so that more frames are displayed than were originally captured.
The focusing method, the focusing device, the electronic equipment and the readable storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Currently, AF in a camera generally includes phase focusing (Phase Detection Autofocus) and contrast focusing (Contrast Detection Autofocus). The principle is to calculate and count AF stats (including focus position, confidence, phase difference, average brightness, etc.) using the information of the ROI in the currently captured image; the AF control algorithm can then push the motor to the focus position based on the obtained AF stats, so that the ROI in the current shot becomes clear and focusing is completed.
Throughout the focusing process, calculating and counting the AF stats of the currently captured image is essential. AF stats for contrast focusing are typically computed in hardware; AF stats for phase focusing, constrained by the many differences among image sensors, are typically computed in software, though they may also be computed in hardware.
In general, contrast focusing is common in consumer digital cameras. During focusing, the lens is repeatedly moved back and forth while image data is acquired in real time by the photosensitive element and transmitted in real time to the image processor; the image processor then calculates the contrast from the received image data, compares the results to find the maximum contrast value, and determines the in-focus position from it. The focusing speed of contrast focusing is therefore relatively slow, but it achieves high focusing accuracy. Phase focusing, by contrast, reserves some masked pixels on the photosensitive element dedicated to phase detection, and determines the focusing offset from the distance between these pixels and its change, thereby achieving focus. Compared with contrast focusing, phase focusing does not require repeated lens movement: the focusing stroke is short and the process is decisive, without hunting. However, because phase detection relies on the masked pixels on the photosensitive element, phase focusing demands more light; although it focuses quickly, its accuracy is limited, i.e., it does not necessarily reach the most accurate focus position.
Thus, current AF in electronic devices typically uses hybrid focusing: a focus position is first calculated by phase focusing and the lens is pushed there by a motor, so that, thanks to depth of field, the ROI is already roughly sharp; contrast focusing is then applied multiple times to find the precise focus position. Specifically, during AF, AF stats may be obtained by the AF stats module of the main control chip in the electronic device and transmitted to the AF control algorithm module in the main control chip. After receiving the AF stats, the AF control algorithm module may perform phase focusing according to the AF stats to determine a focus position, then perform contrast focusing multiple times according to the AF stats to correct that position, and finally focus at the corrected position.
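The hybrid flow described above — a coarse lens position from a single phase-detection readout, refined by a local contrast sweep — can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's method: `phase_estimate`, `capture_at`, and the gradient-based sharpness metric are hypothetical stand-ins for the real phase-detection hardware, capture pipeline, and contrast statistic.

```python
import numpy as np

def contrast_metric(image: np.ndarray) -> float:
    """Sum of squared gradients: a common sharpness proxy for contrast AF."""
    img = image.astype(np.float64)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def hybrid_autofocus(phase_estimate, capture_at, search_radius=2):
    """Phase detection yields a coarse lens position in one shot; contrast
    focusing then refines it by sweeping a small neighbourhood of positions
    and keeping the one whose captured frame scores sharpest."""
    coarse = phase_estimate()  # single phase-detection readout
    candidates = range(coarse - search_radius, coarse + search_radius + 1)
    scores = [(contrast_metric(capture_at(p)), p) for p in candidates]
    return max(scores)[1]
```

The sweep is exactly the "multiple contrast focusing" step the patent seeks to avoid: one frame must be captured and scored per candidate position.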
However, with the above method, automatic focusing requires both phase focusing and multiple rounds of contrast focusing before the device can focus at the corrected position; both involve a large amount of computation by the AF control algorithm, and throughout the process the AF control algorithm module and AF stats module of the main control chip and the motor must communicate back and forth, resulting in high power consumption during automatic focusing.
In order to solve the above problems, embodiments of the present application provide a focusing method, a focusing device, an electronic device, and a readable storage medium.
In the scheme of the application, a third image can be generated based on a first image and a second image continuously acquired by an image sensor in the electronic device; a first focus value of the first image, a second focus value of the second image, and a third focus value of the third image are acquired; and the image sensor is controlled to focus at a first focus position based on the three focus values. With this scheme, automatic focusing uses the focus values of the two continuously acquired images and of the image generated from them, instead of phase focusing followed by multiple rounds of contrast focusing; the computation performed by the electronic device is therefore reduced, and so is its power consumption during automatic focusing.
It should be noted that, in the focusing method provided by the embodiments of the present application, the execution body may be a focusing device, an electronic device, or a functional module in the electronic device. The embodiments below describe the focusing method by taking an electronic device executing it as an example.
Fig. 1 shows a flowchart of a focusing method provided by an embodiment of the present application. As shown in fig. 1, the focusing method provided in the embodiment of the present application may include the following steps 101 to 103.
Step 101, the electronic device generates a third image based on the first image and the second image continuously acquired by the image sensor in the electronic device.
Alternatively, in the embodiment of the present application, the first image and the second image may be video frames continuously acquired by the image sensor.
Alternatively, in an embodiment of the present application, the first image and the second image may be images continuously acquired by the image sensor for the first object.
Alternatively, in the embodiment of the present application, the first object may be a stationary object or a moving object.
For example, the first object may be a stationary cup, a flower, a building, or the like.
For example, the first object may be a bird, a fallen leaf, a traveling car, or the like.
An image sensor is a device in an electronic apparatus that uses the photoelectric conversion function of a photoelectric device to convert the optical image on its photosensitive surface into an electrical signal proportional to that image.
Optionally, in an embodiment of the present application, the second image may be the image most recently acquired by the image sensor (i.e., at the current system moment), and the first image may be: the image acquired before the second image whose acquisition time is closest to that of the second image.
Optionally, in an embodiment of the present application, the third image may be: a new image inserted between the first image and the second image.
A specific method for the electronic device to generate the third image is described in detail below.
Alternatively, in the embodiment of the present application, as shown in fig. 2 in conjunction with fig. 1, the above step 101 may be specifically implemented by the following step 101a.
In step 101a, the electronic device performs frame interpolation processing by using the image data of the first image and the image data of the second image through the image processing chip, so as to generate a third image.
The image processing chip is an IC chip independent of the main control chip in the electronic equipment.
In the embodiment of the application, the image processing chip is arranged outside the main control chip.
It should be noted that the main control chip, i.e., the core component of the main board or hard disk of the electronic device, is the bridge connecting the hardware and the brain controlling the operation of the device. A main control chip generally comprises a south bridge chip and a north bridge chip: the south bridge is responsible for communication between I/O (input/output) buses, and the north bridge is responsible for the connection with the CPU and for controlling the memory. The main control chip can be used to load and run an operating system, perform encoding and decoding, and the like.
Alternatively, in the embodiment of the present application, the image data may be RAW data collected by an image sensor in the electronic device and transmitted to the image processing chip.
In the embodiments of the application, frame interpolation generates an intermediate frame from the data of two continuously acquired adjacent images via a frame interpolation algorithm and inserts it between them. The basic idea of frame interpolation is to calculate the displacement of moving objects in the image sequence and predict their positions at the intermediate moment, thereby completing the motion trajectory so that the picture is smoother and trailing is reduced.
Optionally, in an embodiment of the present application, the frame inserting algorithm may be: phase-based interpolation, adaptive convolution kernel-based interpolation, phantom-based interpolation, or optical flow-based interpolation.
For example, taking the optical-flow-based frame interpolation algorithm as an example, such algorithms may be classified into sparse optical flow methods and dense optical flow methods; the sparse optical flow method has a small computation load and strong scalability but an average interpolation effect, while the dense optical flow method takes all points in the image as target points and calculates the motion trajectories of all of them in subsequent frames.
For the specific description of the above-mentioned frame inserting algorithm, reference may be made to the related description in the related art, and in order to avoid repetition, the description is omitted here.
In the embodiment of the application, the electronic equipment can generate the third image between the first image and the second image through frame insertion processing, so that the smoothness and continuity between the generated third image and the first image and the second image respectively can be ensured.
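The interpolation idea above — estimate the displacement between the two frames and predict the object's position at the intermediate moment — can be illustrated with a deliberately tiny sketch. A real MEMC block estimates per-block or per-pixel motion (e.g. via optical flow); here, as a stated simplification, a single global shift found by brute-force search stands in for motion estimation, and all function names are hypothetical.

```python
import numpy as np

def estimate_global_shift(first, second, max_shift=3):
    """Brute-force global motion estimate: the (dy, dx) shift of `first`
    that best matches `second` under sum of squared differences."""
    best = (np.inf, (0, 0))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            diff = np.roll(first, (dy, dx), axis=(0, 1)) - second
            best = min(best, (float((diff ** 2).sum()), (dy, dx)))
    return best[1]

def interpolate_middle_frame(first, second, max_shift=3):
    """Motion-compensated interpolation in miniature: estimate the
    displacement between the two frames, move each frame half-way toward
    the other, and average the two half-way predictions."""
    dy, dx = estimate_global_shift(first, second, max_shift)
    half_fwd = np.roll(first, (dy // 2, dx // 2), axis=(0, 1))
    half_bwd = np.roll(second, (-(dy - dy // 2), -(dx - dx // 2)), axis=(0, 1))
    return (half_fwd.astype(np.float64) + half_bwd.astype(np.float64)) / 2.0
```

An object that moves two pixels down between the first and second frames lands one pixel down in the interpolated middle frame, which is precisely the "predict the position at the intermediate moment" behaviour the text describes.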
Optionally, in an embodiment of the present application, the image processing chip may include a MEMC module for motion estimation and motion compensation; the image data of the image acquired by the image sensor in real time may be transmitted to the MEMC module, and the MEMC module generates the third image based on the image data of the first image and the image data of the second image after acquiring the image data of the first image and the image data of the second image.
Step 102, the electronic device obtains a first focus value of the first image, a second focus value of the second image, and a third focus value of the third image.
Alternatively, in the embodiment of the present application, the first focus value may be calculated by the electronic device after the first image is acquired and stored in the electronic device, or may be calculated by the electronic device after the third image is generated.
Optionally, in an embodiment of the present application, the second focus value may be calculated by the electronic device after the second image is acquired and stored in the electronic device, or may be calculated by the electronic device after the third image is generated.
Optionally, in an embodiment of the present application, the third focus value may be calculated by the electronic device after the third image is generated.
A specific method for the electronic device to acquire the first, second, and third focus values will be described in detail below, taking as an example that the first, second, and third focus values are all calculated after the third image is generated.
Alternatively, in an embodiment of the present application, as shown in fig. 3 in conjunction with fig. 1, the above step 102 may be specifically implemented by the following steps 102a to 102c.
Step 102a, the electronic device obtains focusing information of the first image according to image data of the first image through the image processing chip, and obtains a first focusing value according to the focusing information of the first image.
Wherein, the focusing information includes: image contrast, image confidence, and image average brightness.
It should be noted that the image contrast of the first image is a measure of the different brightness levels between the brightest white and the darkest black in the image, i.e., the magnitude of its gray-level contrast: the larger the image contrast, the larger the gray-level contrast, and vice versa. The image confidence (also called confidence level) of the first image is the credibility of its pixels, i.e., the proportion of pixels in the first image whose values fall within a given confidence interval. The average brightness of the first image is the mean of the brightness values of all its pixels.
Optionally, in an embodiment of the present application, the image data may include: a gray value of a pixel, a pixel value of a pixel, a luminance value of a pixel, and the like.
Optionally, in the embodiment of the present application, the image processing chip may calculate an image contrast of the first image by using gray values of all pixels in image data of the first image; and the image confidence of the first image can be obtained through calculation through the pixel values of all pixels in the image data of the first image; and calculating the average brightness of the first image by the brightness values of all pixels in the image data of the first image.
Optionally, in the embodiment of the present application, after the image processing chip acquires the focusing information of the first image, it may determine the ROI in the first image (for example, the display area of the shooting object in the first image) according to the image contrast in the focusing information; then judge the credibility of the pixels in the ROI according to the image confidence in the focusing information and, when the image confidence is greater than or equal to a confidence threshold, determine the average brightness of the ROI according to the image average brightness in the focusing information; and then calculate the focus value corresponding to the ROI from the average brightness of the ROI and determine it as the first focus value.
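Step 102a can be sketched in miniature as follows: contrast from gray-level gradients, confidence as the share of pixels inside a trusted interval, and mean brightness, combined into a focus value. The patent does not give the actual statistics or combination formula, so the gradient-based contrast proxy, the interval bounds, the `CONFIDENCE_THRESHOLD` value, and the rule "use the contrast score only when confidence clears the threshold" are all hypothetical assumptions for illustration.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.5  # hypothetical; the patent only says "a confidence threshold"

def focus_info(image: np.ndarray, lo: float = 16.0, hi: float = 240.0) -> dict:
    """Per-image focusing information: contrast, confidence, and average
    brightness, all computed from raw pixel data."""
    img = image.astype(np.float64)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return {
        "contrast": float((gx ** 2).sum() + (gy ** 2).sum()),      # gray-level contrast proxy
        "confidence": float(((img >= lo) & (img <= hi)).mean()),   # share of pixels in the trusted interval
        "avg_brightness": float(img.mean()),
    }

def focus_value(info: dict) -> float:
    """Hypothetical combination: trust the contrast score only when the
    confidence clears the threshold."""
    if info["confidence"] < CONFIDENCE_THRESHOLD:
        return 0.0
    return info["contrast"]
```

With this sketch, a sharp frame scores a higher focus value than a blurred copy of itself, which is the property the comparison in step 103 relies on.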
Step 102b, the electronic device obtains focusing information of the second image according to the image data of the second image through the image processing chip, and obtains a second focusing value according to the focusing information of the second image.
Step 102c, the electronic device obtains focusing information of the third image according to the image data of the third image through the image processing chip, and obtains a third focus value according to the focusing information of the third image.
For a specific method for the electronic device to obtain the second focus value and the third focus value, reference may be made to the description of the electronic device to obtain the first focus value in the step 102a, so that repetition is avoided and details are not repeated here.
It should be noted that, regarding the execution order of steps 102a, 102b, and 102c, the embodiment above merely takes sequential execution as an example; in actual implementation the electronic device may execute them in any order, which is not limited by the embodiments of the present application. That is, the electronic device may perform steps 102b, 102c, and 102a in sequence, or steps 102b, 102a, and 102c, or steps 102c, 102a, and 102b, and so on.
In the embodiment of the application, the electronic device can acquire the focusing value of each image based on the image contrast, the image confidence and the image average brightness of each image in the first image, the second image and the third image, so that the focusing value of each image can be acquired based on the characteristics of multiple dimensions of the image, and the accuracy of acquiring the focusing value of the image can be improved.
Optionally, in the embodiment of the present application, the image processing chip may further include an AF stats module; the MEMC module may transmit the image data of the first image, the image data of the second image, and the image data of the third image to the AF stats module after generating the third image; the AF stats module may calculate the first pair of focus values, the second pair of focus values, and the third pair of focus values, respectively, according to the acquired image data after receiving the image data transmitted by the MEMC module. Step 103, the electronic device controls the image sensor to focus based on the first focusing position based on the first focusing value, the second focusing value and the third focusing value.
Optionally, in an embodiment of the present application, controlling the image sensor to focus based on the first focusing position may be understood as: the camera is pushed to the first focusing position by a motor in the camera module of the electronic equipment to focus.
Optionally, in the embodiment of the present application, each focusing value corresponds to one focusing position; illustratively, in connection with fig. 1, as shown in fig. 4, the above-described step 103 may be implemented specifically by the following steps 103a and 103 b.
Step 103a, the electronic device determines the focusing position corresponding to the third focusing value as the first focusing position when the first focusing value and the second focusing value are smaller than the third focusing value.
For example, as shown in fig. 5, assume that a first focus value corresponds to focus position 1, a second focus value corresponds to focus position 2, a third focus value corresponds to focus position 3, and both the first focus value and the second focus value are smaller than the third focus value; the electronic device may determine the focus position 3 of the third focus value as the above-mentioned first focus position.
Optionally, in an embodiment of the present application, the image processing chip may further include an MCU; illustratively, as shown in fig. 6, the MEMC module may transmit the first focus value, the second focus value, and the third focus value to the MCU after calculating the first focus value, the second focus value, and the third focus value; after the MCU receives the first focus value, the second focus value and the third focus value, the first focus value, the second focus value and the third focus value may be aligned in pairs, and the focus position corresponding to the third focus value may be determined as the first focus position when the first focus value and the second focus value are smaller than the third focus value.
Step 103b, the electronic device controls the image sensor to focus at the first focusing position.
In the embodiment of the application, the electronic device can determine the focusing position corresponding to the third focusing value as the first focusing position and focus at the first focusing position under the condition that the first focusing value and the second focusing value are smaller than the third focusing value, so that the electronic device can focus at the focusing position corresponding to the peak value of the focusing value, and the peak value of the focusing value is usually the optimal focusing position, thereby improving the precision of automatic focusing.
Optionally, in the embodiment of the present application, after the second image is acquired, the electronic device may first perform phase focusing once, and control the motor to push the camera to a focusing position obtained by the phase focusing, and then control the motor to push the camera to the first focusing position from a focusing position obtained by the phase focusing to perform focusing after determining the first focusing position, so that the camera may quickly reach the first focusing position
However, the focus position obtained by the phase focusing is not necessarily the most accurate focus position, and, for example, as shown in fig. 7, the electronic device is basically in the depth of field range of imaging after controlling the motor to push the camera to the focus position obtained by the phase focusing, but the focus position may have the focus in front or may have the focus in back, that is, the focus position not necessarily in the focus state. Therefore, the electronic device can perform contrast focusing once again after acquiring the image data of the first image, and control the motor to push the camera to the focusing position obtained by the contrast focusing, so that the focusing position of the camera is closer to the finally determined first focusing position, and the focusing speed can be improved.
In the focusing method provided by the embodiment of the application, because the electronic equipment can be based on the respective focusing values of the two images which are continuously collected and the focusing values of the images generated based on the two images during automatic focusing, phase focusing and repeated contrast focusing are not needed, the calculation of the electronic equipment can be reduced, and thus the power consumption of the electronic equipment during automatic focusing can be reduced.
Optionally, in the embodiment of the present application, as shown in fig. 8 in conjunction with fig. 1, after the step 103, the focusing method provided in the embodiment of the present application may further include the following steps 104 to 106.
Step 104, the electronic device detects the first ROI in the fourth image displayed in the photographing interface, and determines a first display area of the first ROI in the photographing interface.
Alternatively, in the embodiment of the present application, the shooting interface may be a shooting interface for the first object, or may be a shooting preview interface for the first object.
Optionally, in the embodiment of the present application, the electronic device may detect the first ROI by using an AF control algorithm through an AF algorithm module in the main control chip.
Alternatively, in the embodiment of the present application, the fourth image may be the second image, or may be an image acquired recently (or at the current time of the system) after the second image is acquired.
Optionally, in the embodiment of the present application, if the fourth image is an image acquired for the first object, the first ROI may be a region of the image of the first object in the fourth image.
Alternatively, in the embodiment of the present application, the first ROI may have any shape such as a rectangle, a circle, or an ellipse.
Illustratively, as shown in fig. 9, the electronic device displays a currently acquired image 90 (i.e., the fourth image) in an image acquisition interface (i.e., the shooting interface) of the walking small person (i.e., the first object); the electronic device detects the ROI-1 (i.e., the first ROI) in the image 90 using the AF control algorithm, the coordinates of the corner 91 of the ROI-1 are (x 1, y 1), so that the electronic device can determine the display area (i.e., the first display area) of the ROI-1 in the image acquisition interface according to the coordinates of the corner 91. It can be seen that the ROI-1 is the region of the image 92 of the walking small person in the image 90.
Step 105, the electronic device determines, through the image processing chip, a second display area in the shooting interface based on the image data of the fourth image.
The second display area is a display area of a second ROI corresponding to the first ROI in a determined fifth image, and the fifth image is a first image acquired after the fourth image is acquired.
Illustratively, as shown in fig. 10, after the electronic device detects the first ROI through the AF algorithm module, the AF algorithm module may transmit data of the first ROI to the MEMC module in the image processing chip through a secure digital input output (Secure Digital Input and Output, SDIO) interface; the image sensor can input image data of an image acquired in real time to the image processing chip through a mobile industry processor (Mobile Industry Processor Interface, MIPI) interface result, and the image data is transmitted to the MEMC module after preliminary image signal processing (Image Signal Processing, ISP); therefore, the MEMC module can perform dynamic frame insertion processing after acquiring the image data of the first ROI and the fourth image, estimate the motion condition of the main body and determine the second ROI and the second display area.
Alternatively, in the embodiment of the present application, the display size of the second ROI and the display size of the first ROI may be the same.
Alternatively, in the embodiment of the present application, the second ROI corresponds to the first ROI may be understood as: the matching degree of the pixel value of the pixel in the second ROI and the pixel value of the pixel in the first ROI is larger than or equal to a preset threshold value.
For example, as shown in FIG. 11, ROI-1 is the first ROI, and ROI-2 is the second ROI; it can be seen that the display sizes of the ROI-1 and the ROI-2 are the same, and the matching degree of the pixel values of the pixels is larger.
And 106, the electronic device controls the image sensor to focus based on the second focusing position based on the focusing information of the second display area.
Illustratively, as shown in fig. 10, the MEMC module may transmit the determined data of the second ROI and the estimated image data of the fifth image to the AF stats module after determining the second ROI and the second display region; after acquiring the data of the second ROI and the estimated image data of the fifth image, the AF stats module may calculate and count the focusing information of the second ROI based on the acquired data, and transmit the focusing information of the second ROI to the AF algorithm module in the main control chip; after the AF algorithm module obtains the focusing information of the second ROI, the AF algorithm module may determine the second focusing position according to the focusing information of the second ROI, calculate the distance that the motor needs to move, and then control the motor to push the camera to the second focusing position for focusing.
In the embodiment of the application, after focusing is performed at the first focusing position, the electronic equipment can determine the motion track of the ROI in the acquired image and focus at the determined second focusing position based on the determined display area of the ROI in the next frame image, so that the focus tracking accuracy can be improved during the focus tracking shooting.
Optionally, in the embodiment of the present application, as shown in fig. 12 in conjunction with fig. 8, after the step 106, the focusing method provided in the embodiment of the present application may further include the following step 107.
Step 107, the electronic device displays the first identifier in the shooting interface based on the position information of the first display area and the position information of the second display area.
Wherein the first identifier is used for keeping the electronic device and the shooting object in the first ROI moving synchronously.
Alternatively, in the embodiment of the present application, the shooting object may be a moving object.
In the embodiment of the application, the first identifier can indicate the moving direction and the moving distance of the electronic device during focus tracking, and the user can move the electronic device according to the indication of the first identifier so as to track focus on the shooting object when shooting the shooting object.
Optionally, in the embodiment of the present application, the first identifier may be any possible identifier, such as an arrow identifier or a text identifier.
For example, taking the first identifier as an arrow identifier, as shown in fig. 13, the direction indicated by the arrow identifier 10 may indicate the moving direction of the electronic device during focus tracking, and the length of the arrow identifier 10 may indicate the moving distance of the electronic device during focus tracking.
Optionally, in an embodiment of the present application, the display parameter of the first identifier may be any display parameter.
Alternatively, in the embodiment of the present application, the display parameters may include a display color, a display transparency, and the like.
Optionally, in the embodiment of the present application, the electronic device may determine a translation amount (including a translation amount in an X-axis direction and a translation amount in a Y-axis direction) between the first display area and the second display area based on the position information of the first display area and the position information of the second display area, determine a display parameter of the first identifier according to the translation amount, and then display the first identifier in the shooting interface with the determined display parameter.
Optionally, in the embodiment of the present application, after displaying the first identifier, the user may move the electronic device according to the indication of the first identifier, so that the electronic device and the movement of the shooting object are kept in synchronization, thereby completing focus tracking, and keeping the main body position of the image of the shooting object in the acquired image unchanged, so as to obtain an image with clear focusing and blurred background motion.
In the embodiment of the application, the electronic equipment can display the first mark for keeping synchronous movement of the electronic equipment and the shooting object in the ROI in the acquired image, so that a user can accurately move the electronic equipment through the first mark, and accurate focus tracking of the electronic equipment is realized.
Optionally, in the embodiment of the present application, when the location of the electronic device changes, the electronic device may update and display the first identifier according to the changed location information.
In the embodiment of the present application, the updated first identifier after display may indicate: and the moving direction and the moving distance of the electronic equipment after the position change are changed during focus tracking.
For example, taking the first identifier as the arrow identifier, after the electronic device displays the arrow identifier, if the position of the electronic device is changed, the electronic device may update and display the arrow identifier, which includes at least one of the following: updating and displaying the indication direction of the arrow mark, and updating and displaying the length of the arrow mark; the arrow mark after updating display can indicate the moving direction and the moving distance of the electronic device after changing the position during tracking.
Optionally, in the embodiment of the present application, the position of the electronic device changes, that is, the electronic device moves in the focus tracking process.
In the embodiment of the application, under the condition that the position of the electronic equipment is changed, the electronic equipment can update and display the first identifier according to the position information of the electronic equipment after the change, so that the first identifier can be updated and displayed in real time according to the real-time position of the electronic equipment, so as to indicate the moving direction and the moving distance required in real time when the electronic equipment is in focus tracking, and further improve the accuracy of the focus tracking of the electronic equipment.
Optionally, in the embodiment of the present application, as shown in fig. 14 in conjunction with fig. 8, before the step 104 and after the step 103, the focusing method provided in the embodiment of the present application may further include the following steps 108 and 109; and the above step 105 may be specifically implemented by the following step 105 a.
Step 108, the electronic device receives a first input of a user to the shooting interface.
In the embodiment of the present application, the first input is used to determine a display area in the shooting interface.
Optionally, in the embodiment of the present application, the first input may be any possible input such as a touch input, a hover input, or a voice input.
For example, taking the first input as a touch input as an example, the first input may be a press input, a three-click input, or a special track input.
Step 109, the electronic device determines a third display area in the shooting interface in response to the first input.
Alternatively, in the embodiment of the present application, the third display area may be a display area of a preset ROI, and a display size of the preset ROI may be the same as a display size of the ROI in the second image.
Illustratively, as shown in fig. 15, when the user needs to determine the third display area in the interface 130 (i.e., the above-described photographing interface), the user may triple click on the position 131 in the interface 130; after receiving the three-click input, the electronic device may determine a preset ROI with the same display size according to the display size of the ROI in the image currently displayed in the interface 130; then, as shown in fig. 16, the electronic device may determine the display area 132 of the preset ROI, that is, the third display area, based on the position 131 of the three-click input, centering on the position 131.
In step 105a, when the first display area and the third display area overlap, the electronic device determines, by using the image processing chip, the second display area based on the image data of the fourth image.
Optionally, in the embodiment of the present application, after the electronic device determines the third display area, the user may move the electronic device and determine the second display area to start focusing when the first display area overlaps with the third display area.
Optionally, in the embodiment of the present application, the overlapping of the first display area and the third display area may be understood as: the contact ratio of the first display area and the third display area is larger than or equal to a contact threshold value.
For example, as shown in fig. 17, it is assumed that the display area 141 is the third display area, the display area 142 is the first display area, and the overlapping threshold is 90%; if the overlap ratio of the display area 141 and the display area 142 is 95%, the display area 141 and the display area 142 may be considered to overlap, so that the electronic device may determine the second display area to turn on the focus tracking mode.
For other descriptions of embodiments of the present application, reference may be made to the related descriptions of the above embodiments, and for the sake of avoiding repetition, the description is omitted here.
In the embodiment of the application, the user can preset a display area and start to execute the focus tracking operation when the preset display area is overlapped with the first display area determined by the electronic equipment, so that the flexibility of triggering the electronic equipment to track focus can be improved.
The foregoing method embodiments, or various possible implementation manners in the foregoing method embodiments, may be executed separately, or may be executed in combination with each other on the premise that no contradiction exists, and may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
According to the focusing method provided by the embodiment of the application, the execution main body can be a focusing device. In the embodiment of the present application, a focusing device is used as an example to execute a focusing method.
As shown in fig. 18, an embodiment of the present application provides a focusing device 180, and the focusing device 180 may include a generating module 181, an acquiring module 182, and a control module 183.
The generating module 181 may be configured to generate the third image based on the first image and the second image continuously acquired by the image sensor in the electronic device. The acquiring module 182 may be configured to acquire a first focus value of the first image, a second focus value of the second image, and a third focus value of the third image. The control module 183 may be configured to control the image sensor to perform focusing based on the first focusing position based on the first focusing value, the second focusing value, and the third focusing value acquired by the acquisition module 182.
In one possible implementation, each focus value corresponds to a focus position. The control module 183 may be specifically configured to determine, when the first focus value and the second focus value are both smaller than the third focus value, a focus position corresponding to the third focus value as the first focus position; and controlling the image sensor to focus at the first focusing position.
In a possible implementation manner, the obtaining module 182 may specifically be configured to obtain, by using an image processing chip, focus information of the first image according to image data of the first image, and obtain the first focus value according to the focus information of the first image; the image processing chip acquires focusing information of the second image according to the image data of the second image, and acquires the second focusing value according to the focusing information of the second image; and acquiring focusing information of the third image according to the image data of the third image by the image processing chip, and acquiring the third focusing value according to the focusing information of the third image. The image processing chip is an IC chip independent of a main control chip in the electronic device, and the focusing information includes: image contrast, image confidence, and image average brightness.
In a possible implementation manner, the generating module 181 may specifically be configured to perform, by using the image processing chip, frame interpolation processing using the image data of the first image and the image data of the second image, to generate the third image; the image processing chip is an IC chip independent of the main control chip in the electronic equipment.
In a possible implementation, the focusing device 180 may further include a processing module. A processing module, configured to detect a first ROI in a fourth image displayed in a photographing interface and determine a first display area of the first ROI in the photographing interface after the control module 183 controls the image sensor to focus based on the first focusing position based on the first focusing value, the second focusing value, and the third focusing value; and determining a second display area in the shooting interface based on the image data of the fourth image through the image processing chip, wherein the second display area is a display area of a second ROI corresponding to the first ROI in a determined fifth image, the fifth image is a first image acquired after the fourth image is acquired, and the image processing chip is an IC chip independent of a main control chip in the electronic equipment. The control module 183 may be further configured to control the image sensor to focus based on the second focusing position based on the focusing information of the second display area.
In a possible implementation, the focusing device 180 may further include a display module. The display module may be configured to display a first mark on the photographing interface, the first mark being used for keeping the electronic device and the photographing object in the first ROI moving synchronously, based on the position information of the first display area and the position information of the second display area, after the control module 183 controls the image sensor to focus based on the second focusing position based on the focusing information of the second display area.
In a possible implementation, the focusing device 180 may further include a receiving module. And a receiving module, configured to detect the first ROI in the fourth image displayed in the photographing interface by the processing module, and determine that the first ROI is in front of the first display area in the photographing interface, and receive a first input from a user to the photographing interface. The processing module may be further configured to determine a third display area in the capture interface in response to the first input received by the receiving module. The processing module is specifically configured to determine, by using the image processing chip, the second display area based on the image data of the fourth image when the first display area overlaps the third display area.
In the focusing device provided by the embodiment of the application, because the focusing device can be based on the respective focusing values of two images which are continuously acquired and the focusing values of the images generated based on the two images during automatic focusing, phase focusing and multiple contrast focusing are not needed, calculation can be reduced, and thus, the power consumption during automatic focusing can be reduced.
The focusing device in the embodiment of the application can be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet appliance (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/Virtual Reality (VR) device, robot, wearable device, ultra-mobile personal computer, UMPC, netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The focusing device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The focusing device provided by the embodiment of the application can realize each process realized by the embodiment of the method, achieves the same technical effect, and is not repeated here for avoiding repetition.
As shown in fig. 19, the embodiment of the present application further provides an electronic device 190, including a processor 191 and a memory 192, where the memory 192 stores a program or an instruction that can be executed on the processor 191, and the program or the instruction implements the steps of the above focusing method embodiment when executed by the processor 191, and can achieve the same technical effects, so that repetition is avoided and redundant description is omitted.
It should be noted that, the electronic device in the embodiment of the present application includes a mobile electronic device and a non-mobile electronic device.
Fig. 20 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
As shown in fig. 20, the electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 20 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
Wherein the processor 1010 may be configured to generate a third image based on the first image and the second image continuously acquired by the image sensor in the electronic device 1000; acquiring a first focusing value of the first image, a second focusing value of the second image and a third focusing value of the third image; and controlling the image sensor to focus based on the first focusing position based on the acquired first focusing value, second focusing value and third focusing value.
In one possible implementation, each focus value corresponds to a focus position. The processor 1010 may be specifically configured to determine, when the first focus value and the second focus value are both smaller than the third focus value, a focus position corresponding to the third focus value as the first focus position; and controlling the image sensor to focus at the first focusing position.
In a possible implementation manner, the processor 1010 may be specifically configured to obtain, by using an image processing chip, focusing information of the first image according to image data of the first image, and obtain the first focus value according to the focusing information of the first image; the image processing chip acquires focusing information of the second image according to the image data of the second image, and acquires the second focusing value according to the focusing information of the second image; and acquiring focusing information of the third image according to the image data of the third image by the image processing chip, and acquiring the third focusing value according to the focusing information of the third image. The image processing chip is an IC chip of the electronic device 1000 independent of the main control chip, and the focusing information includes: image contrast, image confidence, and image average brightness.
In a possible implementation manner, the processor 1010 may be specifically configured to perform, by using the image processing chip, frame interpolation processing using the image data of the first image and the image data of the second image, to generate the third image; the image processing chip is an IC chip independent of the main control chip in the electronic device 1000.
In a possible implementation manner, the processor 1010 may be further configured to detect a first ROI in a fourth image displayed in a shooting interface after controlling the image sensor to focus based on the first focus value, the second focus value, and the third focus value, and determine a first display area of the first ROI in the shooting interface; determining a second display area in the shooting interface based on the image data of the fourth image through the image processing chip, wherein the second display area is a display area of a second ROI corresponding to the first ROI in a determined fifth image, the fifth image is a first image acquired after the fourth image is acquired, and the image processing chip is an IC chip independent of a main control chip in the electronic device 1000; and controlling the image sensor to focus based on the second focusing position based on the focusing information of the second display area.
In a possible implementation manner, the display unit 1006 may be configured to display a first identifier in the shooting interface based on the position information of the first display area and the position information of the second display area, after the processor 1010 controls the image sensor to focus at the second focusing position based on the focusing information of the second display area, where the first identifier is used to indicate that the electronic device 1000 is kept moving synchronously with the photographing object in the first ROI.
In a possible implementation, the user input unit 1007 may be configured to receive a first input from a user to the shooting interface before the processor 1010 detects the first ROI in the fourth image displayed in the shooting interface and determines the first display area of the first ROI in the shooting interface. The processor 1010 may also be configured to determine a third display area in the shooting interface in response to the first input received by the user input unit 1007. The processor 1010 is specifically configured to determine, by the image processing chip, the second display area based on the image data of the fourth image in a case where the first display area overlaps the third display area.
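The overlap check between the detected first display area and the user-selected third display area can be sketched as a standard axis-aligned rectangle intersection test; the `Rect` type and coordinate convention below are assumptions for illustration, not part of the patent:

```python
from typing import NamedTuple

class Rect(NamedTuple):
    """Display area as (top-left x, top-left y, width, height) in pixels."""
    x: int
    y: int
    w: int
    h: int

def areas_overlap(a: Rect, b: Rect) -> bool:
    """True if the two axis-aligned display areas share any pixels.
    Each pair of comparisons rules out separation along one axis."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)
```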
In the electronic device provided by the embodiment of the application, during automatic focusing the electronic device can focus based on the respective focus values of two continuously collected images and the focus value of an image generated from those two images, so that phase focusing and multiple rounds of contrast focusing are not needed. This reduces the amount of calculation performed by the electronic device, and thus the power consumption of the electronic device during automatic focusing.
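The selection among the three focus values summarized above can be sketched as follows. The `FocusCandidate` structure and the fallback branch (covering the case not addressed by claim 2) are illustrative assumptions, not the patent's disclosed implementation:

```python
from dataclasses import dataclass

@dataclass
class FocusCandidate:
    focus_value: float   # sharpness score computed from the focusing information
    focus_position: int  # focus position the image corresponds to

def select_focus_position(first: FocusCandidate,
                          second: FocusCandidate,
                          third: FocusCandidate) -> int:
    """Pick a focusing position from two captured frames and one
    interpolated frame, following the scheme of claims 1-2."""
    # Claim 2: when both captured frames score lower than the interpolated
    # frame, the interpolated frame's position becomes the first focusing
    # position (the sharpness peak lies between the two captures).
    if first.focus_value < third.focus_value and second.focus_value < third.focus_value:
        return third.focus_position
    # Fallback (an assumption, not specified in the claims): use the
    # sharper of the two captured frames.
    if first.focus_value >= second.focus_value:
        return first.focus_position
    return second.focus_position
```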
It should be appreciated that in an embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor primarily handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium, on which a program or instructions are stored; when the program or instructions are executed by a processor, each process of the above focusing method embodiments is implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is configured to run programs or instructions to implement each process of the above focusing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement each process of the above focusing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many variations may be made by those of ordinary skill in the art in light of the present application without departing from the spirit of the present application and the scope of the claims, and all such variations fall within the protection of the present application.

Claims (12)

1. A focusing method, the method comprising:
generating a third image based on the first image and the second image continuously acquired by the image sensor in the electronic device;
acquiring a first focus value of the first image, a second focus value of the second image and a third focus value of the third image;
and controlling the image sensor to focus at a first focusing position based on the first focus value, the second focus value, and the third focus value.
2. The method of claim 1, wherein each focus value corresponds to a focus position;
the controlling the image sensor to focus based on the first focus value, the second focus value, and the third focus value includes:
determining a focusing position corresponding to the third focus value as the first focusing position in a case where both the first focus value and the second focus value are smaller than the third focus value;
and controlling the image sensor to focus at the first focusing position.
3. The method of claim 1, wherein the acquiring the first focus value of the first image, the second focus value of the second image, and the third focus value of the third image comprises:
acquiring focusing information of the first image according to image data of the first image through an image processing chip, and acquiring the first focus value according to the focusing information of the first image;
acquiring focusing information of the second image according to the image data of the second image through the image processing chip, and acquiring the second focusing value according to the focusing information of the second image;
acquiring focusing information of the third image according to the image data of the third image through the image processing chip, and acquiring the third focusing value according to the focusing information of the third image;
the image processing chip is an Integrated Circuit (IC) chip independent of a main control chip in the electronic equipment, and the focusing information comprises: image contrast, image confidence, and image average brightness.
4. The method of claim 1, wherein generating the third image based on the first image and the second image continuously acquired by the image sensor in the electronic device comprises:
performing frame interpolation processing by using the image data of the first image and the image data of the second image through an image processing chip to generate the third image;
wherein the image processing chip is an IC chip independent of the main control chip in the electronic equipment.
5. The method of any of claims 1-4, wherein after the controlling the image sensor to focus based on the first focus value, the second focus value, and the third focus value, the method further comprises:
detecting a first region of interest (ROI) in a fourth image displayed in a shooting interface, and determining a first display region of the first ROI in the shooting interface;
determining a second display area in the shooting interface based on the image data of the fourth image through an image processing chip, wherein the second display area is a display area, in a fifth image, of a second ROI corresponding to the first ROI, the fifth image is the first image acquired after the fourth image, and the image processing chip is an IC chip independent of a main control chip in the electronic equipment;
and controlling the image sensor to focus at a second focusing position based on focusing information of the second display area.
6. The method of claim 5, wherein after the controlling the image sensor to focus at the second focusing position based on the focusing information of the second display area, the method further comprises:
displaying a first identifier in the shooting interface based on position information of the first display area and position information of the second display area, wherein the first identifier is used to indicate that the electronic equipment is kept moving synchronously with a photographing object in the first ROI.
7. The method of claim 5, wherein before the detecting a first ROI in a fourth image displayed in a shooting interface and determining a first display area of the first ROI in the shooting interface, the method further comprises:
receiving a first input of a user to the shooting interface;
determining a third display area in the shooting interface in response to the first input;
the determining, by the image processing chip, a second display area in the photographing interface based on the image data of the fourth image, includes:
and determining, by the image processing chip, the second display area based on the image data of the fourth image, in a case where the first display area coincides with the third display area.
8. A focusing device, which is characterized by comprising a generating module, an acquiring module and a control module;
the generation module is used for generating a third image based on the first image and the second image which are continuously acquired by the image sensor in the electronic equipment;
The acquisition module is used for acquiring a first focusing value of the first image, a second focusing value of the second image and a third focusing value of the third image;
the control module is used for controlling the image sensor to focus based on the first focusing value, the second focusing value and the third focusing value acquired by the acquisition module.
9. The apparatus of claim 8, wherein each focus value corresponds to a focus position;
the control module is specifically configured to determine, as the first focusing position, a focusing position corresponding to the third focusing value when the first focusing value and the second focusing value are both smaller than the third focusing value; and controlling the image sensor to focus at the first focusing position.
10. The apparatus according to claim 8 or 9, further comprising a processing module;
the processing module is configured to detect a first region of interest ROI in a fourth image displayed in a shooting interface after the control module controls the image sensor to focus based on the first focusing position based on the first focusing value, the second focusing value and the third focusing value, and determine a first display region of the first ROI in the shooting interface; determining a second display area in the shooting interface based on the image data of the fourth image through an image processing chip, wherein the second display area is a display area of a second ROI corresponding to the first ROI in a determined fifth image, the fifth image is a first image acquired after the fourth image is acquired, and the image processing chip is an IC chip independent of a main control chip in the electronic equipment;
the control module is further configured to control the image sensor to focus at a second focusing position based on focusing information of the second display area.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the focusing method according to any one of claims 1-7.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the focusing method according to any one of claims 1 to 7.
CN202310943947.5A 2023-07-28 2023-07-28 Focusing method, focusing device, electronic equipment and readable storage medium Pending CN116801101A (en)

Priority Applications (1)

Application Number: CN202310943947.5A; Priority Date / Filing Date: 2023-07-28; Title: Focusing method, focusing device, electronic equipment and readable storage medium

Publications (1)

Publication Number: CN116801101A; Publication Date: 2023-09-22



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination