CN117412168A - Focusing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117412168A
CN117412168A (application CN202210779457.1A)
Authority
CN
China
Prior art keywords: focusing, main body, frame, target, lens
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210779457.1A
Other languages
Chinese (zh)
Inventor
韩豪
姬向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202210779457.1A priority Critical patent/CN117412168A/en
Publication of CN117412168A publication Critical patent/CN117412168A/en
Pending legal-status Critical Current


Abstract

The disclosure relates to a focusing method, a focusing apparatus, a device, and a storage medium. The focusing method is applied to a terminal device having a lens and includes the following steps: acquiring depth information of a focusing main body within the exposure period corresponding to each frame of image; acquiring depth difference information of the focusing main body between a target frame and the frame preceding the target frame; and, if the depth difference information is greater than a preset difference value, controlling the lens to move to a first target position within the exposure period according to the depth information of the focusing main body in the target frame, the first target position being a position at which the focusing main body in the target frame is in a sharp state. With this method, the focusing speed is increased, every frame is guaranteed to be focused sharply, real-time focusing is achieved, and the user's shooting experience is improved.

Description

Focusing method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of imaging technologies, and in particular to a focusing method, a focusing apparatus, a device, and a storage medium.
Background
In the related art, auto-focus systems mainly comprise three schemes: contrast focusing (CAF, Contrast Detection Auto Focus), phase focusing (PDAF, Phase Detection Auto Focus), and time-of-flight focusing (TOF).
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a focusing method, apparatus, device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a focusing method applied to a terminal device having a lens, the focusing method including:
acquiring depth information of a focusing main body in an exposure time period corresponding to each frame of image;
acquiring depth difference information between a target frame and the focusing main body in a frame before the target frame;
if the depth difference information is larger than a preset difference value, controlling the lens to move to a first target position according to the depth information of the focusing main body in the target frame in an exposure time period;
the first target position is a position at which the focusing main body in the target frame is in a clear state.
In an exemplary embodiment, the controlling the lens to move to the first target position according to the depth information of the focusing body in the target frame includes:
determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame;
determining a first target position of the lens according to the focusing position of the focusing main body;
and controlling the lens to move to the first target position.
In an exemplary embodiment, the focusing method further includes:
and if the depth difference information is smaller than or equal to the preset difference value, recording the depth information of the focusing main body in the target frame.
In an exemplary embodiment, the focusing method further includes:
after the exposure time period corresponding to the target frame is over, performing image processing on the target frame to obtain phase information;
determining a focal length state of the focusing main body according to the phase information;
and controlling the lens to move to a second target position according to the focal length state of the focusing main body.
In an exemplary embodiment, controlling the lens to move to a second target position according to a focal length state of the focusing body includes:
if the focal length state of the focusing main body is defocused, determining the focusing position of the focusing main body by adopting a phase focusing method and/or a contrast focusing method;
determining a second target position of the lens according to the focusing position of the focusing main body;
and controlling the lens to move to the second target position.
In an exemplary embodiment, the determining the focus position of the focus subject using a phase focus method and/or a contrast focus method includes:
and when the phase focusing method cannot determine the focusing position of the focusing main body, determining the focusing position of the focusing main body by adopting the contrast focusing method.
In an exemplary embodiment, the acquiring depth information of the focusing body includes:
obtaining an image of each frame by adopting a time-of-flight focusing method;
the focus subject is determined in the image using a target detection method and/or a saliency detection method.
In an exemplary embodiment, the focusing method further includes:
and if the image information does not have a subject, determining the middle area of the image as the focusing subject.
According to a second aspect of embodiments of the present disclosure, there is provided a focusing device applied to a terminal apparatus having a lens, the focusing device including:
the first acquisition module is configured to acquire depth information of the focusing main body in an exposure time period corresponding to each frame of image;
a second acquisition module configured to acquire depth difference information between a target frame and the focus subject in a frame preceding the target frame;
the first focusing module is configured to control the lens to move to a first target position according to the depth information of the focusing main body in the target frame in the exposure time period if the depth difference information is larger than a preset difference value;
the first target position is a position at which the focusing main body in the target frame is in a clear state.
In an exemplary embodiment, the focusing device further includes:
and the recording module is configured to record the depth information of the focusing main body in the target frame if the depth difference information is smaller than or equal to the preset difference value.
In an exemplary embodiment, the focusing device further includes:
the processing module is configured to perform image processing on the target frame after the exposure time period corresponding to the target frame is over, so as to obtain phase information;
a determining module configured to determine a focal length state of the focusing body according to the phase information;
and the second focusing module is configured to control the lens to move to a second target position according to the focal length state of the focusing main body.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the focusing method as set forth in any one of the first aspects of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a terminal device, cause the terminal device to perform the focusing method as set forth in any one of the first aspects of embodiments of the present disclosure.
The method has the following beneficial effects: the focusing speed can be improved, each frame can be guaranteed to be focused clearly, the technical effect of real-time focusing is achieved, and the shooting experience of a user is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating a method of focusing according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of focusing according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of focusing according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method of focusing according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a method of focusing according to an exemplary embodiment;
FIG. 6 is a block diagram of a focusing device, according to an exemplary embodiment;
fig. 7 is a block diagram illustrating a terminal device for focusing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
In the related art, when a photographing device performs focusing using any of the three schemes of contrast focusing (CAF), phase focusing (PDAF), and time-of-flight focusing (TOF), the first few frames of a newly focused shooting scene are always blurred. For example, with PDAF, the fastest of the three schemes, after the focusing scene changes or the focused object moves, even in the most ideal case the photographing device can only learn that the current frame is out of focus by computing the phase difference (PD) after the current frame has finished exposure and has been processed by the ISP (Image Signal Processing); that is, the image obtained at that moment is already an out-of-focus, blurred image. The motor is then controlled to move the lens before the next frame or the next few frames, so that the sharpness of those frames is ensured as far as possible, which degrades the user's shooting experience to some extent.
In an exemplary embodiment of the present disclosure, a focusing method is provided, which is applied to a terminal device with a lens, such as a mobile phone with a lens, a tablet, and other electronic devices. Fig. 1 is a flowchart illustrating a focusing method according to an exemplary embodiment, as shown in fig. 1, including the steps of:
step S101, acquiring depth information of a focusing main body in an exposure time period corresponding to each frame of image;
step S102, obtaining depth difference information between a target frame and a focusing main body in a frame before the target frame;
step S103, if the depth difference information is larger than the preset difference value, controlling the lens to move to the first target position according to the depth information of the focusing main body in the target frame in the exposure time period;
the first target position is a position for enabling the focusing main body in the target frame to be in a clear state.
In step S101, the depth information reflects the distance between objects in the shooting scene and the photographing device. During focusing, the distance from each feature point of each object in the scene to the photographing device can be obtained by laser ranging, that is, time-of-flight ranging, thereby determining the depth information of each object in the scene. Also during focusing, the focusing main body in the shooting scene can be detected by a neural-network algorithm; once the focusing main body is determined, its depth information is acquired within the exposure period corresponding to each frame of image.
It should be noted that exposure must take place before each frame of image can be obtained. The focusing method of the disclosure uses the exposure period to acquire the depth information of the focusing main body and, in a subsequent step, judges from that depth information whether the focusing main body is sharp. A sharpness judgment is therefore placed before each frame of image is finally formed, which ensures that the focusing main body in every frame is sufficiently sharp and improves the user's photographing experience.
In step S102, the target frame is the current frame; whether the focusing main body has changed or moved can be determined by comparing the image information of the focusing main body in the current frame with that in the previous frame. Since the depth information of the focusing main body reflects its distance to the photographing device, the depth information changes when the focusing main body moves. Therefore, whether the focusing main body has moved, or the focusing scene has changed, can be determined from the depth difference information between the focusing main body in the target frame and in the frame preceding the target frame. The depth difference information is the absolute value of the difference between the two depth values: denoting the depth information of the focusing main body in the target frame as Depth0 and that in the frame preceding the target frame as Depth1, the depth difference information is DepthDifference = |Depth1 - Depth0|. If the depth difference information is 0, the focusing main body has not moved; if it is not 0, the depth information of the focusing main body has changed, meaning the focusing main body has moved.
In step S103, instability of the photographing device can also cause slight movement of the focusing main body during focusing, so the depth information of the focusing main body differs slightly between the target frame and the frame preceding it; such slight movement does not affect the focusing result. The preset difference value is the change in depth information that just begins to affect the focusing result, and its value is empirical. When the depth difference information is greater than the preset difference value, the focusing scene has changed or the focusing main body has moved; if the lens position were left unchanged, the focusing main body in the target frame would be out of focus and blurred, so the lens must be moved to refocus. When refocusing, within the exposure period of the target frame, the motor drives the lens to a first target position determined from the depth information of the focusing main body in the target frame, the first target position being a position at which the focusing main body in the target frame is in a sharp state.
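The trigger condition described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the threshold value, unit, and function name are assumptions.

```python
# Hypothetical sketch of the depth-difference trigger: refocus within the
# current frame's exposure period only when the subject's depth has changed
# by more than the preset difference value.

TOF_TRIGGER_THRESHOLD = 50  # preset difference value, in millimetres (assumed unit)

def check_refocus(depth_prev_mm: float, depth_curr_mm: float) -> bool:
    """Return True when the focusing main body has moved enough that the
    target frame would be out of focus without a lens adjustment."""
    depth_difference = abs(depth_curr_mm - depth_prev_mm)
    return depth_difference > TOF_TRIGGER_THRESHOLD
```

Small depth changes caused by hand shake fall below the threshold and do not trigger a lens move, matching the tolerance behaviour described above.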
In this exemplary embodiment, the depth information of the focusing main body is acquired within the exposure period corresponding to each frame of image. When the depth difference information is greater than the preset difference value, the lens is controlled, within the exposure period, to move to the first target position according to the depth information of the focusing main body in the target frame, so that the focusing main body in the target frame is in a sharp state; that is, focusing is completed within the target frame itself. This increases the focusing speed, ensures that every frame can be focused sharply, achieves real-time focusing, and improves the user's shooting experience.
In an exemplary embodiment of the present disclosure, a focusing method is provided, which is applied to a terminal device with a lens, such as a mobile phone with a lens, a tablet, and other electronic devices. Fig. 2 is a flowchart illustrating a focusing method according to an exemplary embodiment, as shown in fig. 2, including the steps of:
step S201, acquiring depth information of a focusing main body in an exposure time period corresponding to each frame of image;
step S202, obtaining depth difference information between a target frame and a focusing main body in a frame before the target frame;
step S203, if the depth difference information is larger than the preset difference value, determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame in the exposure time period;
step S204, determining a first target position of the lens according to the focusing position of the focusing main body;
in step S205, the lens is controlled to move to the first target position.
The first target position is a position for enabling the focusing main body in the target frame to be in a clear state.
The steps S201-S202 are the same as the steps S101-S102, and will not be described again here.
In step S203, the preset difference value is recorded as TOFTriggerThreshold, the depth difference information is recorded as DepthDifference, and when DepthDifference > TOFTriggerThreshold, the focusing position of the focusing subject is determined according to the depth information of the focusing subject in the target frame. The method for determining the focusing position may be any method capable of determining the focusing position according to the depth information of the focusing body, and the present disclosure is not limited thereto.
In step S204, the correspondence between the focusing position of the focusing main body and the lens position is stored in the photographing device in advance. According to the focusing position of the focusing main body, the lens position can be looked up from this correspondence; that position is the first target position of the lens.
In step S205, according to the first target position of the lens, the distance that the motor needs to move is determined, and the lens is controlled to move to the first target position by the motor.
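Steps S203-S205 amount to mapping a subject distance to a lens position through a pre-stored correspondence. The sketch below illustrates one plausible form of that lookup, linear interpolation over a calibration table; the table values, units, and function names are assumptions, not taken from the patent.

```python
# Illustrative sketch: map the focusing position (subject distance) to a lens
# position via a pre-stored calibration table, as in steps S203-S205.
import bisect

# (subject distance in mm, lens/motor position code) calibration pairs; assumed values
CALIBRATION = [(100, 900), (300, 700), (1000, 500), (3000, 350), (10000, 300)]

def lens_position_for_distance(distance_mm: float) -> float:
    """Linearly interpolate the stored distance-to-lens-position correspondence."""
    dists = [d for d, _ in CALIBRATION]
    i = bisect.bisect_left(dists, distance_mm)
    if i == 0:
        return CALIBRATION[0][1]   # closer than the nearest calibrated point
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]  # farther than the farthest calibrated point
    (d0, p0), (d1, p1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (distance_mm - d0) / (d1 - d0)
    return p0 + t * (p1 - p0)
```

The motor step count would then be derived from the difference between this target position and the current lens position.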
In an exemplary embodiment of the present disclosure, a focusing method is provided, which is applied to a terminal device with a lens, such as a mobile phone with a lens, a tablet, and other electronic devices. Fig. 3 is a flowchart illustrating a focusing method according to an exemplary embodiment, as shown in fig. 3, including the steps of:
step S301, acquiring depth information of a focusing main body in an exposure time period corresponding to each frame of image;
step S302, obtaining depth difference information between a target frame and a focusing main body in a frame before the target frame;
step S303, if the depth difference information is larger than the preset difference value, determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame in the exposure time period;
step S304, determining a first target position of the lens according to the focusing position of the focusing main body;
step S305, controlling the lens to move to a first target position;
the first target position is a position for enabling the focusing main body in the target frame to be in a clear state.
In step S306, if the depth difference information is less than or equal to the preset difference value, the depth information of the focusing subject in the target frame is recorded.
Steps S301 to S305 are the same as steps S201 to S205, and will not be described here.
In step S306, when the depth difference information is less than or equal to the preset difference value, the focusing scene has not changed and the focusing main body has not moved; the target frame is still in a sharply focused state, and the lens does not need to move. Because the target frame is not necessarily the last frame, whether subsequent frames remain in focus must still be considered; therefore, when the target frame is sharply focused, the depth information of the focusing main body in the target frame is recorded so that defocus in subsequent frames can be judged.
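The record-or-refocus branch of step S306 can be sketched as a small stateful tracker. This is an illustrative assumption about how the recorded depth would be carried between frames; class and attribute names are hypothetical.

```python
# Minimal sketch of the step S306 branch: when the depth change is within the
# preset tolerance, the lens stays put and the target frame's depth is recorded
# as the reference for judging whether the next frame is out of focus.

class DepthTracker:
    def __init__(self, trigger_threshold_mm: float):
        self.threshold = trigger_threshold_mm
        self.last_depth_mm = None  # depth recorded for the previous frame

    def update(self, depth_mm: float) -> bool:
        """Return True if refocusing is needed for this frame; otherwise
        record this frame's depth as the new reference and return False."""
        if self.last_depth_mm is None:
            self.last_depth_mm = depth_mm  # first frame: just record
            return False
        if abs(depth_mm - self.last_depth_mm) > self.threshold:
            return True  # caller refocuses during this frame's exposure
        self.last_depth_mm = depth_mm
        return False
```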
In an exemplary embodiment of the present disclosure, a focusing method is provided, which is applied to a terminal device with a lens, such as a mobile phone with a lens, a tablet, and other electronic devices. Fig. 4 is a flowchart illustrating a focusing method according to an exemplary embodiment, as shown in fig. 4, including the steps of:
step S401, obtaining depth information of a focusing main body in an exposure time period corresponding to each frame of image;
step S402, obtaining depth difference information between a target frame and a focusing main body in a frame before the target frame;
step S403, if the depth difference information is larger than the preset difference value, determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame in the exposure time period;
step S404, determining a first target position of the lens according to the focusing position of the focusing main body;
step S405, controlling the lens to move to a first target position;
the first target position is a position for enabling the focusing main body in the target frame to be in a clear state.
Step S406, if the depth difference information is smaller than or equal to the preset difference value, recording the depth information of the focusing main body in the target frame;
step S407, after the exposure time period corresponding to the target frame is over, performing image processing on the target frame to obtain phase information;
step S408, determining the focal length state of the focusing main body according to the phase information;
step S409, if the focal length state of the focusing main body is out of focus, determining the focusing position of the focusing main body by adopting a phase focusing method and/or a contrast focusing method;
the phase focusing method has higher priority than the contrast focusing method, and when the phase focusing method cannot determine the focusing position of the focusing main body, the contrast focusing method is adopted to determine the focusing position of the focusing main body.
Step S410, determining a second target position of the lens according to the focusing position of the focusing main body;
in step S411, the lens is controlled to move to the second target position.
Steps S401 to S406 are the same as steps S301 to S306, and will not be described here.
In steps S407-S408, phase focusing (PDAF) generally splits the light entering the lens into two parts to form two images and completes focusing according to the difference in focusing position between them (the phase difference); phase focusing is fast and takes little time. Therefore, when the exposure period corresponding to the target frame ends, image processing (ISP processing) is performed on the target frame to obtain the phase information, that is, the phase difference between the two images formed during phase focusing. The focal length state of the focusing main body is then determined from the phase information: when the phase difference is positive the focusing main body is in focus, and when it is negative the focusing main body is out of focus.
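The passage above keys the focus state to the sign of the phase difference. As a hedged illustration, the sketch below instead uses the more common PDAF convention of comparing the magnitude of the phase difference against a tolerance; the tolerance value, units, and function name are assumptions and this is not the patent's stated criterion.

```python
# Sketch of classifying the focal length state from the measured phase
# difference (PD). Common PDAF practice (assumed here): the subject is treated
# as in focus when |PD| is within a small tolerance, out of focus otherwise.

PD_IN_FOCUS_TOLERANCE = 0.5  # assumed units of phase-pixel disparity

def focus_state_from_pd(phase_difference: float) -> str:
    """Classify the focusing main body's state from the phase difference."""
    if abs(phase_difference) <= PD_IN_FOCUS_TOLERANCE:
        return "in_focus"
    return "defocused"
```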
In step S409, contrast focusing (CAF) uses the motor to move the lens in set steps; each time the lens reaches a set position, the contrast value of the current picture at that lens position is acquired. From the information collected at each position, a curve relating lens position to picture contrast is fitted, and the sharpest point, that is, the focusing position, is determined from the curve. Contrast focusing is highly accurate but takes a long time. When the focal length state of the focusing main body is out of focus, a phase focusing method and/or a contrast focusing method is used to determine the focusing position of the focusing main body. Because phase focusing is faster than contrast focusing, the phase focusing method has higher priority; when the phase focusing method cannot determine the focusing position of the focusing main body, the contrast focusing method is used instead.
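The CAF scan described above can be sketched as follows. This is a simplified illustration: `measure_contrast` is a placeholder for the device's sharpness metric, and a real implementation would fit a curve (for example a parabola) around the peak rather than taking the raw maximum.

```python
# Illustrative sketch of the contrast-focus search: step the lens through a
# range of positions, measure picture contrast at each, and take the position
# with the highest contrast as the focusing position.
from typing import Callable, List, Tuple

def contrast_scan(positions: List[float],
                  measure_contrast: Callable[[float], float]) -> float:
    """Sample contrast at each lens position and return the sharpest one."""
    samples: List[Tuple[float, float]] = [(p, measure_contrast(p)) for p in positions]
    best_pos, _ = max(samples, key=lambda s: s[1])
    return best_pos
```

Because every candidate position requires moving the lens and capturing a measurement, this search is slow compared with PDAF, which is why the text gives the phase method priority.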
In steps S410-S411, based on the correspondence between the focusing position of the focusing main body and the lens position stored in advance in the photographing device, the lens position can be looked up from the correspondence according to the focusing position of the focusing main body; that position is the second target position of the lens. The distance the motor must move is determined from the second target position, and the motor drives the lens to the second target position.
In an exemplary embodiment of the present disclosure, a focusing method is provided, which is applied to a terminal device with a lens, such as a mobile phone with a lens, a tablet, and other electronic devices. Fig. 5 is a flowchart illustrating a focusing method according to an exemplary embodiment, as shown in fig. 5, including the steps of:
step S501, obtaining an image of each frame by adopting a time-of-flight focusing method;
step S502, a focusing main body is determined in an image by adopting a target detection method and/or a significance detection method;
if the image information does not have a subject, the middle area of the image is determined as a focus subject.
Step S503, obtaining depth information of a focusing main body in an exposure time period corresponding to each frame of image;
step S504, obtaining depth difference information between a target frame and a focusing main body in a frame before the target frame;
step S505, if the depth difference information is larger than the preset difference value, determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame in the exposure time period;
step S506, determining a first target position of the lens according to the focusing position of the focusing main body;
step S507, controlling the lens to move to a first target position;
the first target position is a position for enabling the focusing main body in the target frame to be in a clear state.
Step S508, if the depth difference information is smaller than or equal to the preset difference value, recording the depth information of the focusing main body in the target frame;
step S509, after the exposure time period corresponding to the target frame is over, performing image processing on the target frame to obtain phase information;
step S510, determining the focal length state of the focusing main body according to the phase information;
step S511, if the focal length of the focusing main body is out of focus, determining the focusing position of the focusing main body by adopting a phase focusing method and/or a contrast focusing method;
the phase focusing method has higher priority than the contrast focusing method, and when the phase focusing method cannot determine the focusing position of the focusing main body, the contrast focusing method is adopted to determine the focusing position of the focusing main body.
Step S512, determining a second target position of the lens according to the focusing position of the focusing main body;
in step S513, the lens is controlled to move to the second target position.
Steps S503 to S513 are the same as steps S401 to S411, and are not described here.
In steps S501-S502, the time-of-flight focusing method (TOF) obtains the distance from objects in the shooting scene to the photographing device using the laser-ranging principle. For example, infrared light is used for ranging: the distance to the photographed object is obtained from the speed of the infrared light and the number of returned photons, and the focusing position is determined from that distance. Each frame obtained with the time-of-flight focusing method is therefore an image containing depth information. After acquiring an image of the shooting scene, an image-processing algorithm, such as a target detection method and/or a saliency detection method, is used to determine the focusing main body in the image. If no target main body exists in the image information, the middle area of the image is determined to be the focusing main body, so that the overall sharpness of the focused image is higher.
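The subject-selection logic of steps S501-S502, including the centre-region fallback, can be sketched as below. The box format, the largest-box heuristic, and the one-third-sized centre region are illustrative assumptions, not details given in the patent.

```python
# Sketch of choosing the focusing main body: take a detected subject box if the
# detection / saliency step found one, otherwise fall back to the middle area
# of the frame so the focused image stays sharp overall.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def choose_focus_subject(detections: List[Box],
                         frame_w: int, frame_h: int) -> Box:
    """Return the largest detected subject box, or the middle third of the
    frame when no subject was detected."""
    if detections:
        return max(detections, key=lambda b: b[2] * b[3])
    # No subject found: use the central region of the image.
    w, h = frame_w // 3, frame_h // 3
    return ((frame_w - w) // 2, (frame_h - h) // 2, w, h)
```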
In an exemplary embodiment of the present disclosure, a focusing device is provided, which is applied to a terminal device having a lens. Fig. 6 is a block diagram of a focusing device according to an exemplary embodiment, as shown in fig. 6, the focusing device includes:
a first obtaining module 601, configured to obtain depth information of a focusing body in an exposure period corresponding to each frame image;
a second acquisition module 602 configured to acquire depth difference information between a target frame and the focus subject in a frame preceding the target frame;
the first focusing module 603 is configured to control the lens to move to a first target position according to the depth information of the focusing main body in the target frame in the exposure time period if the depth difference information is greater than a preset difference value;
the first target position is a position at which the focusing main body in the target frame is in a clear state.
In an exemplary embodiment, the focusing device further includes:
and a recording module 604 configured to record the depth information of the focusing body in the target frame if the depth difference information is less than or equal to the preset difference value.
In an exemplary embodiment, the focusing device further includes:
the processing module 606 is configured to perform image processing on the target frame after the exposure time period corresponding to the target frame is over, so as to obtain phase information;
a determining module 606 configured to determine a focal length state of the focusing body according to the phase information;
a second focusing module 607, configured to control the lens to move to a second target position according to the focal length state of the focusing main body.
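A minimal sketch of the control flow carried out by modules 601-607 follows. The depth units, the preset difference value, and the depth-to-lens-position mapping in the `Lens` stub are invented for illustration and are not part of the disclosure.

```python
class Lens:
    """Toy lens model: the in-focus position is assumed proportional to depth."""
    def __init__(self):
        self.position = 0.0
        self.moves = 0

    def position_for_depth(self, depth_m):
        return depth_m * 10.0  # assumed depth-to-position mapping

    def move_to(self, pos):
        self.position = pos
        self.moves += 1


def focus_step(prev_depth, cur_depth, lens, preset_diff=0.05,
               phase_defocused=None):
    """One per-frame focusing step.

    Modules 601-603: if the focusing main body's depth changed by more
    than the preset difference during exposure, move the lens to the
    first target position. Module 604: otherwise only record the depth.
    Modules 605-607: after exposure, phase information may trigger a
    corrective move to a second target position.
    """
    if abs(cur_depth - prev_depth) > preset_diff:
        lens.move_to(lens.position_for_depth(cur_depth))  # first target position
    if phase_defocused is not None and phase_defocused(cur_depth):
        lens.move_to(lens.position_for_depth(cur_depth))  # second target position
    return cur_depth  # recorded depth, used as prev_depth for the next frame


lens = Lens()
focus_step(1.00, 1.50, lens)      # depth jumped 0.5 m > 0.05 m: lens moves
print(lens.position, lens.moves)  # 15.0 1
focus_step(1.50, 1.52, lens)      # within threshold: depth recorded only
print(lens.moves)                 # 1
```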
The specific manner in which the individual modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the corresponding method and will not be elaborated here.
Fig. 7 is a block diagram illustrating a terminal device 700 for focusing according to an exemplary embodiment.
Referring to fig. 7, a terminal device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the terminal device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation at the terminal device 700. Examples of such data include instructions for any application or method operating on the terminal device 700, contact data, phonebook data, messages, pictures, video, and the like. The memory 704 may be implemented by any type of volatile or nonvolatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply component 706 provides power to the various components of the terminal device 700. The power supply component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device 700.
The multimedia component 708 comprises a screen providing an output interface between the terminal device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal device 700 is in an operation mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a microphone (MIC) configured to receive external audio signals when the terminal device 700 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 714 includes one or more sensors for providing status assessments of various aspects of the terminal device 700. For example, the sensor assembly 714 may detect an on/off state of the terminal device 700 and the relative positioning of components, such as the display and keypad of the terminal device 700; it may also detect a change in position of the terminal device 700 or a component thereof, the presence or absence of user contact with the terminal device 700, the orientation or acceleration/deceleration of the terminal device 700, and a change in temperature of the terminal device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the terminal device 700 and other devices. The terminal device 700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 704, including instructions executable by the processor 720 of the terminal device 700 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer readable storage medium has instructions stored thereon which, when executed by a processor of a terminal device, cause the terminal device to perform a focusing method comprising any of the focusing methods described above.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (13)

1. A focusing method applied to a terminal device having a lens, the focusing method comprising:
acquiring depth information of a focusing main body in an exposure time period corresponding to each frame of image;
acquiring depth difference information between a target frame and the focusing main body in a frame before the target frame;
if the depth difference information is larger than a preset difference value, controlling the lens to move to a first target position according to the depth information of the focusing main body in the target frame in an exposure time period;
the first target position is a position at which the focusing main body in the target frame is in a clear state.
2. The focusing method according to claim 1, wherein controlling the lens to move to the first target position according to the depth information of the focusing body in the target frame comprises:
determining the focusing position of the focusing main body according to the depth information of the focusing main body in the target frame;
determining a first target position of the lens according to the focusing position of the focusing main body;
and controlling the lens to move to the first target position.
3. The focusing method according to claim 1, characterized in that the focusing method further comprises:
and if the depth difference information is smaller than or equal to the preset difference value, recording the depth information of the focusing main body in the target frame.
4. A focusing method according to any one of claims 1 to 3, characterized in that the focusing method further comprises:
after the exposure time period corresponding to the target frame is over, performing image processing on the target frame to obtain phase information;
determining a focal length state of the focusing main body according to the phase information;
and controlling the lens to move to a second target position according to the focal length state of the focusing main body.
5. The focusing method according to claim 4, wherein controlling the lens to move to the second target position according to the focal length state of the focusing body comprises:
if the focal length state of the focusing main body is defocused, determining the focusing position of the focusing main body by adopting a phase focusing method and/or a contrast focusing method;
determining a second target position of the lens according to the focusing position of the focusing main body;
and controlling the lens to move to the second target position.
6. The focusing method according to claim 5, wherein the determining the focus position of the focusing body using the phase focusing method and/or the contrast focusing method includes:
and when the phase focusing method cannot determine the focusing position of the focusing main body, determining the focusing position of the focusing main body by adopting the contrast focusing method.
7. The focusing method according to claim 1, wherein the acquiring depth information of the focusing body includes:
obtaining an image of each frame by adopting a time-of-flight focusing method;
the focus subject is determined in the image using a target detection method and/or a saliency detection method.
8. The focusing method according to claim 7, characterized in that the focusing method further comprises:
and if the image information does not have a subject, determining the middle area of the image as the focusing subject.
9. A focusing device applied to a terminal device having a lens, the focusing device comprising:
the first acquisition module is configured to acquire depth information of the focusing main body in an exposure time period corresponding to each frame of image;
a second acquisition module configured to acquire depth difference information between a target frame and the focus subject in a frame preceding the target frame;
the first focusing module is configured to control the lens to move to a first target position according to the depth information of the focusing main body in the target frame in the exposure time period if the depth difference information is larger than a preset difference value;
the first target position is a position at which the focusing main body in the target frame is in a clear state.
10. The focusing device of claim 9, wherein the focusing device further comprises:
and the recording module is configured to record the depth information of the focusing main body in the target frame if the depth difference information is smaller than or equal to the preset difference value.
11. Focusing device according to claim 9 or 10, characterized in that the focusing device further comprises:
the processing module is configured to perform image processing on the target frame after the exposure time period corresponding to the target frame is over, so as to obtain phase information;
a determining module configured to determine a focal length state of the focusing body according to the phase information;
and the second focusing module is configured to control the lens to move to a second target position according to the focal length state of the focusing main body.
12. A terminal device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the focusing method of any one of claims 1-8.
13. A non-transitory computer readable storage medium having stored thereon instructions which, when executed by a processor of a terminal device, cause the terminal device to perform the focusing method of any one of claims 1-8.
CN202210779457.1A 2022-07-04 2022-07-04 Focusing method, device, equipment and storage medium Pending CN117412168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210779457.1A CN117412168A (en) 2022-07-04 2022-07-04 Focusing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117412168A true CN117412168A (en) 2024-01-16

Family

ID=89491252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210779457.1A Pending CN117412168A (en) 2022-07-04 2022-07-04 Focusing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117412168A (en)

Similar Documents

Publication Publication Date Title
CN110557547B (en) Lens position adjusting method and device
EP3945494A1 (en) Video processing method, apparatus and storage medium
CN111756989A (en) Method and device for controlling focusing of lens
CN108040213B (en) Method and apparatus for photographing image and computer-readable storage medium
CN107241535B (en) Flash lamp adjusting device and terminal equipment
CN114189622B (en) Image shooting method, device, electronic equipment and storage medium
CN114666490B (en) Focusing method, focusing device, electronic equipment and storage medium
CN112702514B (en) Image acquisition method, device, equipment and storage medium
CN114339019B (en) Focusing method, focusing device and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN116418975A (en) Occlusion detection method, device and medium for camera group
CN117412168A (en) Focusing method, device, equipment and storage medium
CN111461950B (en) Image processing method and device
CN107682623B (en) Photographing method and device
CN107707819B (en) Image shooting method, device and storage medium
CN114339017B (en) Distant view focusing method, device and storage medium
CN114339018B (en) Method and device for switching lenses and storage medium
CN111464753B (en) Picture shooting method and device and storage medium
CN115118950B (en) Image processing method and device
CN108471524B (en) Focusing method and device and storage medium
CN116489507A (en) Focusing method, focusing device, electronic equipment and storage medium
CN106713748B (en) Method and device for sending pictures
CN115134517A (en) Shooting control method and device and storage medium
CN116193235A (en) Shooting method, shooting device, electronic equipment and storage medium
CN117546071A (en) Zoom method, zoom device, electronic apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination