CN109089047A - Focusing control method and apparatus, storage medium, and electronic device - Google Patents

Focusing control method and apparatus, storage medium, and electronic device Download PDF

Info

Publication number
CN109089047A
Authority
CN
China
Prior art keywords
image
focus
depth data
camera
shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811150676.3A
Other languages
Chinese (zh)
Other versions
CN109089047B (en)
Inventor
陈岩
方攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811150676.3A priority Critical patent/CN109089047B/en
Publication of CN109089047A publication Critical patent/CN109089047A/en
Application granted granted Critical
Publication of CN109089047B publication Critical patent/CN109089047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

This application relates to a focusing control method and apparatus, a storage medium, and an electronic device. Depth data of an image to be shot is acquired by a TOF camera, and the distance from the shooting subject to the TOF camera is obtained from that depth data. An RGB camera is controlled to focus according to the distance from the shooting subject to the TOF camera, and the focused RGB camera finally shoots the image to be shot to obtain a first target image. Because the TOF camera obtains the depth data of the entire image at once by emitting infrared light, it is very fast. Therefore, when an image is shot, the distance from the shooting subject to the TOF camera is obtained very quickly from the depth data acquired by the TOF camera, and the RGB camera is then controlled to focus according to that distance. Finally, the focused RGB camera shoots the image to be shot to obtain the first target image, so the focusing speed during shooting is improved.

Description

Method and device for controlling focusing, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling focusing, a storage medium, and an electronic device.
Background
With the rapid development of intelligent mobile terminals, users increasingly take pictures with mobile terminals that have a camera function. When a mobile terminal is used to take a picture, a focusing operation is generally required; it can be performed automatically by the camera of the mobile terminal or triggered by the user tapping the screen. The traditional focusing method involves a complex processing procedure, and its focusing speed is slow.
Disclosure of Invention
The embodiment of the application provides a method and a device for controlling focusing, a storage medium and electronic equipment, which can improve the focusing speed.
A method of controlling focus, comprising:
acquiring depth data of an image to be shot through a TOF camera;
acquiring the distance from a shooting subject to the TOF camera from the depth data of the image to be shot;
controlling an RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and shooting the image to be shot by the focused RGB camera to obtain a first target image.
An apparatus for controlling focusing, the apparatus comprising:
a depth data acquisition module, which is used for acquiring the depth data of an image to be shot through a TOF camera;
a distance acquisition module, which is used for acquiring the distance from a shooting subject to the TOF camera from the depth data of the image to be shot;
a focusing module, which is used for controlling an RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and the shooting module is used for shooting the image to be shot by the focused RGB camera to obtain a first target image.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of controlling focusing as described above.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor performing the steps of the method of controlling focusing as described above when executing the computer program.
According to the focusing control method and apparatus, the storage medium, and the electronic device, the depth data of the image to be shot is obtained through the TOF camera, and the distance from the shooting subject to the TOF camera is obtained from the depth data of the image to be shot. The RGB camera is controlled to focus according to the distance from the shooting subject to the TOF camera, and the focused RGB camera finally shoots the image to be shot to obtain a first target image. The TOF camera obtains the depth data of the whole image at once by emitting infrared light, which is very fast. Therefore, when an image is shot, the distance from the shooting subject to the TOF camera is obtained very quickly from the depth data of the image to be shot acquired by the TOF camera, and the RGB camera is then controlled to focus according to that distance. Finally, the focused RGB camera shoots the image to be shot to obtain the first target image, so the focusing speed during shooting is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of the internal structure of an electronic device in one embodiment;
FIG. 2 is a flow diagram of a method of controlling focus in one embodiment;
FIG. 3 is a flowchart of a method for obtaining a distance from a subject to a TOF camera from depth data of an image to be photographed in FIG. 2;
FIG. 4 is a flowchart of the method of FIG. 3 for obtaining a first focus from an image to be captured;
FIG. 5 is a flowchart of a method for controlling focusing in another embodiment;
FIG. 6 is a schematic diagram of an apparatus for controlling focusing in one embodiment;
FIG. 7 is a schematic structural diagram of an apparatus for controlling focusing in another embodiment;
FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory is used for storing data, programs, and the like, and stores at least one computer program which can be executed by the processor to implement the method for controlling focusing provided in the embodiments of the present application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disk, or a read-only memory (ROM), and a random access memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the method for controlling focusing provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, etc., for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc.
In one embodiment, as shown in fig. 2, a method for controlling focusing is provided, which is described by taking the method as an example for being applied to the electronic device in fig. 1, and includes:
and step 220, acquiring the depth data of the image to be shot through the TOF camera.
TOF is an abbreviation of Time of Flight. A TOF camera is a time-of-flight camera: it obtains the distance to a target object by continuously emitting light pulses toward the target, receiving the light returned from the object with a sensor, and measuring the round-trip time of flight of the light pulses. Its principle differs from that of a 3D laser sensor, which scans point by point; a TOF camera obtains depth information for the whole image at the same time. The image to be shot here may refer to the preview image presented on the display screen of the electronic device when the camera is turned on.
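As a hedged illustration of the time-of-flight relation described above (the constant and the 20 ns example are generic physics values used for illustration, not figures from this application), the object distance follows from halving the round-trip path of the light pulse:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Light travels to the object and back, so the object distance is half the path."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

print(tof_distance_m(20e-9))  # a ~20 ns round trip corresponds to roughly 3 m
```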
Step 240, acquiring the distance from the shooting subject to the TOF camera from the depth data of the image to be shot.
The shooting subject is generally the object being photographed. In auto-focusing, the camera usually selects the object closest to the lens with the largest contrast as the subject; sometimes the subject found automatically by the camera is not the subject the user wants, or its position does not fit the user's composition, in which case the subject must be selected manually. For example, when the foreground in the image to be shot is a person and the background is blue sky and white clouds, the camera may automatically take the person as the shooting subject. The user may also make the person in the preview image the shooting subject by manually tapping the screen. When the background occupies only a small portion of the image to be shot, the foreground people occupy a large proportion, and there are many of them, it is difficult for the camera to determine the shooting subject automatically; the user can then tap the screen to take one or more people in the preview image as the shooting subject.
After the subject is determined, the distance from the subject to the TOF camera is acquired from the depth data of the image to be captured. Specifically, a first focus is acquired from the image to be captured, the first focus being located on the subject. Depth data corresponding to the first focus is then acquired from the depth data of the image to be captured, and this depth data is taken as the distance from the shooting subject to the TOF camera.
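A minimal sketch of this lookup, assuming the TOF depth data is a per-pixel depth map aligned with the preview image; `depth_map` and `focus_xy` are hypothetical names, not identifiers from this application:

```python
import numpy as np

def subject_distance_m(depth_map: np.ndarray, focus_xy: tuple) -> float:
    """Return the depth value at the first focus as the subject-to-TOF-camera distance."""
    x, y = focus_xy                 # pixel coordinates of the first focus on the subject
    return float(depth_map[y, x])   # depth maps are indexed (row, column)
```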
Step 260, controlling the RGB camera to focus according to the distance from the shooting subject to the TOF camera.
After the distance from the shooting subject to the TOF camera is obtained, motor parameters of the RGB camera are calculated according to that distance, and the motor of the RGB camera is then driven according to the calculated motor parameters to achieve focusing. The number of RGB cameras on the electronic device is not limited.
Step 280, shooting the image to be shot by the focused RGB camera to obtain a first target image.
The RGB camera is controlled to focus according to the depth data, acquired by the TOF camera, from the first focus on the shooting subject to the TOF camera (or to its imaging plane). Specifically, according to this depth data, the calibration data obtained by jointly calibrating the RGB camera and the TOF camera, and the current motor parameters of the RGB camera, the depth data from the first focus to the TOF camera is converted into the number of steps the motor of the RGB camera should move to attempt focusing. The motor of the RGB camera is then moved by the corresponding number of steps, so that focusing is achieved on the first focus. The focused RGB camera then shoots the image to be shot, obtaining a first target image.
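A minimal sketch of this conversion, assuming the joint RGB/TOF calibration is available as a lookup table from subject distance to focus-motor position; the table values and names below are illustrative assumptions, not data from this application:

```python
import numpy as np

# Hypothetical calibration table: subject distance (m) -> focus motor position.
CALIB_DISTANCES_M = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0])
CALIB_MOTOR_POS   = np.array([900, 700, 560, 400, 300, 220, 180])

def focus_motor_steps(subject_distance_m: float, current_motor_pos: int) -> int:
    """Interpolate the target motor position for this distance and return the signed step count."""
    target = int(np.interp(subject_distance_m, CALIB_DISTANCES_M, CALIB_MOTOR_POS))
    return target - current_motor_pos   # sign gives the direction to drive the motor
```

Driving the motor by the returned number of steps then places the RGB lens at the position corresponding to the measured subject distance.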
In the embodiment of the application, compared with a binocular stereo camera or a triangulation system, the TOF camera is small, almost the same size as an ordinary camera, and is well suited to applications that need a light, compact camera. The TOF camera can compute depth information quickly and in real time, at frame rates from several tens of fps up to about 100 fps, whereas a binocular stereo camera needs a complex correlation algorithm and is slower. The depth accuracy of TOF does not change with distance and stays roughly at the centimetre level, which matters for applications involving large-range motion. In addition, the TOF camera can still measure distance in a dim-light environment, which solves the problem of the RGB camera failing to focus in dim light.
The depth data of the image to be shot is acquired through the TOF camera, and the distance from the shooting subject to the TOF camera is obtained from that depth data. The RGB camera is controlled to focus according to the distance from the shooting subject to the TOF camera. Because the TOF camera obtains the depth data of the whole image at once by emitting infrared light, it is very fast. Therefore, when an image is shot, the distance from the shooting subject to the TOF camera is obtained very quickly from the depth data acquired by the TOF camera, and the RGB camera is controlled to focus accordingly. The focusing speed during shooting is thus improved, the focusing accuracy is higher, and accurate focusing is possible in a dim-light environment. Finally, the focused RGB camera shoots the image to be shot to obtain the first target image, whose image quality is greatly improved by the focusing process described above.
In one embodiment, as shown in fig. 3, step 240, obtaining the distance from the subject to the TOF camera from the depth data of the image to be captured includes:
step 242, a first focus is obtained from the image to be photographed, and the first focus is located on the photographic subject.
First, the photographic subject is determined from the image to be photographed, and the first focus is then determined on the photographic subject. Both steps may be performed automatically by the camera, or by the user manually tapping the preview image shown on the screen. The camera generally selects the object closest to the lens with the largest contrast as the subject, and selects a point on the subject as the first focus. For example, when the foreground of the image to be captured is a person and the background is blue sky and white clouds, the camera may automatically take the foreground person, who is closest to the lens and has the largest contrast, as the subject. A point (for example, at the bridge of the nose) is then selected on the face of the foreground person as the first focus.
When the photographic subject and first focus automatically determined by the camera do not meet the user's photographing requirement, the user can determine the photographic subject by manually tapping the preview image shown on the screen, and the first focus is then determined on that subject.
In step 244, the depth data corresponding to the first focus is obtained from the depth data of the image to be photographed.
After the shooting subject and the first focus are determined, the depth data corresponding to the first focus can be acquired from the depth data of the image to be shot. This depth data is obtained by the TOF camera through emitting and receiving infrared light; the acquisition is fast and accurate, and works both in good light and in dim light.
In step 246, the depth data corresponding to the first focus is used as the distance from the subject to the TOF camera.
The distance from the shooting subject to the TOF camera can be understood as the focusing distance. The depth data corresponding to the first focus is taken as the distance from the shooting subject to the TOF camera, and the RGB camera can subsequently be controlled to focus according to this distance.
In the embodiment of the present application, after the shooting subject is determined from the image to be photographed and the first focus is further determined on the shooting subject, the depth data of the first focus is read directly from the depth data of the image to be photographed acquired by the TOF camera. The depth data of the first focus is the distance from the shooting subject to the TOF camera. Because it comes from the depth data acquired by the TOF camera, this distance is highly accurate, which further improves the accuracy of controlling the RGB camera to focus according to it.
In one embodiment, as shown in fig. 4, step 242, obtaining a first focus from an image to be captured, the first focus being located on a subject, includes:
step 242a, acquiring a shooting subject from an image to be shot;
step 242b, determining a plurality of to-be-selected focuses from the shooting subject, wherein the plurality of to-be-selected focuses have different depth data;
Step 242c, performing a shooting preview according to each focus to be selected respectively to obtain a plurality of preview images;
Step 242d, screening the target preview image from the plurality of preview images, acquiring the to-be-selected focus corresponding to the target preview image, and taking that to-be-selected focus as the first focus.
Specifically, after the shooting subject is acquired from the image to be shot according to the method in the above embodiment, a plurality of candidate focuses are determined on the shooting subject, each with different depth data. The depth data of the candidate focuses can be chosen as an arithmetic progression. For example, suppose the image to be shot is determined to contain people as the subject, with several people standing one behind another, so that they have different depth data. Depth values are then screened out of the people's depth data in the order of an arithmetic progression, i.e., the selected depth values form an arithmetic progression. For example, one value (e.g., 3 meters) may be taken from the depth data of the first person closest to the camera, then one value from the depth data of the second person, next farther from the camera (e.g., 3.05 meters, so the common difference of the progression is 0.05 meters; other values may of course be used in other embodiments), and so on, yielding an arithmetic progression. The points corresponding to the depth values in this progression are the candidate focuses.
A shooting preview is then performed for each candidate focus, giving a group of preview images: the RGB camera is focused in turn according to each depth value in the arithmetic progression, and one preview image is captured for each. Because the focus used for each preview image is different, the image quality of each preview image may differ. Image quality here may include sharpness, brightness, chromaticity, and the like, without limitation.
After the group of preview images is obtained, a target preview image is screened out of them, namely the preview image with the best image quality in the group; this screening can be done automatically by a processor in the camera. Once the preview image with the best image quality has been selected, the candidate focus used to shoot that target preview image is obtained and taken as the first focus. Finally, when the image to be shot is formally photographed, the depth data of the first focus is obtained, and this depth data is used as the distance from the shooting subject to the TOF camera. The RGB camera is controlled to focus according to that distance, and the focused RGB camera shoots the image to be shot to obtain the first target image.
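A hedged sketch of steps 242a to 242d, assuming `capture_preview` stands in for camera-side code (not named in this application) that captures a grayscale preview focused at a given depth, and using a Laplacian-variance score as one possible sharpness measure; the 0.05 m spacing mirrors the example above:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response as a crude sharpness score."""
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return float(lap.var())

def pick_first_focus(nearest_depth_m: float, capture_preview, count: int = 5,
                     step_m: float = 0.05) -> float:
    """Build candidate depths as an arithmetic progression, preview each, keep the sharpest."""
    candidates = [nearest_depth_m + i * step_m for i in range(count)]
    previews = [(d, capture_preview(d)) for d in candidates]        # one preview per candidate
    best_depth, _ = max(previews, key=lambda dp: sharpness(dp[1]))  # the target preview image
    return best_depth                                               # depth of the first focus
```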
In the embodiment of the application, when the first focus is determined from the image to be shot, a plurality of candidate focuses with different depth data are determined on the shooting subject. A group of preview images is thus shot, one per candidate focus, each with a different focus depth. Screening the image with the best image quality from these preview images makes the comparison more reliable, improves the accuracy of the first focus that is finally chosen, and therefore improves the image quality of the first target image shot according to the first focus. Moreover, the depth data is obtained by the TOF camera, which acquires depth data for the whole image at once by emitting infrared light and is therefore very fast. The focusing speed is thus improved while the image quality of the first target image is guaranteed.
In one embodiment, as shown in fig. 5, after step 280 of shooting the image to be shot by the focused RGB camera to obtain a first target image, the method includes:
step 510, determining whether the image quality of the first target image meets a preset standard.
The preset standard may be stored in the electronic device in advance, for example, that the sharpness of the image reaches a certain standard, or that the color and brightness of the image reach a certain standard.
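The form of this standard is not fixed here; a minimal sketch, assuming it is expressed as simple brightness and contrast thresholds (the threshold values are placeholders, not figures from this application), could be:

```python
import numpy as np

def meets_preset_standard(gray: np.ndarray,
                          min_mean_brightness: float = 60.0,
                          min_contrast: float = 20.0) -> bool:
    """Illustrative quality gate: average brightness and global contrast thresholds."""
    g = gray.astype(np.float64)
    return g.mean() >= min_mean_brightness and g.std() >= min_contrast
```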
Step 520, if not, acquiring from the image to be shot a point within a preset range centered on the first focus as a second focus, the second focus being located on the shooting subject.
After the first focus is preliminarily determined and the RGB camera has been focused according to the acquired distance from the shooting subject to the TOF camera, it is judged whether the image quality of the captured first target image reaches the preset standard; if it does, there is no need to determine a second focus.
If the preset standard is not reached, a second focus needs to be determined, and it is still located on the shooting subject. A point within a preset range centered on the first focus is acquired from the image to be shot as the second focus. For example, a certain number of pixels may be moved left, right, up, or down within the preset range centered on the first focus, and the point on the shooting subject corresponding to the resulting pixel is taken as the second focus. Which direction to move, and by how many pixels, can be decided according to the image quality of the first target image. For example, when the sharpness of the first target image is below the preset standard, it indicates that the depth data of the determined first focus is larger than the distance that would give a sharp shot, so when the second focus is re-determined, a point whose depth data is slightly smaller than that of the first focus may be chosen near the first focus. Assuming the depth data of the first focus is 4 meters and the interval is 0.05 meters, a point near the first focus with depth data of 3.95 meters is selected as the second focus. This interval may of course be set to other reasonable values.
When the sharpness of the first target image meets the preset standard but its brightness is below the standard, the depth data of the first focus is considered acceptable, but the point at the first focus is relatively bright, which makes the captured image as a whole darker. To bring the image brightness up to the preset standard, a point near the first focus with the same depth data but lower brightness is selected as the second focus, so that the brightness of the image shot according to the second focus is improved.
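A hedged sketch of re-selecting the focus as described in the two paragraphs above, assuming a small pixel window around the first focus is searched; the window size and depth tolerance are assumptions, and only the 0.05 m step comes from the text:

```python
import numpy as np

def pick_second_focus(depth_map: np.ndarray, gray: np.ndarray, first_xy: tuple,
                      low_sharpness: bool, low_brightness: bool,
                      window: int = 15, depth_step_m: float = 0.05,
                      same_depth_tol_m: float = 0.02) -> tuple:
    """Return (second_focus_xy, its depth), searching near the first focus."""
    x0, y0 = first_xy
    d0 = float(depth_map[y0, x0])
    h, w = depth_map.shape
    ys = slice(max(0, y0 - window), min(h, y0 + window + 1))
    xs = slice(max(0, x0 - window), min(w, x0 + window + 1))
    d = depth_map[ys, xs].astype(np.float64)
    g = gray[ys, xs].astype(np.float64)

    if low_sharpness:
        # Prefer a point slightly nearer than the first focus (depth about d0 - 0.05 m).
        cost = np.abs(d - (d0 - depth_step_m))
    elif low_brightness:
        # Prefer a darker point at (approximately) the same depth as the first focus.
        cost = np.where(np.abs(d - d0) <= same_depth_tol_m, g, np.inf)
    else:
        return first_xy, d0

    iy, ix = np.unravel_index(np.argmin(cost), cost.shape)
    return (xs.start + ix, ys.start + iy), float(d[iy, ix])
```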
Step 530, obtaining depth data corresponding to the second focus from the depth data of the image to be shot.
Step 540, taking the depth data corresponding to the second focus as the distance from the shooting subject to the TOF camera.
The depth data corresponding to the second focus is acquired from the depth data of the image to be shot according to the position of the second focus in the image, and is taken as the distance from the shooting subject to the TOF camera, i.e., as the focusing distance.
Step 550, controlling the RGB camera to focus according to the distance from the shooting subject to the TOF camera.
Step 560, shooting the image to be shot by the focused RGB camera to obtain a second target image.
According to the depth data corresponding to the second focus, the calibration data obtained by jointly calibrating the RGB camera and the TOF camera, and the current motor parameters of the RGB camera, the depth data from the second focus to the TOF camera is converted into the number of steps the motor of the RGB camera should move to attempt focusing. The motor is then moved by that number of steps, achieving focusing on the second focus. The refocused RGB camera then shoots the image to be shot, obtaining a second target image.
In the embodiment of the application, the depth data of the first focus, roughly selected on the shooting subject the first time, is read from the depth data acquired by the TOF camera. The RGB camera is focused according to the depth data of the first focus, and the focused RGB camera captures the first target image. If the image quality of the first target image is judged not to meet the preset standard, the focusing is fine-tuned: a certain number of pixels may be moved left, right, up, or down within a preset range centered on the first focus, and the corresponding point on the shooting subject is taken as the second focus. The RGB camera is focused according to the depth data of the second focus, and the focused RGB camera captures a second target image, whose quality is again checked against the preset standard. This is repeated until the resulting image quality meets the standard. Trying several focus positions avoids relying on a single, possibly inaccurate, first focus; even though the depth data of any given focus can be read accurately from the TOF camera, a poorly chosen focus position would limit the quality of the captured image at the source.
In one embodiment, controlling the RGB camera to focus according to the distance from the subject to the TOF camera includes:
calculating motor parameters of the RGB camera according to the distance from the shooting subject to the TOF camera;
and controlling the motor of the RGB camera to move according to the calculated motor parameters of the RGB camera so as to realize focusing.
In the embodiment of the application, according to the distance from the shooting subject to the TOF camera, the calibration data obtained by jointly calibrating the RGB camera and the TOF camera, and the current motor parameters of the RGB camera, the depth data from the first focus on the shooting subject to the TOF camera is converted into the number of steps the motor should move to attempt focusing. The motor of the RGB camera is then moved by that number of steps, so that focusing is achieved on the first focus. Based on the TOF depth data, the motor of the RGB camera can thus be moved quickly and accurately to achieve focusing.
In one embodiment, acquiring depth data of an image to be shot through a TOF camera comprises:
multi-frame TOF data of different phases of an image to be shot are respectively obtained through the TOF camera, and each phase corresponds to one frame of TOF data;
and synthesizing the multi-frame TOF data to obtain the depth data of the image to be shot.
In the embodiment of the present application, one frame of depth data of the image to be shot is generally synthesized from TOF data corresponding to 4 or 8 phases, one frame of TOF data per phase. When 4 phases are used, the TOF camera acquires one frame of TOF data for each of the 4 phases, giving 4 frames of TOF data, which are synthesized to obtain the depth data of the image to be shot. Likewise, when 8 phases are used, the TOF camera acquires one frame of TOF data for each of the 8 phases, and the 8 frames are synthesized to obtain the depth data of the image to be shot.
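As a hedged illustration of the 4-phase case, assuming a continuous-wave TOF sensor sampled at 0°, 90°, 180° and 270° and an assumed 20 MHz modulation frequency (neither the phase convention nor the frequency comes from this application), the four frames could be combined per pixel as follows:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # assumed modulation frequency, Hz

def depth_from_four_phases(q0: np.ndarray, q90: np.ndarray,
                           q180: np.ndarray, q270: np.ndarray) -> np.ndarray:
    """Combine four per-pixel phase samples into one depth frame."""
    phase = np.arctan2(q90 - q270, q0 - q180)   # wrapped phase difference
    phase = np.mod(phase, 2 * np.pi)            # map to [0, 2*pi)
    return C * phase / (4 * np.pi * F_MOD)      # one-way distance per pixel, in metres
```

At 20 MHz the unambiguous range of this scheme is c / (2 * F_MOD), about 7.5 m, which is one reason the modulation frequency is a design choice rather than a fixed value.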
In one embodiment, after acquiring the depth data of the image to be shot through the TOF camera, the method includes:
and carrying out filtering and denoising operations on the depth data of the image to be shot.
In the embodiment of the application, the depth data of the image to be shot can be filtered and denoised using, for example, a Poisson-equation-based filter, a Gaussian filter, or a bilateral filter, so that the depth data obtained after filtering and denoising is more accurate.
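A minimal sketch of this step using OpenCV's Gaussian and bilateral filters; the kernel sizes and sigma values are placeholders, not parameters from this application:

```python
import cv2
import numpy as np

def denoise_depth(depth_map: np.ndarray) -> np.ndarray:
    """Smooth the TOF depth map while trying to preserve depth edges."""
    dm = depth_map.astype(np.float32)
    dm = cv2.GaussianBlur(dm, (5, 5), sigmaX=1.0)                      # suppress high-frequency noise
    dm = cv2.bilateralFilter(dm, d=5, sigmaColor=0.1, sigmaSpace=5.0)  # edge-preserving smoothing
    return dm
```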
In one embodiment, as shown in fig. 6, there is provided an apparatus 600 for controlling focusing, including: the device comprises a depth data acquisition module 610, a distance acquisition module 620, a focusing module 630 and a shooting module 640 of the image to be shot. Wherein,
a depth data acquisition module 610 of the image to be shot, configured to acquire depth data of the image to be shot through a TOF camera;
a distance obtaining module 620, configured to obtain a distance from a shooting subject to the TOF camera from depth data of an image to be shot;
the focusing module 630 is configured to control the RGB camera to perform focusing according to a distance from the shooting subject to the TOF camera;
and the shooting module 640 is configured to shoot an image to be shot by the focused RGB camera to obtain a first target image.
In one embodiment, the distance obtaining module 620 is further configured to obtain a first focus from the image to be captured, where the first focus is located on the subject; acquiring depth data corresponding to a first focus from depth data of an image to be shot; and taking the depth data corresponding to the first focus as the distance from the shooting subject to the TOF camera.
In one embodiment, the distance obtaining module 620 is further configured to obtain a subject from an image to be captured; determining a plurality of to-be-selected focuses from a shooting subject, wherein the plurality of to-be-selected focuses have different depth data; shooting and previewing according to each focus to be selected respectively to obtain a plurality of preview images; and screening a target preview image from the plurality of preview images, acquiring a to-be-selected focus corresponding to the target preview image, and taking the to-be-selected focus as a first focus.
In one embodiment, as shown in fig. 7, the apparatus 600 for controlling focusing further includes:
the judging module 650 is configured to judge whether the image quality of the first target image meets a preset standard;
a second focus obtaining module 660, configured to, if the preset standard is not met, acquire, from the image to be shot, a point within a preset range centered on the first focus as a second focus, the second focus being located on the shooting subject;
the refocusing module 670 is configured to acquire depth data corresponding to the second focus from the depth data of the image to be photographed, take the depth data corresponding to the second focus as the distance from the shooting subject to the TOF camera, and control the RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and a re-shooting module 680, configured to shoot the image to be shot by the focused RGB camera to obtain a second target image.
In one embodiment, the focusing module 630 is further configured to calculate motor parameters of the RGB camera according to a distance from the subject to the TOF camera; and controlling the motor of the RGB camera to move according to the calculated motor parameters of the RGB camera so as to realize focusing.
In an embodiment, the depth data acquiring module 610 of the image to be captured is further configured to acquire, through a TOF camera, multiple frames of TOF data of different phases of the image to be captured, where each phase corresponds to one frame of TOF data; and synthesizing the multi-frame TOF data to obtain the depth data of the image to be shot.
In one embodiment, the depth data acquiring module 610 of the image to be captured is further configured to perform filtering and denoising operations on the depth data of the image to be captured.
The division of each module in the focusing control apparatus is only used for illustration, and in other embodiments, the focusing control apparatus may be divided into different modules as needed to complete all or part of the functions of the focusing control apparatus.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the method for controlling focusing provided by the above embodiments.
In one embodiment, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method for controlling focusing provided by the above embodiments are implemented.
Embodiments of the present application further provide a computer program product, which when run on a computer, causes the computer to execute the steps of the method for controlling focusing provided in the foregoing embodiments.
The embodiment of the application also provides an electronic device. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like; a mobile phone is taken as an example. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes a first ISP processor 830, a second ISP processor 840 and a control logic 850. The first camera 810 includes one or more first lenses 812 and a first image sensor 814. The first image sensor 814 may include a color filter array (e.g., a Bayer filter), and the first image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 814 and provide a set of image data that may be processed by the first ISP processor 830. The second camera 820 includes one or more second lenses 822 and a second image sensor 824. The second image sensor 824 may include a color filter array (e.g., a Bayer filter), and the second image sensor 824 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 824 and provide a set of image data that may be processed by the second ISP processor 840.
The first image acquired by the first camera 810 is transmitted to the first ISP processor 830 for processing, after the first ISP processor 830 processes the first image, the statistical data (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) of the first image may be sent to the control logic 850, and the control logic 850 may determine the control parameter of the first camera 810 according to the statistical data, so that the first camera 810 may perform operations such as auto focus, auto exposure, etc. according to the control parameter. The first image may be stored in the image memory 860 after being processed by the first ISP processor 830, and the first ISP processor 830 may also read the image stored in the image memory 860 to process the image. In addition, the first image may be directly transmitted to the display 870 for display after being processed by the ISP processor 830, or the display 870 may read and display the image in the image memory 860.
Wherein the first ISP processor 830 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth calculation accuracy.
The image Memory 860 may be part of a Memory device, a storage device, or a separate dedicated Memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 814, the first ISP processor 830 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 860 for additional processing before being displayed. The first ISP processor 830 receives the processed data from the image memory 860 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 830 may be output to the display 870 for viewing by a user and/or further processed by a graphics processing unit (GPU). Further, the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 870 may read image data from the image memory 860. In one embodiment, the image memory 860 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 830 may be sent to the control logic 850. For example, the statistical data may include first image sensor 814 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, shading correction for first lens 812, and the like. Control logic 850 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 810 and control parameters for first ISP processor 830 based on the received statistical data. For example, the control parameters of the first camera 810 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 812 shading correction parameters.
Similarly, a second image acquired by the second camera 820 is transmitted to the second ISP processor 840 for processing. After the second ISP processor 840 processes the second image, the statistical data of the second image (such as the brightness, contrast value, and color of the image) may be sent to the control logic 850, and the control logic 850 may determine the control parameters of the second camera 820 according to the statistical data, so that the second camera 820 can perform operations such as auto-focus and auto-exposure according to those control parameters. The second image may be stored in the image memory 860 after being processed by the second ISP processor 840, and the second ISP processor 840 may also read an image stored in the image memory 860 for processing. In addition, the second image may be transmitted directly to the display 870 for display after being processed by the second ISP processor 840, or the display 870 may read and display the image from the image memory 860. The second camera 820 and the second ISP processor 840 may also implement the processing described for the first camera 810 and the first ISP processor 830.
The image processing technique of fig. 8 can be used to carry out the steps of the method for controlling focusing provided by the above embodiments.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of controlling focus, comprising:
acquiring depth data of an image to be shot through a TOF camera;
acquiring the distance from a shooting subject to the TOF camera from the depth data of the image to be shot;
controlling an RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and shooting the image to be shot by the focused RGB camera to obtain a first target image.
2. The method according to claim 1, wherein the obtaining of the distance from the subject to the TOF camera from the depth data of the image to be captured comprises:
acquiring a first focus from the image to be shot, wherein the first focus is positioned on a shooting subject;
acquiring depth data corresponding to the first focus from the depth data of the image to be shot;
and taking the depth data corresponding to the first focus as the distance from the shooting subject to the TOF camera.
3. The method according to claim 2, wherein the obtaining a first focus from the image to be captured, the first focus being located on a subject, comprises:
acquiring a shooting subject from the image to be shot;
determining a plurality of to-be-selected focuses from the shooting subject, wherein the plurality of to-be-selected focuses have different depth data;
shooting and previewing according to each focus to be selected respectively to obtain a plurality of preview images;
and screening a target preview image from the plurality of preview images, acquiring a to-be-selected focus corresponding to the target preview image, and taking the to-be-selected focus as a first focus.
4. The method according to claim 1, wherein after the image to be captured is captured by the focused RGB camera to obtain a first target image, the method comprises:
judging whether the image quality of the first target image reaches a preset standard or not;
if not, acquiring from the image to be shot a point within a preset range centered on the first focus as a second focus, wherein the second focus is located on the shooting subject;
acquiring depth data corresponding to the second focus from the depth data of the image to be shot;
taking the depth data corresponding to the second focus as the distance from the shooting subject to the TOF camera;
controlling an RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and shooting the image to be shot by the focused RGB camera to obtain a second target image.
5. The method of claim 1, wherein the controlling the RGB camera to focus according to the distance from the subject to the TOF camera comprises:
calculating motor parameters of the RGB camera according to the distance from the shooting subject to the TOF camera;
and controlling the motor of the RGB camera to move according to the calculated motor parameters of the RGB camera so as to realize focusing.
6. The method according to claim 1, wherein the obtaining of depth data of the image to be captured by the TOF camera comprises:
multi-frame TOF data of different phases of an image to be shot are respectively obtained through the TOF camera, and each phase corresponds to one frame of TOF data;
and synthesizing the multi-frame TOF data to obtain depth data of the image to be shot.
7. The method according to claim 1, after said acquiring depth data of the image to be captured by the TOF camera, comprising:
and carrying out filtering and denoising operations on the depth data of the image to be shot.
8. An apparatus for controlling focusing, the apparatus comprising:
a depth data acquisition module, which is used for acquiring the depth data of an image to be shot through a TOF camera;
the distance acquisition module is used for acquiring the distance from the shooting subject to the TOF camera from the depth data of the image to be shot;
the focusing module is used for controlling the RGB camera to focus according to the distance from the shooting subject to the TOF camera;
and the shooting module is used for shooting the image to be shot by the focused RGB camera to obtain a first target image.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of controlling focusing according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of controlling focus according to any one of claims 1 to 7 when executing the computer program.
CN201811150676.3A 2018-09-29 2018-09-29 Method and device for controlling focusing, storage medium and electronic equipment Active CN109089047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811150676.3A CN109089047B (en) 2018-09-29 2018-09-29 Method and device for controlling focusing, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811150676.3A CN109089047B (en) 2018-09-29 2018-09-29 Method and device for controlling focusing, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN109089047A true CN109089047A (en) 2018-12-25
CN109089047B CN109089047B (en) 2021-01-12

Family

ID=64842971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811150676.3A Active CN109089047B (en) 2018-09-29 2018-09-29 Method and device for controlling focusing, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN109089047B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109831609A (en) * 2019-03-05 2019-05-31 上海炬佑智能科技有限公司 TOF depth camera and its Atomatic focusing method
CN109905599A (en) * 2019-03-18 2019-06-18 信利光电股份有限公司 A kind of human eye focusing method, device and readable storage medium storing program for executing
CN110095078A (en) * 2019-05-07 2019-08-06 歌尔股份有限公司 Imaging method, equipment and computer readable storage medium based on TOF system
CN110708463A (en) * 2019-10-09 2020-01-17 Oppo广东移动通信有限公司 Focusing method, focusing device, storage medium and electronic equipment
CN110784653A (en) * 2019-11-20 2020-02-11 香港光云科技有限公司 Dynamic focusing method based on flight time and camera device thereof
CN111402314A (en) * 2019-12-30 2020-07-10 香港光云科技有限公司 Material attribute parameter obtaining method and device
CN111526282A (en) * 2020-03-26 2020-08-11 香港光云科技有限公司 Method and device for shooting with adjustable depth of field based on flight time
CN112154650A (en) * 2019-08-13 2020-12-29 深圳市大疆创新科技有限公司 Focusing control method and device for shooting device and unmanned aerial vehicle
CN112752026A (en) * 2020-12-31 2021-05-04 深圳市汇顶科技股份有限公司 Automatic focusing method, automatic focusing device, electronic equipment and computer readable storage medium
CN112770100A (en) * 2020-12-31 2021-05-07 南昌欧菲光电技术有限公司 Image acquisition method, photographic device and computer readable storage medium
CN112991439A (en) * 2019-12-02 2021-06-18 宇龙计算机通信科技(深圳)有限公司 Method, apparatus, electronic device, and medium for positioning target object
US11095902B2 (en) 2019-06-28 2021-08-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for image coding, electronic device and computer-readable storage medium
US11178324B2 (en) 2019-06-28 2021-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and device, electronic device and computer-readable storage medium
CN114125434A (en) * 2021-11-26 2022-03-01 重庆盛泰光电有限公司 3D correcting unit of TOF camera
CN114244999A (en) * 2020-09-09 2022-03-25 北京小米移动软件有限公司 Automatic focusing method and device, camera equipment and storage medium
CN114982214A (en) * 2020-02-07 2022-08-30 Oppo广东移动通信有限公司 Electronic device, method of controlling electronic device, and computer-readable storage medium
US11457138B2 (en) 2019-06-28 2022-09-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for image processing, method for training object detection model
US11538175B2 (en) 2019-09-29 2022-12-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN117115189A (en) * 2023-07-10 2023-11-24 中铁第一勘察设计院集团有限公司 Track 3D geometric form monitoring method and system based on machine vision
US11836903B2 (en) 2019-10-16 2023-12-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Subject recognition method, electronic device, and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412445A (en) * 2016-11-29 2017-02-15 广东欧珀移动通信有限公司 Control method, control device and electronic device
US20170064235A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Epipolar plane single-pulse indirect tof imaging for automotives
CN106998389A (en) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 Control method, control device and the electronic installation of auto composition
CN207218938U (en) * 2017-09-15 2018-04-10 深圳奥比中光科技有限公司 Multi-functional 3D imaging modules and mobile terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170064235A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Epipolar plane single-pulse indirect tof imaging for automotives
CN106412445A (en) * 2016-11-29 2017-02-15 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN106998389A (en) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 Control method, control device and the electronic installation of auto composition
CN207218938U (en) * 2017-09-15 2018-04-10 深圳奥比中光科技有限公司 Multi-functional 3D imaging modules and mobile terminal

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109831609A (en) * 2019-03-05 2019-05-31 上海炬佑智能科技有限公司 TOF depth camera and its Atomatic focusing method
CN109905599A (en) * 2019-03-18 2019-06-18 信利光电股份有限公司 A kind of human eye focusing method, device and readable storage medium storing program for executing
CN110095078A (en) * 2019-05-07 2019-08-06 歌尔股份有限公司 Imaging method, equipment and computer readable storage medium based on TOF system
US11457138B2 (en) 2019-06-28 2022-09-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for image processing, method for training object detection model
US11178324B2 (en) 2019-06-28 2021-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and device, electronic device and computer-readable storage medium
US11095902B2 (en) 2019-06-28 2021-08-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for image coding, electronic device and computer-readable storage medium
CN112154650A (en) * 2019-08-13 2020-12-29 深圳市大疆创新科技有限公司 Focusing control method and device for shooting device and unmanned aerial vehicle
US11538175B2 (en) 2019-09-29 2022-12-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN110708463A (en) * 2019-10-09 2020-01-17 Oppo广东移动通信有限公司 Focusing method, focusing device, storage medium and electronic equipment
US11836903B2 (en) 2019-10-16 2023-12-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Subject recognition method, electronic device, and computer readable storage medium
CN110784653A (en) * 2019-11-20 2020-02-11 香港光云科技有限公司 Dynamic focusing method based on flight time and camera device thereof
CN112991439A (en) * 2019-12-02 2021-06-18 宇龙计算机通信科技(深圳)有限公司 Method, apparatus, electronic device, and medium for positioning target object
CN112991439B (en) * 2019-12-02 2024-04-16 宇龙计算机通信科技(深圳)有限公司 Method, device, electronic equipment and medium for positioning target object
CN111402314A (en) * 2019-12-30 2020-07-10 香港光云科技有限公司 Material attribute parameter obtaining method and device
CN114982214A (en) * 2020-02-07 2022-08-30 Oppo广东移动通信有限公司 Electronic device, method of controlling electronic device, and computer-readable storage medium
CN111526282A (en) * 2020-03-26 2020-08-11 香港光云科技有限公司 Method and device for shooting with adjustable depth of field based on flight time
CN114244999A (en) * 2020-09-09 2022-03-25 北京小米移动软件有限公司 Automatic focusing method and device, camera equipment and storage medium
CN114244999B (en) * 2020-09-09 2023-11-24 北京小米移动软件有限公司 Automatic focusing method, device, image pickup apparatus and storage medium
CN112770100A (en) * 2020-12-31 2021-05-07 南昌欧菲光电技术有限公司 Image acquisition method, photographic device and computer readable storage medium
CN112752026A (en) * 2020-12-31 2021-05-04 深圳市汇顶科技股份有限公司 Automatic focusing method, automatic focusing device, electronic equipment and computer readable storage medium
CN114125434A (en) * 2021-11-26 2022-03-01 重庆盛泰光电有限公司 3D correcting unit of TOF camera
CN117115189A (en) * 2023-07-10 2023-11-24 中铁第一勘察设计院集团有限公司 Track 3D geometric form monitoring method and system based on machine vision

Also Published As

Publication number Publication date
CN109089047B (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN109089047B (en) Method and device for controlling focusing, storage medium and electronic equipment
CN107948519B (en) Image processing method, device and equipment
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
CN110225248B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP3480784B1 (en) Image processing method, and device
CN110213494B (en) Photographing method and device, electronic equipment and computer readable storage medium
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN110166705B (en) High dynamic range HDR image generation method and device, electronic equipment and computer readable storage medium
CN107948617B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN110177212B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112019734B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
CN109951641B (en) Image shooting method and device, electronic equipment and computer readable storage medium
CN110881103B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN113298735A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN110392211B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866552B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109582811B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant