WO2021175125A1 - System and method for automatically adjusting focus of a camera - Google Patents


Info

Publication number
WO2021175125A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
interest
unit
preview frame
focus
Prior art date
Application number
PCT/CN2021/077273
Other languages
French (fr)
Inventor
Ravi prakash DIXIT
Sunil Kumar
Vinit kumar SHUKLA
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021175125A1 publication Critical patent/WO2021175125A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/675: Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/52: Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the present invention generally relates to the field of imaging and more particularly to a system and method for automatically adjusting the focus of a rear camera using a front camera.
  • Images can be captured with a number of variations, for example at different angles and positions, in various styles, with different facial expressions, etc.
  • while using the rear camera of an electronic device, if there is more than one object or face at different positions within the Field Of View (FOV) of the rear camera and the user wishes to set a focus for the rear camera, the user is required to manually set the focus on the specific object(s) he or she wishes to capture. Further, if the user wishes to change the focus to another object or face, the user is required to do so using a manual focus control. For instance, the user may be required to touch the area of the rear-camera preview on the display that he or she wishes to focus on; thus, shifting the focus requires manual intervention.
  • FOV: Field Of View
  • the present invention is directed towards providing a system and a method for automatically adjusting focus of a rear camera using a front camera.
  • a first aspect of the present disclosure relates to a method of automatically adjusting focus of a camera.
  • the method begins with receiving, in real-time, a camera preview frame at a first camera unit.
  • an object detection module detects an object of interest in the camera preview frame.
  • the tracking module identifies the current coordinates of the object of interest in the camera preview frame.
  • the tracking module dynamically monitors the camera preview frame to identify variation from the current coordinates of the object of interest.
  • the processing unit determines a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest.
  • the processing unit automatically adjusts the focus of a second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
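  • the sequence of steps above can be sketched, purely illustratively, as a small control loop; the class and method names below are hypothetical and stand in for the first camera unit, tracking module and processing unit, and a real device would drive the second camera's autofocus hardware rather than return a tuple.

```python
class FocusController:
    """Illustrative sketch of the claimed method: track the object of
    interest across first-camera preview frames and shift the second
    camera's focus point when a movement (a coordinate variation) is
    detected. All names here are hypothetical."""

    def __init__(self):
        self.current = None        # last known (x, y) of the object of interest
        self.rear_focus = (0, 0)   # focus point applied to the second camera

    def on_preview_frame(self, detected_xy):
        """Handle one preview frame with the detected object coordinates."""
        if self.current is None:
            self.current = detected_xy   # first observation: nothing to compare
            return self.rear_focus
        dx = detected_xy[0] - self.current[0]
        dy = detected_xy[1] - self.current[1]
        self.current = detected_xy
        if (dx, dy) != (0, 0):           # variation identified -> movement
            self.rear_focus = (self.rear_focus[0] + dx,
                               self.rear_focus[1] + dy)
        return self.rear_focus
```

Each call corresponds to one received preview frame: the first call only records the current coordinates, and later calls shift the second camera's focus by the observed delta.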
  • the system comprises a first camera unit, a second camera unit, a processing unit, an object detection module and a tracking module, all components connected to each other.
  • the first camera unit is configured to receive, in real-time, a camera preview frame.
  • the object detection module is connected to said first camera unit, said object detection module is configured to detect an object of interest in the camera preview frame.
  • the tracking module is connected to said object detection module and the first camera unit, said tracking module is configured to identify the current coordinates of the object of interest in the camera preview frame.
  • the tracking module is further configured to dynamically monitor the camera preview frame to identify variation from the current coordinates of the object of interest.
  • the second camera unit is connected to the object detection module, the first camera unit and the tracking module, said second camera unit is configured to receive, in real-time, a second camera preview frame.
  • the processing unit is connected to the first camera unit, the second camera unit, the tracking module and the object detection module, said processing unit is configured to determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest.
  • the processing unit is further configured to automatically adjust the focus of the second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
  • the user device comprises a first camera unit, a second camera unit, a processing unit, an object detection module and a tracking module, all components connected to each other.
  • the first camera unit is configured to receive, in real-time, a camera preview frame.
  • the object detection module is connected to said first camera unit, said object detection module is configured to detect an object of interest in the camera preview frame.
  • the tracking module is connected to said object detection module and the first camera unit, said tracking module is configured to identify the current coordinates of the object of interest in the camera preview frame.
  • the tracking module is further configured to dynamically monitor the camera preview frame to identify variation from the current coordinates of the object of interest.
  • the second camera unit is connected to the object detection module, the first camera unit and the tracking module, said second camera unit is configured to receive, in real-time, a second camera preview frame.
  • the processing unit is connected to the first camera unit, the second camera unit, the tracking module and the object detection module, said processing unit is configured to determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest.
  • the processing unit is further configured to automatically adjust the focus of the second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
  • FIG. 1 illustrates an architecture of a system for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 2 illustrates a high-level block diagram of system for automatically adjusting focus of a camera, in accordance with exemplary embodiment of the present disclosure.
  • FIG. 3 illustrates an exemplary method flow diagram depicting method for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 4 illustrates exemplary directions of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 5 illustrates exemplary shifting of the focus of a second camera unit in a second camera preview frame based on at least the direction of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
  • circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
  • well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
  • individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
  • a process is terminated when its operations are completed, but could have additional steps not included in a figure.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
  • the word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
  • the disclosed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
  • the present invention provides a method and system for automatically adjusting the focus of a camera.
  • the invention encompasses providing a technical solution for automatically adjusting the focus of a rear camera using a front camera. For instance, if there are more than one objects (or faces) at different positions within a Field Of View (FOV) of the rear camera and the user wishes to set a focus for the rear camera, the focus can be shifted to different objects at different moments by tracking the user’s head position and movement using the front camera in real-time.
  • the front camera may work in the background, and using face detection techniques, the front camera may track the movement of the user and autofocus on the desired area in the direction of the movement of the user to capture snapshots with the rear camera, without the user manually intervening to shift the focus of the rear camera.
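  • the mapping described above can be sketched, purely as an illustration, in a few lines of code. The function name and the x-axis mirroring below are assumptions for the sketch (a front camera faces the opposite direction to the rear camera, so a horizontal head movement would typically be mirrored), not part of the disclosure.

```python
def map_front_to_rear_focus(front_xy, front_size, rear_size, mirror_x=True):
    """Map a face position in the front-camera preview to a focus point in
    the rear-camera preview. Coordinates are pixels from the top-left corner.
    The x-axis mirroring is an assumption for this sketch."""
    fx, fy = front_xy
    fw, fh = front_size
    rw, rh = rear_size
    nx = fx / fw              # normalise to [0, 1] in the front preview
    if mirror_x:
        nx = 1.0 - nx         # front and rear cameras face opposite ways
    ny = fy / fh
    return (round(nx * rw), round(ny * rh))
```

For example, a face at (160, 120) in a 640x480 front preview maps to (1440, 270) in a 1920x1080 rear preview when mirroring is applied.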
  • a “processor” or “processing unit” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • an “input/output (I/O) unit” includes one or more data input devices and one or more data output devices.
  • An I/O unit may process commands and data to control I/O devices.
  • a data input device may comprise input means for inputting character data, for example, a keyboard with a plurality of keys.
  • a data output device may comprise output means for representing characters in response to the inputted data, for example, a display unit for displaying characters in response to the inputted data.
  • the input device and the output device may also be combined into a single unit, for example, a display unit with a virtual keyboard.
  • a “memory unit” or “memory” refers to a machine or computer readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
  • a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic-disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
  • the “camera preview frame” comprises at least one real time preview of an event picked up by the camera sensor unit.
  • camera preview frame may refer to the preview generated by a camera and can be seen on the display of a user device when the user opens a camera application.
  • media refers to images, videos, animations, etc. and any other type of media that can be captured using a camera, as may be obvious to a person skilled in the art.
  • the system [100] comprises a processing unit [104] , a camera unit [102] , a tracking module [108] , an object detection module [106] and a memory unit [110] , all components connected to each other.
  • the camera unit [102] may further comprise a first camera unit [102A] and a second camera unit [102B].
  • the first camera unit [102A] is connected to the processing unit [104] , the second camera unit [102B] , the tracking module [108] , the object detection module [106] and the memory unit [110] .
  • the first camera unit [102A] is configured to receive, in real-time, a first camera preview frame.
  • the second camera unit [102B] is connected to the processing unit [104] , the first camera unit [102A] , the tracking module [108] , the object detection module [106] and the memory unit [110] .
  • the second camera unit [102B] is configured to receive, in real-time, a second camera preview frame.
  • the invention further encompasses that the first camera unit [102A] and second camera unit [102B] are configured to provide the real time data with respect to the current events occurring in the surrounding environment, known as the camera preview frame. An image/video may then be created from this real time data of the camera preview frame.
  • the invention encompasses that the first camera unit [102A] is a front camera of an electronic device (e.g., a smartphone) and the second camera unit [102B] is a rear camera of the same electronic device.
  • the object detection module [106] is connected to the first camera unit [102A] , the processing unit [104] , the second camera unit [102B] , the tracking module [108] and the memory unit [110] .
  • the object detection module [106] is configured to detect an object of interest in the first camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] detects an object of interest (the user’s face) in the camera preview frame received at the front camera.
  • the invention further encompasses that the object detection module [106] is configured to detect the object of interest in the camera preview frame based on automatic image analysis. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] would implement face detection techniques to detect an object of interest (the user’s face) in the camera preview frame received at the front camera.
  • the tracking module [108] is connected to the object detection module [106], the first camera unit [102A], the processing unit [104], the second camera unit [102B] and the memory unit [110].
  • the tracking module [108] is configured to identify the current coordinates of the object of interest, as identified by the object detection module, in the camera preview frame. For instance, for an object of interest (e.g., a user’s face), the tracking module [108] detects the current focus point, say (x1, y1). “x1” may refer to the distance of the focus point of the front camera from the leftmost point in the preview-screen of the front camera. “y1” may refer to the distance of the focus point from the topmost point in the preview-screen.
  • the tracking module [108] is also configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest. For instance, if a user moves his face (the object of interest), the tracking module [108] will detect the focus delta using face movement detection techniques, the change in coordinates being (dx, dy), where the focus delta component serves as the delta to shift the focus from the current focus point (x1, y1) to a new focus point (x2, y2).
  • the invention encompasses the below relation between the current coordinates and the delta: x2 = x1 + (K1 * dx) and y2 = y1 + (K2 * dy).
  • K1 and K2 are tuning factors for adjustment of the focus of the front camera.
  • K1 and K2 may depend upon focus calibration, density and other equipment constraints, etc., and may be decided upon by the tracking module after its first calibration and testing in a working environment.
  • the invention encompasses that the tracking module [108] is configured to determine updated coordinates of the object of interest (the user’s face), which are the new focus points after face detection, the new coordinates being (x2, y2).
  • the invention encompasses the below relation between the updated and the current coordinates: (x2, y2) = (x1 + K1 * dx, y1 + K2 * dy).
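  • as a minimal numeric sketch, assuming the linear relation x2 = x1 + K1·dx, y2 = y1 + K2·dy between the current focus point, the detected delta, and the tuning factors (the K1 and K2 values below are arbitrary, chosen only for illustration):

```python
K1, K2 = 0.8, 0.8  # tuning factors; the values here are arbitrary examples

def new_focus_point(x1, y1, dx, dy, k1=K1, k2=K2):
    """Compute the updated focus point (x2, y2) from the current focus
    point (x1, y1), the detected movement delta (dx, dy) and the tuning
    factors, using x2 = x1 + k1*dx and y2 = y1 + k2*dy."""
    return (x1 + k1 * dx, y1 + k2 * dy)
```

For example, a current focus point (100, 200) with a detected delta of (10, -5) yields the new focus point (108.0, 196.0).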
  • the processing unit [104] is connected to the first camera unit [102A] , the second camera unit [102B] , the tracking module [108] , the memory unit [110] and the object detection module [106] .
  • the processing unit [104] is configured to determine a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] determines that a change occurred in the coordinates of the object of interest in the camera frame preview based on the current and updated coordinates of the object of interest as identified by the tracking module [108] .
  • the processing unit [104] is further configured to automatically adjust the focus of the second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal shift for focus of the rear camera, the processing unit [104] upon determining a change in the position of the user’s head, shall shift the focus of the rear camera.
  • the invention encompasses that the processing unit [104] is further configured to detect a direction of the movement of the object of interest in the camera preview frame. As also discussed below with reference to Fig. 4, the invention further encompasses that the direction of the movement of the object of interest, for instance, may be one of a pitch up, a pitch down, a right yaw and a left yaw. Further, the processing unit [104] is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, as further discussed in reference to Fig. 5.
  • the memory unit [110] is connected to the first camera unit [102A] , the second camera unit [102B] , the tracking module [108] , the processing unit [104] and the object detection module [106] .
  • the memory unit [110] is configured to temporarily store a first camera preview frame and a second camera preview frame.
  • the invention encompasses that the memory unit [110] also stores imaging parameters and detected one or more object of interest (e.g., faces) for each of the camera preview frame.
  • the memory unit [110] may be partitioned to store the pre-stored media in one part and the retrieved media in another part.
  • the present invention encompasses that the system [100] resides inside a user device.
  • the user device comprises a processing unit [104] , a camera unit [102] , a tracking module [108] , an object detection module [106] and a memory unit [110] , all components connected to each other.
  • the camera unit [102] of the user device may further comprise a first camera unit [102A] and the second camera unit [102B] .
  • the first camera unit [102A] of the user device is configured to receive, in real-time, a first camera preview frame.
  • the second camera unit [102B] of the user device is configured to receive, in real-time, a second camera preview frame.
  • the object detection module [106] of the user device is configured to detect an object of interest in the first camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera of the user device, and accordingly, the object detection module [106] of the user device detects an object of interest (the user’s face) in the camera preview frame received at the front camera.
  • the object detection module [106] of the user device is configured to detect the object of interest in the camera preview frame based on automatic image analysis.
  • the tracking module [108] of the user device is configured to identify the current coordinates of the object of interest, as identified by the object detection module, in the camera preview frame.
  • the tracking module [108] of the user device is also configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest.
  • the processing unit [104] of the user device is configured to determine a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] of the user device determines that a change occurred in the coordinates of the object of interest in the camera frame preview based on the current and updated coordinates of the object of interest as identified by the tracking module [108] of the user device.
  • the processing unit [104] of the user device is further configured to automatically adjust the focus of the second camera unit [102B] of the user device, based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal shift for focus of the rear camera, the processing unit [104] upon determining a change in the position of the user’s head, shall shift the focus of the rear camera.
  • the invention encompasses that the processing unit [104] of the user device is further configured to detect a direction of the movement of the object of interest in the camera preview frame.
  • the direction of the movement of the object of interest may be one of a pitch up, a pitch down, a right yaw and a left yaw.
  • the processing unit [104] of the user device is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, further discussed in reference to Fig. 5.
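  • the four directions named above (pitch up, pitch down, right yaw, left yaw) could be classified from the coordinate delta roughly as follows; this is an illustrative sketch only, and the function name and the jitter threshold are assumptions, not part of the disclosure.

```python
def movement_direction(dx, dy, threshold=5):
    """Classify a head movement delta (in preview-frame pixels) into the
    four directions named in the disclosure. The threshold (an assumed
    value) filters out small jitter. Note that in image coordinates the
    y axis grows downward, so a positive dy is a pitch down."""
    if abs(dx) >= abs(dy):            # predominantly horizontal movement
        if dx > threshold:
            return "right yaw"
        if dx < -threshold:
            return "left yaw"
    else:                             # predominantly vertical movement
        if dy > threshold:
            return "pitch down"
        if dy < -threshold:
            return "pitch up"
    return "none"                     # below threshold: no focus shift
```

The processing unit would then shift the second camera's focus in the returned direction.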
  • the memory unit [110] of the user device is configured to temporarily store a first camera preview frame and a second camera preview frame.
  • the invention encompasses that the memory unit [110] of the user device also stores imaging parameters and detected one or more object of interest (e.g., faces) for each of the camera preview frame.
  • the memory unit [110] of the user device may be partitioned to store the pre-stored media in one part and the retrieved media in another part.
  • FIG. 2 refers to a high-level block diagram of a system [200] for automatically adjusting focus of a camera, in accordance with exemplary embodiment of the present disclosure.
  • the said system [200] comprises at least one camera application [201], at least one camera HAL [202], at least one front camera face detection module [203], at least one auto focus module [204], at least one front camera driver [205], at least one rear camera driver [206], at least one rear camera sensor unit [208] and at least one front camera sensor unit [207].
  • the at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207] are configured to pick up the events in the surrounding of the system [200] as raw real-time data.
  • the at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207] also comprise at least one light sensitive processing unit configured to measure and process the imaging parameters of the camera preview frame.
  • the at least one front camera driver [205] and the at least one rear camera driver [206] are configured to collect the raw real time data from the at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207] , and provide the same to the at least one camera HAL [202] .
  • the at least one front camera face detection module [203] is configured to detect face of a user in the camera preview frame received via the at least one front camera sensor unit [207] and the at least one front camera driver [205] .
  • the at least one front camera face detection module [203] may implement face detection and obtain face coordinates using known face detection techniques.
  • the at least one front camera face detection module [203] is also configured to dynamically monitor the camera preview frame received via the at least one front camera sensor unit [207] to identify any variation from the current coordinates of the face detected, and to provide the updated coordinates of the detected face.
  • the at least one front camera face detection module [203] is configured to provide the updated coordinates of the detected face along with other imaging parameters including, but not limited to, orientation, location, etc.
  • the at least one auto focus module [204] is configured to automatically adjust the focus of the at least one rear camera sensor unit [208] , via the at least one rear camera driver [206] , based on at least one of the updated coordinates of the detected face and the imaging parameters received from the at least one front camera face detection module [203] .
  • the at least one camera HAL [202] is configured to provide a module to interact with the said at least one rear camera sensor unit [208], the at least one front camera sensor unit [207], the at least one front camera driver [205], the at least one rear camera driver [206], the at least one front camera face detection module [203] and the at least one auto focus module [204].
  • the at least one camera HAL [202] is further configured to store files for input data, processing and the guiding mechanism.
  • the at least one camera HAL [202] is also configured to process the said collected real time data and provide the same to the at least one camera application [201] .
  • the at least one camera application [201] is configured to provide a graphical user interface to the user to provide a preview of the camera preview frame.
  • the invention encompasses that the at least one camera application [201] is configured to display the camera preview frame on the display unit of the electronic device.
  • the camera application [201] is further configured to display best media capture icons and at least one media capturing mode.
  • the system [200] starts working upon the at least one camera application [201] being initiated, say, a camera application is opened in a user device (e.g., smartphone) .
  • the at least one front camera sensor unit [207] and the at least one front camera driver [205], together, start providing the camera preview frame to the at least one front camera face detection module [203] and the at least one auto focus module [204].
  • the at least one front camera face detection module [203] detects a user’s face. Dynamically, the at least one front camera face detection module [203] identifies any variation from the current coordinates of the face detected by the at least one front camera face detection module [203] . In an event of variation from the current coordinates, the at least one front camera face detection module [203] provides the updated coordinates of the detected face along with other imaging parameters including, but not limited to, orientation, location, etc.
  • the at least one auto focus module [204] then, automatically adjusts the focus of the at least one rear camera sensor unit [208] , via the at least one rear camera driver [206] , based on at least one of the updated coordinates of the detected face and the imaging parameters received from the at least one front camera face detection module [203] .
  • the at least one auto focus module [204] is also configured to, simultaneously, update the at least one camera HAL [202] with the adjusted focus of the at least one rear camera sensor unit [208] .
  • the at least one camera HAL [202] updates the at least one camera application [201] of the adjusted focus of the at least one rear camera sensor unit [208], which is also reflected in the camera preview frame of the rear camera unit shown on the display unit of the electronic device.
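  • the working flow described above can be traced, purely as an illustration, as a sequence of events passing through the numbered components of Fig. 2; the function name, event labels and the simple additive focus update below are assumptions for the sketch, not the claimed implementation.

```python
def run_pipeline(face_delta, current_rear_focus):
    """Trace one iteration of the Fig. 2 flow: a face-movement delta from
    the front camera face detection module [203] drives the auto focus
    module [204] (via the rear camera driver [206]), and the new focus is
    reported through the camera HAL [202] to the camera application [201]."""
    events = []
    dx, dy = face_delta
    events.append(("face_detection", face_delta))        # module [203]
    new_focus = (current_rear_focus[0] + dx,
                 current_rear_focus[1] + dy)             # simple additive shift
    events.append(("auto_focus_applied", new_focus))     # module [204] via [206]
    events.append(("hal_updated", new_focus))            # camera HAL [202]
    events.append(("app_preview_refreshed", new_focus))  # application [201]
    return events
```

Each event tuple records which stage acted and the focus point it saw, mirroring the order in which the components are updated above.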
  • FIG. 3 illustrates an exemplary method flow diagram [300] depicting a method for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
  • the method begins when a user starts operating at least one of the first camera unit [102A] and the second camera unit [102B] . For instance, when the user initiates a camera application on his/her electronic device to capture an image/video using the rear camera.
  • the method begins with receiving, in real-time, a camera preview frame at a first camera unit [102A] .
  • the first camera unit [102A] receives real-time data of the current events occurring in the surrounding environment, known as the camera preview frame.
  • the invention encompasses that the first camera preview frame is not displayed to the user on the electronic device.
  • the object detection module [106] detects an object of interest in the camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] detects an object of interest (the user's face) in the camera preview frame received at the front camera.
  • the method further encompasses that the object detection module [106] detects the object of interest in the camera preview frame based on an automatic image analysis. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] would implement face detection techniques to detect an object of interest (the user's face) in the camera preview frame received at the front camera.
  • the tracking module [108] identifies the current coordinates of the object of interest in the camera preview frame. For instance, for an object of interest (e.g., a user's face), the tracking module [108] detects the current center point, say (x1, y1). "x1" may refer to the distance of the center point of the object of interest from the leftmost point in the first preview frame of the front camera. "y1" may refer to the distance of the center point of the object of interest from the topmost point in the first preview frame.
  • the coordinates (x1, y1) may be pixel coordinates or spatial coordinates.
  • the tracking module [108] dynamically monitors the camera preview frame to identify any variation from the current coordinates of the object of interest. For instance, if a user moves his face (the object of interest), the tracking module [108] will detect the focus delta using face movement detection techniques, the change in coordinates being (dx, dy), where the focus delta serves as the delta to shift focus from the current point (x1, y1) to the new point (x2, y2).
  • the method also encompasses that the tracking module [108] determines the updated coordinates of the object of interest (the user's face), which are the new center points after face detection, the new coordinates being (x2, y2).
  • the invention encompasses the below relation between the updated and the current coordinates: (x2, y2) = (x1 + dx, y1 + dy).
  • the invention enables that as soon as a change or variation in the current coordinates of the object of interest is detected, the next steps are performed immediately.
  • the processing unit [104] determines a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] determines that a change occurred in the coordinates of the object of interest in the first camera preview frame based on the current and updated coordinates of the object of interest as identified by the tracking module [108].
  • the processing unit [104] automatically adjusts the focus of a second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal a shift in the focus of the rear camera, the processing unit [104], upon determining a change in the position of the user's head, shall shift the focus of the rear camera. The method ends at step 313.
  • the invention encompasses determining an initial focus point of the rear camera, wherein the initial focus point may be determined based on one of a manual input and an automatic identification. For instance, when the user initiates a camera application to capture an image/video using the rear camera, the user may provide a manual input for initial focus point by, for example, touching on a specific area on the display of the user device where the second camera preview frame from the second camera unit [102B] is displayed. In another example, the initial focus point may be determined by the second camera unit [102B] automatically using one or more auto-focus mechanisms known in the art.
  • the invention also encompasses that the processing unit [104] detects a direction of the movement of the object of interest in the camera preview frame. As also discussed below with reference to Fig. 4, the invention further encompasses that the direction of the movement of the object of interest, for instance, may be one of a pitch up, a pitch down, a right yaw and a left yaw. Further, the processing unit [104] is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, as further discussed in reference to Fig. 5.
  • the invention encompasses providing an option to the user at the camera application to enable or disable the functionality of automatic focus shifting of the rear camera based on the inputs from the front camera. For instance, when the user initiates a camera application, the display of the camera application may show an icon to enable/disable the invention in the device such that the user always has control of whether or not the invention is implemented in the device.
  • FIG. 4 illustrates exemplary directions of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
  • the invention encompasses that the direction of the movement of the object of interest may be one of a pitch up, a pitch down, a right yaw and a left yaw.
  • a user may need to roll his face in any of these directions: up, down, left or right.
  • the user may alternatively roll his face in any of these directions: left-up, left-down, right-up or right-down.
  • rolling the face up and down is known as Pitch.
  • a user may roll his head in either the pitch up or the pitch down direction.
  • rolling the face left and right is known as Yaw.
  • a user may roll his head in either the right yaw or the left yaw direction.
  • FIG. 5 illustrates exemplary shifting of the focus of a second camera unit in a second camera preview frame based on at least the direction of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
  • Fig. 5A illustrates shifting of the focus of a second camera unit in a second camera preview frame based on a user moving his head in the pitch up direction (as shown in Fig. 4A); thus, the focus of the second camera unit is adjusted to the upper side of the preview screen.
  • Fig. 5B illustrates shifting of the focus of a second camera unit in a second camera preview frame based on a user moving his head in the pitch down direction (as shown in Fig. 4A); thus, the focus of the second camera unit is adjusted to the lower side of the preview screen.
  • Fig. 5C illustrates shifting of the focus of a second camera unit in a second camera preview frame based on a user moving his head in the left yaw direction (as shown in Fig. 4B); thus, the focus of the second camera unit is adjusted to the left side of the preview screen.
  • Fig. 5D illustrates shifting of the focus of a second camera unit in a second camera preview frame based on a user moving his head in the right yaw direction (as shown in Fig. 4B); thus, the focus of the second camera unit is adjusted to the right side of the preview screen.
  • the solution provided by the present invention effectively solves the problem of adjusting the focus of a camera without a user's manual intervention. It provides a technical advancement, of automatically adjusting the focus of a rear camera with the use of a front camera, over existing solutions that would otherwise require the user to touch the area on the rear camera preview that he wishes to focus on, thereby requiring manual intervention.
  • the technical effect intended to be produced by the present invention is a system and method that can automatically adjust the focus of a rear camera based on signaling received at the front camera. Thus, the two cameras, when used together, provide automatic adjustment of the focus of the rear camera.
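The method flow described above can be sketched as a simple monitoring loop. This is purely a non-authoritative illustration: all function names are hypothetical, and the disclosure does not prescribe any particular implementation.

```python
def process_preview_frame(detect_face, adjust_focus, state):
    """One iteration of a hypothetical monitoring loop.

    detect_face  -- callable returning the face centre (x, y), or None
    adjust_focus -- callable invoked with the coordinate delta (dx, dy)
    state        -- dict holding the last known centre under "center"
    """
    center = detect_face()
    if center is None:
        return state                          # no object of interest in frame
    prev = state.get("center")
    if prev is not None and center != prev:   # variation from current coordinates
        dx, dy = center[0] - prev[0], center[1] - prev[1]
        adjust_focus(dx, dy)                  # shift rear-camera focus by the delta
    state["center"] = center                  # updated coordinates become current
    return state
```

In use, `detect_face` would wrap the front-camera face detection module and `adjust_focus` the rear-camera driver; the loop issues a focus adjustment only when a variation from the current coordinates is actually observed.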

Abstract

The present invention relates to a method and system for automatically adjusting the focus of a camera. The system comprises a first camera unit, a second camera unit, a processing unit, an object detection module and a tracking module. The first camera unit receives a camera preview frame. The object detection module identifies an object of interest in the camera preview frame. The tracking module determines the current coordinates of the object of interest within the camera preview frame, and dynamically monitors the camera preview frame to identify any variation in the current coordinates. The processing unit determines a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates, and also automatically adjusts the focus of the second camera unit based on at least one of the determined movement and the identified variation.

Description

SYSTEM AND METHOD FOR AUTOMATICALLY ADJUSTING FOCUS OF A CAMERA FIELD OF THE INVENTION
The present invention generally relates to the field of imaging and more particularly to a system and method for automatically adjusting the focus of a rear camera using a front camera.
BACKGROUND
The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
In today's era, with the advancement of technology, capturing and sharing images, videos, etc. has been established as a form of expression and has become one of the most used features in electronic devices. Often, captured images are intended to present a flattering image of the person being photographed. These images may be captured on trips, during activities that are considered interesting, as a group photograph with people of the same interest, or in any other suitable situation.
With the increased trend of image capturing, various technologies have evolved that help users take better pictures. Images can be captured with a number of variations, such as at different angles and positions, and in various styles and facial expressions.
For a user, while using the rear camera of his electronic device, if there is more than one object or face at different positions within the Field Of View (FOV) of the rear camera and the user wishes to set a focus for the rear camera, the user is required to manually set the focus on the specific object(s) he wishes to focus on and capture. Further, if the user wishes to change the focus to another object, the user is required to manually change the focus to the other object or face using a manual focus control. For instance, the user may be required to touch the area on the display of the rear camera preview that he wishes to focus on; thus, shifting the focus would require user/manual intervention.
Since it may not always be possible for a user to manually shift the focus of the rear camera, there exists a need to provide a solution that allows the user to automatically adjust the focus of the rear camera by signaling into the front camera for a change in focus of the rear camera. Thus, there exists a need for a system and a method to automatically adjust the focus of the rear camera by the use of the front camera. Accordingly, in order to overcome the aforementioned limitations inherent in the existing solutions, the present invention is directed towards providing a system and a method for automatically adjusting the focus of a rear camera using a front camera.
SUMMARY
This section is intended to introduce certain objects and aspects of the disclosed method and system in a simplified form and is not intended to identify the key advantages or features of the present disclosure. In order to overcome the existing limitations of the known solutions, it is an object of the present invention to provide a system and method to automatically adjust the focus of a camera. Particularly, it is an object of the present invention to provide a system and method to automatically adjust the focus of a rear camera using a front camera. It is another object of the present invention to provide a system and method to reduce a user's effort in adjusting the focus of a rear camera by signaling to the front camera. It is also an object of the present invention to provide a system and method to allow users to shift the focus of the rear camera in the direction of the signaling the user provides to the front camera.
In order to achieve the afore-mentioned objectives, the present disclosure provides a method and system for automatically adjusting the focus of a camera. A first aspect of the present disclosure relates to a method of automatically adjusting the focus of a camera. The method begins with receiving, in real-time, a camera preview frame at a first camera unit. Subsequently, an object detection module detects an object of interest in the camera preview frame. Next, a tracking module identifies the current coordinates of the object of interest in the camera preview frame. Further, the tracking module dynamically monitors the camera preview frame to identify any variation from the current coordinates of the object of interest. Further, a processing unit determines a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thereafter, the processing unit automatically adjusts the focus of a second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
Another aspect of the present disclosure relates to a system for automatically adjusting the focus of a camera. The system comprises a first camera unit, a second camera unit, a processing unit, an object detection module and a tracking module, all components connected to each other. The first camera unit is configured to receive, in real-time, a camera preview frame. The object detection module is connected to said first camera unit, said object detection module being configured to detect an object of interest in the camera preview frame. The tracking module is connected to said object detection module and the first camera unit, said tracking module being configured to identify the current coordinates of the object of interest in the camera preview frame. The tracking module is further configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest. The second camera unit is connected to the object detection module, the first camera unit and the tracking module, said second camera unit being configured to receive, in real-time, a second camera preview frame. The processing unit is connected to the first camera unit, the second camera unit, the tracking module and the object detection module, said processing unit being configured to determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest. The processing unit is further configured to automatically adjust the focus of the second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
Another aspect of the present disclosure relates to a user device. The user device comprises a first camera unit, a second camera unit, a processing unit, an object detection module and a tracking module, all components connected to each other. The first camera unit is configured to receive, in real-time, a camera preview frame. The object detection module is connected to said first camera unit, said object detection module being configured to detect an object of interest in the camera preview frame. The tracking module is connected to said object detection module and the first camera unit, said tracking module being configured to identify the current coordinates of the object of interest in the camera preview frame. The tracking module is further configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest. The second camera unit is connected to the object detection module, the first camera unit and the tracking module, said second camera unit being configured to receive, in real-time, a second camera preview frame. The processing unit is connected to the first camera unit, the second camera unit, the tracking module and the object detection module, said processing unit being configured to determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest. The processing unit is further configured to automatically adjust the focus of the second camera unit based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
FIG. 1 illustrates an architecture of a system for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
FIG. 2 illustrates a high-level block diagram of a system for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
FIG. 3 illustrates an exemplary method flow diagram depicting a method for automatically adjusting focus of a camera, in accordance with exemplary embodiments of the present disclosure.
FIG. 4 illustrates exemplary directions of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
FIG. 5 illustrates exemplary shifting of the focus of a second camera unit in a second camera preview frame based on at least the direction of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, that embodiments of the present invention may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present invention are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms,  structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as "exemplary" and/or "demonstrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes," "has," "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive (in a manner similar to the term "comprising" as an open transition word) without precluding any additional or other elements. In addition, the disclosed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
The present invention provides a method and system for automatically adjusting the focus of a camera. The invention encompasses providing a technical solution for automatically adjusting the focus of a rear camera using a front camera. For instance, if there is more than one object (or face) at different positions within the Field Of View (FOV) of the rear camera and the user wishes to set a focus for the rear camera, the focus can be shifted between different objects at different moments by tracking the user's head position and movement using the front camera in real-time. Thus, the front camera may work in the background and, using face detection techniques, may track the movement of the user and autofocus on the desired area in the direction of the movement of the user to capture snapshots with the rear camera, without the user manually intervening to shift the focus of the rear camera.
As used herein, a "processor" or "processing unit" includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
As used herein, an "input/output (I/O) unit" includes one or more data input devices and one or more data output devices. An I/O unit may process commands and data to control I/O devices. A data input device may comprise input means for inputting character data, for example, a keyboard with a plurality of keys. A data output device may comprise output means for representing characters in response to the inputted data, for example, a display unit for displaying characters in response to the inputted data. The input device and the output device may also be combined into a single unit, for example, a display unit with a virtual keyboard.
As used herein, "memory unit" or "memory" refers to a machine- or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory ("ROM"), random access memory ("RAM"), magnetic-disk storage media, optical storage media, flash memory devices and other types of machine-accessible storage media.
As used herein, the "camera preview frame" comprises at least one real-time preview of an event picked up by the camera sensor unit. For instance, the camera preview frame may refer to the preview generated by a camera that can be seen on the display of a user device when the user opens a camera application.
As used herein, "media" refers to images, videos, animations, etc., and any other type of media that can be captured using a camera, as may be obvious to a person skilled in the art.
Referring to FIG. 1, an architecture of a system [100] for automatically adjusting the focus of a camera, is disclosed in accordance with exemplary embodiments of the present invention. The system [100] comprises a processing unit [104] , a camera unit [102] , a tracking module [108] , an object detection module [106] and a memory unit [110] , all components connected to each other. The camera unit [102] may further comprise of a first camera unit [102A] and the second camera unit [102B] .
The first camera unit [102A] is connected to the processing unit [104], the second camera unit [102B], the tracking module [108], the object detection module [106] and the memory unit [110]. The first camera unit [102A] is configured to receive, in real-time, a first camera preview frame. Similarly, the second camera unit [102B] is connected to the processing unit [104], the first camera unit [102A], the tracking module [108], the object detection module [106] and the memory unit [110]. The second camera unit [102B] is configured to receive, in real-time, a second camera preview frame.
The invention further encompasses that the first camera unit [102A] and the second camera unit [102B] are configured to provide real-time data with respect to the current events occurring in the surrounding environment, known as the camera preview frame. An image/video may then be created from this real-time data of the camera preview frame. For instance, the invention encompasses that the first camera unit [102A] is a front camera of an electronic device (e.g., a smartphone) and the second camera unit [102B] is a rear camera of the same electronic device.
The object detection module [106] is connected to the first camera unit [102A], the processing unit [104], the second camera unit [102B], the tracking module [108] and the memory unit [110]. The object detection module [106] is configured to detect an object of interest in the first camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] detects an object of interest (the user's face) in the camera preview frame received at the front camera.
The invention further encompasses that the object detection module [106] is configured to detect the object of interest in the camera preview frame based on automatic image analysis. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] would implement face detection techniques to detect an object of interest (the user's face) in the camera preview frame received at the front camera.
The tracking module [108] is connected to the object detection module [106], the first camera unit [102A], the processing unit [104], the second camera unit [102B] and the memory unit [110]. The tracking module [108] is configured to identify the current coordinates of the object of interest, as identified by the object detection module, in the camera preview frame. For instance, for an object of interest (e.g., a user's face), the tracking module [108] detects the current focus point, say (x1, y1). "x1" may refer to the distance of the focus point of the front camera from the leftmost point in the preview screen of the front camera. "y1" may refer to the distance of the focus point from the topmost point in the preview screen.
The tracking module [108] is also configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest. For instance, if a user moves his face (the object of interest), the tracking module [108] will detect the focus delta using face movement detection techniques, the change in coordinates being (dx, dy), where the focus delta component will serve as the delta to shift focus from the current focus point (x1, y1) to the new focus point (x2, y2). For instance, the invention encompasses the below relation between the current coordinates and the delta:
dx = K1 * fX
dy = K2 * fY,
where K1 and K2 are tuning factors for adjustment of the focus of the front camera. K1 and K2 may depend upon focus calibration, density and other equipment constraints, etc., and may be decided upon by the tracking module after its first calibration and testing in a working environment.
Further to the above instance, the invention encompasses that the tracking module [108] is configured to determine the updated coordinates of the object of interest (the user's face), which are the new focus points after face detection, the new coordinates being (x2, y2). The invention encompasses the below relation between the updated and the current coordinates:
(x2, y2) = (x1+dx, y1+dy)
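The relations above can be exercised numerically. The helper below is only an illustration: fX and fY are the raw face-movement components and K1, K2 the tuning factors, whose calibration the disclosure leaves to the tracking module, so the concrete values used here are arbitrary.

```python
def updated_focus_point(x1, y1, f_x, f_y, k1, k2):
    """Apply dx = K1*fX and dy = K2*fY, then (x2, y2) = (x1+dx, y1+dy)."""
    dx = k1 * f_x              # horizontal focus delta
    dy = k2 * f_y              # vertical focus delta
    return (x1 + dx, y1 + dy)  # new focus point (x2, y2)

# e.g. a face movement of (10, -5) with tuning factors K1=1.5, K2=2.0
# moves the focus point from (100, 200) to (115.0, 190.0)
```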
The processing unit [104] is connected to the first camera unit [102A], the second camera unit [102B], the tracking module [108], the memory unit [110] and the object detection module [106]. The processing unit [104] is configured to determine a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] determines that a change occurred in the coordinates of the object of interest in the camera preview frame based on the current and updated coordinates of the object of interest as identified by the tracking module [108].
The processing unit [104] is further configured to automatically adjust the focus of the second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal a shift in the focus of the rear camera, the processing unit [104], upon determining a change in the position of the user’s head, shifts the focus of the rear camera.
In an instance, the invention encompasses that the processing unit [104] is further configured to detect a direction of the movement of the object of interest in the camera preview frame. As also discussed below with reference to Fig. 4, the invention further encompasses that the direction of the movement of the object of interest , for instance, may be one of a pitch up, a pitch down, a right yaw and a left yaw. Further, the processing unit [104] is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, further discussed in reference to Fig. 5.
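One way such a direction could be derived from the coordinate variation is sketched below; the screen-coordinate axis convention (x increasing rightward, y increasing downward) and the movement threshold are illustrative assumptions, not details prescribed by the disclosure:

```python
def movement_direction(dx, dy, threshold=5):
    """Classify a face-movement delta as one of the four directions.

    Assumes screen coordinates: x grows rightward, y grows downward.
    Returns None when the movement is below the (illustrative) threshold.
    """
    if max(abs(dx), abs(dy)) < threshold:
        return None                              # movement too small to act on
    if abs(dy) >= abs(dx):                       # vertical motion dominates
        return "pitch up" if dy < 0 else "pitch down"
    return "right yaw" if dx > 0 else "left yaw" # horizontal motion dominates
```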
The memory unit [110] is connected to the first camera unit [102A], the second camera unit [102B], the tracking module [108], the processing unit [104] and the object detection module [106]. The memory unit [110] is configured to temporarily store a first camera preview frame and a second camera preview frame. The invention encompasses that the memory unit [110] also stores imaging parameters and the detected one or more objects of interest (e.g., faces) for each of the camera preview frames. The memory unit [110] may be partitioned to store the pre-stored media in one part and the retrieved media in another part.
In another instance, the present invention encompasses that the system [100] resides inside a user device. Accordingly, the user device comprises a processing unit [104], a camera unit [102], a tracking module [108], an object detection module [106] and a memory unit [110], all components connected to each other. The camera unit [102] of the user device may further comprise a first camera unit [102A] and a second camera unit [102B].
The first camera unit [102A] of the user device is configured to receive, in real-time, a first camera preview frame. Similarly, the second camera unit [102B] of the user device is configured to receive, in real-time, a second camera preview frame. The object detection module [106] of the user device is configured to detect an object of interest in the first camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera of the user device, and accordingly, the object detection module [106] of the user device detects an object of interest (the user’s face) in the camera preview frame received at the front camera.
The object detection module [106] of the user device is configured to detect the object of interest in the camera preview frame based on automatic image analysis. The tracking module [108] of the user device is configured to identify current coordinates of the object of interest, as identified by the object detection module, in the camera preview frame. The tracking module [108] of the user device is also configured to dynamically monitor the camera preview frame to identify any variation from the current coordinates of the object of interest.
The processing unit [104] of the user device is configured to determine a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] of the user device determines that a change occurred in the coordinates of the object of interest in the camera preview frame based on the current and updated coordinates of the object of interest as identified by the tracking module [108] of the user device.
The processing unit [104] of the user device is further configured to automatically adjust the focus of the second camera unit [102B] of the user device based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal a shift in the focus of the rear camera, the processing unit [104], upon determining a change in the position of the user’s head, shifts the focus of the rear camera.
In an instance, the invention encompasses that the processing unit [104] of the user device is further configured to detect a direction of the movement of the object of interest in the camera preview frame. The direction of the movement of the object of interest, for instance, may be one of a pitch up, a pitch down, a right yaw and a left yaw. Further, the processing unit [104] of the user device is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, further discussed in reference to Fig. 5.
The memory unit [110] of the user device is configured to temporarily store a first camera preview frame and a second camera preview frame. The invention encompasses that the memory unit [110] of the user device also stores imaging parameters and the detected one or more objects of interest (e.g., faces) for each of the camera preview frames. The memory unit [110] of the user device may be partitioned to store the pre-stored media in one part and the retrieved media in another part.
Although a limited number of components are shown in Fig. 1, it will be appreciated by those skilled in the art that the invention encompasses the use of multiple such components.
FIG. 2 refers to a high-level block diagram of a system [200] for automatically adjusting the focus of a camera, in accordance with an exemplary embodiment of the present disclosure. The said system [200] comprises at least one camera application [201], at least one camera HAL [202], at least one front camera face detection module [203], at least one auto focus module [204], at least one front camera driver [205], at least one rear camera driver [206], at least one rear camera sensor unit [208] and at least one front camera sensor unit [207].
The at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207] are configured to capture events in the surroundings of the system [200] as raw real-time data. The at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207] also comprise at least one light-sensitive processing unit configured to measure and process the imaging parameters of the camera preview frame. The at least one front camera driver [205] and the at least one rear camera driver [206] are configured to collect the raw real-time data from the at least one rear camera sensor unit [208] and the at least one front camera sensor unit [207], and provide the same to the at least one camera HAL [202].
The at least one front camera face detection module [203] is configured to detect the face of a user in the camera preview frame received via the at least one front camera sensor unit [207] and the at least one front camera driver [205]. In an instance, the at least one front camera face detection module [203] may implement face detection and obtain face coordinates using known face detection techniques. The at least one front camera face detection module [203] is also configured to dynamically monitor the camera preview frame received via the at least one front camera sensor unit [207] to identify any variation from the current coordinates of the detected face, and to provide the updated coordinates of the detected face. In an event of variation from the current coordinates, the at least one front camera face detection module [203] is configured to provide the updated coordinates of the detected face along with other imaging parameters including, but not limited to, orientation, location, etc.
The at least one auto focus module [204] is configured to automatically adjust the focus of the at least one rear camera sensor unit [208] , via the at least one rear camera driver [206] , based on at least one of the updated coordinates of the detected face and the imaging parameters received from the at least one front camera face detection module [203] .
Further, the at least one camera HAL [202] is configured to provide a module to interact with the said at least one rear camera sensor unit [208], the at least one front camera sensor unit [207], the at least one front camera driver [205], the at least one rear camera driver [206], the at least one front camera face detection module [203] and the at least one auto focus module [204]. The at least one camera HAL [202] is further configured to store files for input data, processing and the guiding mechanism. The at least one camera HAL [202] is also configured to process the said collected real-time data and provide the same to the at least one camera application [201].
The at least one camera application [201] is configured to provide a graphical user interface to the user to provide a preview of the camera preview frame. The invention encompasses that the at least one camera application [201] is configured to display the camera preview frame on the display unit of the electronic device. The camera application [201] is further configured to display media capture icons and at least one media capturing mode.
In operation, the system [200] starts working upon the at least one camera application [201] being initiated, say, when a camera application is opened in a user device (e.g., a smartphone). The at least one front camera sensor unit [207] and the at least one front camera driver [205], together, start providing the camera preview frame to the at least one front camera face detection module [203] and the at least one auto focus module [204].
The at least one front camera face detection module [203] detects a user’s face. Dynamically, the at least one front camera face detection module [203] identifies any variation from the current coordinates of the face detected by the at least one front camera face detection module [203] . In an event of variation from the current coordinates, the at least one front camera face detection module [203] provides the updated coordinates of the detected face along with other imaging parameters including, but not limited to, orientation, location, etc.
The at least one auto focus module [204], then, automatically adjusts the focus of the at least one rear camera sensor unit [208], via the at least one rear camera driver [206], based on at least one of the updated coordinates of the detected face and the imaging parameters received from the at least one front camera face detection module [203]. The at least one auto focus module [204] is also configured to, simultaneously, update the at least one camera HAL [202] with the adjusted focus of the at least one rear camera sensor unit [208]. Lastly, the at least one camera HAL [202] updates the at least one camera application [201] of the adjusted focus of the at least one rear camera sensor unit [208], which is also reflected in the camera preview frame of the rear camera unit displayed on the display unit of the electronic device.
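The operation described above, with front-camera frames feeding face detection and the detected movement driving the rear-camera focus, might be sketched as a simple loop; the function names here are hypothetical stand-ins for the modules of Fig. 2, not APIs defined by the disclosure:

```python
def run_autofocus(front_frames, detect_face, set_rear_focus):
    """Drive rear-camera focus from face movement in front-camera frames.

    front_frames:   iterable of preview frames from the front camera
    detect_face:    callable, frame -> (x, y) face coordinates, or None
    set_rear_focus: callback applying a (dx, dy) focus shift to the rear camera
    """
    previous = None
    for frame in front_frames:
        coords = detect_face(frame)
        if coords is None:
            continue                      # no face detected: keep current focus
        if previous is not None:
            dx = coords[0] - previous[0]  # variation from the current coordinates
            dy = coords[1] - previous[1]
            if (dx, dy) != (0, 0):
                set_rear_focus(dx, dy)    # auto focus module adjusts the rear camera
        previous = coords
```

A usage sketch: passing three synthetic "frames" that are themselves coordinates, with an identity detector, yields one focus shift of (5, -5) when the face moves.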
FIG. 3 illustrates an exemplary method flow diagram [300] depicting a method for automatically adjusting the focus of a camera, in accordance with exemplary embodiments of the present disclosure. The method begins when a user starts operating at least one of the first camera unit [102A] and the second camera unit [102B], for instance, when the user initiates a camera application on his/her electronic device to capture an image/video using the rear camera. At step 302, the method begins with receiving, in real-time, a camera preview frame at a first camera unit [102A]. For instance, in operation, the first camera unit [102A] receives the real-time data with respect to the current events occurring in the surrounding environment, known as the camera preview frame. The invention encompasses that the first camera preview frame is not displayed to the user on the electronic device.
Subsequently, at step 304, the object detection module [106] detects an object of interest in the camera preview frame. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] detects an object of interest (the user’s face) in the camera preview frame received at the front camera.
The method further encompasses that the object detection module [106] detects the object of interest in the camera preview frame based on an automatic image analysis. For instance, a user using the front camera to shift the focus of the rear camera may look into the front camera, and accordingly, the object detection module [106] would implement face detection techniques to detect an object of interest (the user’s face) in the camera preview frame received at the front camera.
Further, at step 306, the tracking module [108] identifies current coordinates of the object of interest in the camera preview frame. For instance, for an object of interest (e.g., a user’s face), the tracking module [108] detects the current center point, say (x1, y1). “x1” may be referred to as the distance of the center point of the object of interest from the leftmost point in the first preview frame of the front camera, and “y1” as the distance of the center point of the object of interest from the topmost point in the first preview frame. The coordinates (x1, y1) may be pixel coordinates or spatial coordinates.
Next, at step 308, the tracking module [108] dynamically monitors the camera preview frame to identify variation from the current coordinates of the object of interest. For instance, if a user moves his face (the object of interest), the tracking module [108] will detect the focus delta using face movement detection techniques, the change in coordinates being (dx, dy), where the focus delta serves to shift the focus from the current point (x1, y1) to a new point (x2, y2).
The method also encompasses that the tracking module [108] determines updated coordinates of the object of interest (the user’s face), which are the new center points after face detection, the new coordinates being (x2, y2). The invention encompasses the following relation between the updated and the current coordinates:
(x2, y2) = (x1+dx, y1+dy)
Due to this dynamic and continuous monitoring, the invention enables that as soon as a change or variation in the current coordinates of the object of interest is detected, the next steps are performed immediately.
Further, at step 310, the processing unit [104] determines a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest. Thus, the processing unit [104] determines that a change occurred in the coordinates of the object of interest in the first camera preview frame based on the current and updated coordinates of the object of interest as identified by the tracking module [108].
Lastly, at step 312, the processing unit [104] automatically adjusts the focus of a second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if a user moves his head while looking at the front camera to signal a shift in the focus of the rear camera, the processing unit [104], upon determining a change in the position of the user’s head, shifts the focus of the rear camera. The method ends at step 313.
The invention encompasses determining an initial focus point of the rear camera, wherein the initial focus point may be determined based on one of a manual input and an automatic identification. For instance, when the user initiates a camera application to capture an image/video using the rear camera, the user may provide a manual input for initial focus point by, for example, touching on a specific area on the display of the user device where the second camera preview frame from the second camera unit [102B] is displayed. In another example, the initial focus point may be determined by the second camera unit [102B] automatically using one or more auto-focus mechanisms known in the art.
Once the initial focus point is determined, the invention encompasses performing the steps 302 to 310. Thereafter, step 312 encompasses adjusting the initial focus of the second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest. For instance, if the initial focus point is determined to be (a1, b1) and the variation from the current coordinates of the object of interest is (dx, dy), the focus is automatically adjusted and shifted to (a’1, b’1) based on (dx, dy). In one instance, the new focus point (a’1, b’1) = (a1+dx, b1+dy).
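A minimal sketch of this adjustment step follows; clamping the shifted point to the preview bounds is an added practical assumption, and the bound values themselves are illustrative, not taken from the disclosure:

```python
def shift_initial_focus(initial, variation, width=1080, height=1920):
    """Shift an initial rear-camera focus point by the face-movement delta.

    initial:   (a1, b1), the initial focus point of the rear camera
    variation: (dx, dy), variation of the object of interest's coordinates
    Clamping to the preview bounds is a practical assumption added here.
    """
    a1, b1 = initial
    dx, dy = variation
    a2 = min(max(a1 + dx, 0), width - 1)   # (a'1, b'1) = (a1 + dx, b1 + dy),
    b2 = min(max(b1 + dy, 0), height - 1)  # kept inside the preview frame
    return (a2, b2)
```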
The invention also encompasses that the processing unit [104] detects a direction of the movement of the object of interest in the camera preview frame. As also discussed below with reference to Fig. 4, the invention further encompasses that the direction of the movement of the object of interest , for instance, may be one of a pitch up, a pitch down, a right yaw and a left yaw. Further, the processing unit [104] is configured to automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest, further discussed in reference to Fig. 5.
The invention encompasses providing an option to the user at the camera application to enable or disable the functionality of automatic focus shifting of the rear camera based on the inputs from the front camera. For instance, when the user initiates a camera application, the display of the camera application may show an icon to enable/disable the invention in the device such that the user always has control of whether or not the invention is implemented in the device.
FIG. 4 illustrates exemplary directions of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure. The invention encompasses that the direction of the movement of the object of interest may be one of a pitch up, a pitch down, a right yaw and a left yaw. For instance, to achieve face movement orientation detection, a user may need to roll his face in one of these directions: up, down, left or right. The user may alternatively roll his face in one of these directions: left-up, left-down, right-up or right-down. Referring to Fig. 4A, rolling the face up and down is known as pitch. Thus, a user may roll his head in either the pitch up or the pitch down direction. Referring to Fig. 4B, rolling the face left and right is known as yaw. Thus, a user may roll his head in either the right yaw or the left yaw direction.
FIG. 5 illustrates exemplary shifting of the focus of a second camera unit in a second camera preview frame based on at least the direction of movement of an object of interest in a camera preview frame, in accordance with exemplary embodiments of the present disclosure. Fig. 5A illustrates shifting of the focus of a second camera unit in a second camera preview frame when a user moves his head in the pitch up direction (as shown in Fig. 4A); thus, the focus of the second camera unit is adjusted toward the upper side of the preview screen. Similarly, Fig. 5B illustrates shifting of the focus of a second camera unit in a second camera preview frame when a user moves his head in the pitch down direction (as shown in Fig. 4A); thus, the focus of the second camera unit is adjusted toward the lower side of the preview screen.
Furthermore, Fig. 5C illustrates shifting of the focus of a second camera unit in a second camera preview frame when a user moves his head in the left yaw direction (as shown in Fig. 4B); thus, the focus of the second camera unit is adjusted toward the left side of the preview screen. Similarly, Fig. 5D illustrates shifting of the focus of a second camera unit in a second camera preview frame when a user moves his head in the right yaw direction (as shown in Fig. 4B); thus, the focus of the second camera unit is adjusted toward the right side of the preview screen.
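The mapping illustrated in Figs. 5A to 5D could be expressed as a lookup from movement direction to a focus offset; the 100-pixel step, the default focus point and the screen-coordinate convention (y increasing downward) are illustrative assumptions:

```python
# Offsets assume screen coordinates (x rightward, y downward); the 100-pixel
# step size is an illustrative assumption, not a value from the disclosure.
FOCUS_OFFSETS = {
    "pitch up":   (0, -100),   # Fig. 5A: focus moves toward the top of the preview
    "pitch down": (0, 100),    # Fig. 5B: focus moves toward the bottom
    "left yaw":   (-100, 0),   # Fig. 5C: focus moves toward the left side
    "right yaw":  (100, 0),    # Fig. 5D: focus moves toward the right side
}

def focus_after(direction, focus=(540, 960)):
    """Return the rear-camera focus point after a head movement."""
    dx, dy = FOCUS_OFFSETS[direction]
    return (focus[0] + dx, focus[1] + dy)
```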
Thus, the solution provided by the present invention effectively solves the problem of adjusting the focus of a camera without a user’s manual intervention. It provides a technical advancement, namely automatically adjusting the focus of a rear camera with the use of a front camera, over existing solutions that would otherwise require the user to touch the area of the rear camera preview that he wishes to focus on. The technical effect intended to be produced by the present invention is a system and method that can automatically adjust the focus of a rear camera based on signaling received at the front camera. Thus, both cameras, when used together, provide automatic adjustment of the focus of the rear camera.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many changes can be made to the embodiments without departing from the principles of the present invention. These and other changes in the embodiments of the present invention will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.

Claims (17)

  1. A method of automatically adjusting focus of a camera, the method comprising:
    receiving, in real-time, a camera preview frame at a first camera unit [102A] ;
    detecting, by an object detection module [106], an object of interest in the camera preview frame;
    identifying, by a tracking module [108] , current coordinates of the object of interest in the camera preview frame;
    dynamically monitoring, by the tracking module [108] , the camera preview frame to identify variation from the current coordinates of the object of interest;
    determining, by a processing unit [104] , a movement of the object of interest in the camera preview frame based on the identified variation from the current coordinates of the object of interest; and
    automatically adjusting, by the processing unit [104] , the focus of a second camera unit [102B] based on at least one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
  2. The method as claimed in claim 1, further comprising:
    detecting, by the processing unit [104] , a direction of the movement of the object of interest in the camera preview frame; and
    automatically adjusting, by the processing unit [104] , the focus of the second camera unit [102B] in the direction of the movement of the object of interest.
  3. The method as claimed in claim 2, wherein the direction of the movement of the object of interest is one of a pitch up, a pitch down, a right yaw and a left yaw.
  4. The method as claimed in claim 1, wherein the first camera unit [102A] is a front camera and the second camera unit [102B] is a rear camera.
  5. The method as claimed in claim 1, wherein detecting an object of interest further comprises detecting at least one face from said camera preview frame.
  6. A system for automatically adjusting focus of a camera, the system comprising:
    a first camera unit [102A] configured to receive, in real-time, a camera preview frame;
    an object detection module [106] connected to said first camera unit [102A] , said object detection module [106] configured to detect an object of interest in the camera preview frame;
    a tracking module [108] connected to said object detection module [106] and the first camera unit [102A] , said tracking module [108] configured to:
    identify current coordinates of the object of interest in the camera preview frame; and
    dynamically monitor the camera preview frame to identify variation from the current coordinates of the object of interest; and
    a second camera unit [102B] connected to the object detection module [106] , the first camera unit [102A] and the tracking module [108] , said second camera unit [102B] configured to receive, in real-time, a second camera preview frame;
    a processing unit [104] connected to the first camera unit [102A] , the second camera unit [102B] , the tracking module [108] and the object detection module [106] , said processing unit [104] configured to:
    determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest; and
    automatically adjust the focus of the second camera unit [102B] based at least on one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
  7. The system as claimed in claim 6, further comprising a storage unit [110] connected to the first camera unit [102A], the second camera unit [102B], the tracking module [108], the processing unit [104] and the object detection module [106], wherein the storage unit [110] is configured to store the camera preview frame.
  8. The system as claimed in claim 6 wherein the first camera unit [102A] is a front camera and the second camera unit [102B] is a rear camera.
  9. The system as claimed in claim 6, wherein the processing unit [104] is further configured to:
    detect a direction of the movement of the object of interest in the camera preview frame; and
    automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest.
  10. The system as claimed in claim 9, wherein the direction of the movement of the object of interest is one of a pitch up, a pitch down, a right yaw and a left yaw.
  11. The system as claimed in claim 6, wherein detecting an object of interest further comprises detecting at least one face from said camera preview frame.
  12. A user device, comprising:
    a first camera unit [102A] configured to receive, in real-time, a camera preview frame;
    an object detection module [106] connected to said first camera unit [102A] , said object detection module [106] configured to detect an object of interest in the camera preview frame;
    a tracking module [108] connected to said object detection module [106] and the first camera unit [102A] , said tracking module [108] configured to:
    identify current coordinates of the object of interest in the camera preview frame; and
    dynamically monitor the camera preview frame to identify variation from the current coordinates of the object of interest; and
    a second camera unit [102B] connected to the object detection module [106] , the first camera unit [102A] and the tracking module [108] , said second camera unit [102B] configured to receive, in real-time, a second camera preview frame;
    a processing unit [104] connected to the first camera unit [102A] , the second camera unit [102B] , the tracking module [108] and the object detection module [106] , said processing unit [104] configured to:
    determine a movement of the object of interest in the camera preview frame based on the variation from the current coordinates of the object of interest; and
    automatically adjust the focus of the second camera unit [102B] based at least on one of the determined movement of the object of interest and the identified variation from the current coordinates of the object of interest.
  13. The user device as claimed in claim 12, further comprising a storage unit [110] connected to the first camera unit [102A], the second camera unit [102B], the tracking module [108], the processing unit [104] and the object detection module [106], wherein the storage unit [110] is configured to store the camera preview frame.
  14. The user device as claimed in claim 12, wherein the first camera unit [102A] is a front camera and the second camera unit [102B] is a rear camera.
  15. The user device as claimed in claim 12, wherein the processing unit [104] is further configured to:
    detect a direction of the movement of the object of interest in the camera preview frame; and
    automatically adjust the focus of the second camera unit [102B] in the direction of the movement of the object of interest.
  16. The user device as claimed in claim 15, wherein the direction of the movement of the object of interest is one of a pitch up, a pitch down, a right yaw and a left yaw.
  17. The user device as claimed in claim 12, wherein detecting an object of interest further comprises detecting at least one face from said camera preview frame.
PCT/CN2021/077273 2020-03-06 2021-02-22 System and method for automatically adjusting focus of a camera WO2021175125A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041009666 2020-03-06
IN202041009666 2020-03-06

Publications (1)

Publication Number Publication Date
WO2021175125A1 true WO2021175125A1 (en) 2021-09-10

Family

ID=77612957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/077273 WO2021175125A1 (en) 2020-03-06 2021-02-22 System and method for automatically adjusting focus of a camera

Country Status (1)

Country Link
WO (1) WO2021175125A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104333690A (en) * 2013-07-22 2015-02-04 奥林巴斯映像株式会社 Photographing apparatus and photographing method
WO2016164859A1 (en) * 2015-04-10 2016-10-13 Bespoke, Inc. Systems and methods for creating eyewear with multi-focal lenses
CN108604128A (en) * 2016-12-16 2018-09-28 华为技术有限公司 a kind of processing method and mobile device
CN108860153A (en) * 2017-05-11 2018-11-23 现代自动车株式会社 System and method for determining the state of driver
CN110225252A (en) * 2019-06-11 2019-09-10 Oppo广东移动通信有限公司 Camera control method and Related product


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21765510; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 21765510; Country of ref document: EP; Kind code of ref document: A1)