US11726392B2 - System, method, and computer-readable medium for autofocusing a videophone camera - Google Patents
- Publication number
- US11726392B2 (application US17/009,567)
- Authority
- US
- United States
- Prior art keywords
- focus
- lens
- distance
- distances
- camera lens
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/18—Focusing aids
- G03B13/20—Rangefinders coupled with focusing arrangements, e.g. adjustment of rangefinder automatically focusing camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/671—Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
-
- H04N5/2257—
-
- H04N5/23212—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
Definitions
- the application relates generally to the focusing of cameras in videophone communication systems and devices and, more specifically, to systems, methods, and computer-readable media for autofocusing cameras in videophone communication systems and devices in a shorter amount of time.
- Video telephony comprises the technologies used to receive and transmit audio-video signals by users at different locations.
- a videophone is a telephone with a video display, capable of simultaneously transmitting video and audio signals for communication between people in real time.
- One important application of videophones allows deaf, hard-of-hearing and speech-impaired (“hearing impaired”) people to place telephone calls to hearing-capable or other hearing impaired people using sign language through a Video Relay Service (“VRS”).
- Videophones are also used to provide on-site sign language translation via Video Remote Interpreting (“VRI”). The hearing impaired person or persons place VRS or VRI calls using a videophone; the calls are routed through an interpreting center.
- An interpreter, fluent in ASL or a foreign sign language and the equivalent spoken language, appears on the hearing impaired individual's videophone.
- the hearing impaired caller signs the first part of a conversation to the interpreter, who may simultaneously translate and speak that first part to the call recipient over a phone line.
- the call recipient may respond by voice to the interpreter with a response.
- the interpreter translates the response and signs that part to the hearing impaired individual who sees the signed response on their videophone screen.
- the call recipient may also have a videophone in order to see the hearing impaired caller.
- This 24/7 Video Relay Service is provided in the United States by the U.S. government's Telecommunications Relay Service (TRS) fund. Other countries also provide such services free of charge to the hearing impaired community.
- the measure of success when using videophones and translation services for the hearing impaired community often depends on how closely the call experience can approximate “real time.” Anything that slows down the process detracts from the illusion of a “real time” call. Waiting for slow autofocus is such a detraction. Additionally, lulls or lag times created by slower videophone autofocus times can disrupt the natural pace of conversation by those needing to use videophones to communicate. Slower focus times often cause frustration for certain users.
- passive autofocus works by finding an area of sharpest focus in the image captured by an image sensor for a particular camera lens position.
- the camera lens is moved in or out using a lens actuator while an algorithm calculates the sharpness of the image at that lens position and compares it against the prior sharpness; the best focus for the scene in the image is achieved by repeating this procedure.
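The contrast-detection loop described above can be sketched as a simple hill climb over candidate lens positions. This is an illustrative sketch, not the patent's implementation; `sharpness_at` is a hypothetical stand-in for capturing a frame at a lens position and computing its contrast.

```python
# Illustrative passive-autofocus sketch: step through lens positions,
# score each one with a contrast metric, and keep the sharpest.
def hill_climb_focus(sharpness_at, positions):
    """Return the lens position with the highest measured sharpness."""
    best_pos, best_sharpness = None, float("-inf")
    for pos in positions:
        s = sharpness_at(pos)  # capture a frame here, compute contrast
        if s > best_sharpness:
            best_pos, best_sharpness = pos, s
    return best_pos
```

Because every candidate position requires capturing and analyzing a frame, the loop's runtime grows with the number of positions searched, which is the slowness the disclosure is addressing.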
- Active autofocus, generally speaking, works by sending a beam of light at the object of focus and using the response to determine the distance to the object of focus.
- the problem with most passive autofocus systems is that moving the lens, determining an image contrast, and making a comparison is time consuming and slow.
- the problem with active autofocus systems is that they can inaccurately determine the distance to an object of focus if there is too much ambient light, too many reflective surfaces, or any number of other circumstances.
- the problem with both ways of autofocusing is that the camera doesn't know whether or not the object of focus is actually in optimal focus at a particular distance. To know whether or not the object of focus is in optimal focus requires the eye of a human being. This is how manual focus is accomplished. However, in certain scenarios, such as a hearing impaired user sending sign language images with a videophone camera, focusing the camera with the aid of a human eye would be impractically slow as the user moved nearer to or farther from the camera lens.
- When lens variance combines with lens actuator variance, the overall camera unit variance can create slightly different focus abilities at certain distances across camera units of the same make and model. Accordingly, autofocus techniques that do not account for the uniqueness of each individual lens assembly may not produce an optimal focus for each camera in a batch of cameras or camera parts at a given distance.
- a videophone system may include a camera lens, a camera lens actuator configured to move the lens, and an image sensor configured to capture images received through the camera lens.
- the videophone system may also include a distance sensor configured to determine a distance between the camera lens and an object of focus.
- the system includes a digital to analog converter (DAC) configured to convert a plurality of DAC digital numbers to a plurality of analog power values, each power value used to power the lens actuator and move the lens to a certain lens position.
- each DAC digital number corresponds to a camera lens position, or distance from the image sensor, into which the lens is driven by the analog power value converted from that DAC digital number.
- a DAC digital number may be used herein throughout synonymously with the camera lens position corresponding to that DAC digital number.
- the video phone system may also include a memory for storing instructions that can be executed by a videophone system processor.
- the instructions cause the camera lens to automatically focus on the object of focus according to various embodiments disclosed herein.
- the memory includes a lookup table configured to correlate a distance to an object of focus calculated by the distance sensor with a DAC digital number corresponding to a lens position where the object of focus is in optimal focus at the calculated distance.
- the lookup table is configured to correlate a DAC digital number with a range of distances in front of the camera lens such that if an object of focus lies within the range of distances, the object of focus will be in optimal focus. It will be appreciated that if the lookup table is set up in advance and verified using human eyes, then optimal focus can be achieved without the need for human verification of sharpness by a user. This allows sharp focus to be achieved in a shorter amount of time.
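The range-based correlation described above can be sketched as a small table mapping distance bands to DAC digital numbers. The distance bands and DAC values below are illustrative assumptions, not calibration values from the patent.

```python
# Hypothetical lookup table: each entry maps a range of object-of-focus
# distances (mm) to the DAC digital number whose lens position keeps any
# object in that range within the depth of field.
FOCUS_LUT = [
    # (near_mm, far_mm, dac_number) -- illustrative values only
    (200, 350, 900),
    (350, 600, 700),
    (600, 1200, 500),
    (1200, 10**9, 300),  # at or beyond the hyperfocal distance
]

def dac_number_for_distance(distance_mm):
    """Return the DAC digital number for a measured distance, or None."""
    for near, far, dac in FOCUS_LUT:
        if near <= distance_mm < far:
            return dac
    return None
```

A table like this trades a slow search for a single sensor read and a constant-time lookup, which is how the shorter focus time is achieved.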
- one or more processors of the videophone system are configured to execute instructions to verify that the object of focus is in focus using contrast detection analysis on a predetermined region of interest of the image captured by the image sensor at the lens position.
- the one or more processors may also be configured to facilitate a distance determination by the distance sensor.
- the one or more processors in one embodiment are configured to determine an optimal focus difference between a particular lens assembly in a batch of lens assemblies and a baseline optimal focus representing the batch of lens assemblies. In this way, the autofocus methods and processes of embodiments disclosed herein can account for lens assembly manufacturing variances and factor them into the autofocus algorithms.
- a method of autofocusing includes determining an object of focus distance between the camera and an object of focus using the distance sensor.
- the object of focus distance may be correlated to a digital number using a lookup table.
- the digital number represents a position of a lens within the camera relative to an image sensor.
- the digital number may then be converted to an analog power value using a digital to analog converter. Power in the amount of the power value may then be applied to a lens actuator to move the lens to the position represented by the digital number.
- the lookup table is set up such that a range of object of focus distances all reside within the same depth of field of the lens position's focal plane and in one embodiment, all of the object of focus distances returned by the distance sensor that would lie within the same depth of field are correlated to the same digital number representing the position, and thus the focal plane, of the lens.
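The method steps above can be put together in an end-to-end sketch, with the hardware calls replaced by stand-in callables. The 12-bit DAC width and the reference voltage are assumptions for illustration; the patent does not prescribe these values.

```python
# End-to-end sketch of the active autofocus path: distance sensor ->
# lookup table -> DAC -> analog power -> lens actuator.
DAC_BITS = 12
V_REF = 3.3  # hypothetical full-scale DAC output

def dac_to_analog(dac_number):
    """Convert a DAC digital number to an analog output value."""
    return V_REF * dac_number / ((1 << DAC_BITS) - 1)

def autofocus_once(measure_distance_mm, lut_lookup, move_lens):
    """Run one autofocus pass; hardware interactions are callables."""
    distance = measure_distance_mm()   # ToF distance sensor reading
    dac_number = lut_lookup(distance)  # lookup table correlation
    power = dac_to_analog(dac_number)  # digital-to-analog conversion
    move_lens(power)                   # drive the VCM lens actuator
    return dac_number, power
```

Each stage corresponds to one claimed element: the distance sensor, the lookup table, the DAC, and the lens actuator.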
- the method in one embodiment includes verifying that the object of focus is focused to within a threshold focus level by performing one or more of contrast detection analysis and phase detection analysis on the image captured by the image sensor at the lens position.
- passive autofocus techniques are used to verify and/or adjust the autofocus done by active autofocus techniques.
- the verification step may be accomplished by analyzing a region of interest that is a subset of the overall image captured by the image sensor. This region of interest may be determined by the user of the videophone system during a self-framing step.
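The verification step can be sketched as a contrast score computed only over the region of interest. The gradient-energy metric and the grayscale list-of-lists image representation are illustrative choices, not the patent's specific analysis.

```python
# Sketch of contrast-detection verification restricted to an ROI.
def roi_sharpness(image, roi):
    """Sum of squared neighbor differences inside roi=(top, left, h, w)."""
    top, left, h, w = roi
    score = 0
    for y in range(top, top + h):
        for x in range(left, left + w):
            if x + 1 < left + w:  # horizontal gradient term
                score += (image[y][x + 1] - image[y][x]) ** 2
            if y + 1 < top + h:   # vertical gradient term
                score += (image[y + 1][x] - image[y][x]) ** 2
    return score

def verify_focus(image, roi, threshold):
    """True if the ROI contrast meets a (hypothetical) focus threshold."""
    return roi_sharpness(image, roi) >= threshold
```

Restricting the analysis to the user-framed region keeps the verification fast and ignores background areas that need not be sharp.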
- correlating the object of focus distance determined by the distance sensor to a digital number using a lookup table comprises determining a difference between a camera focus value and a baseline focus value and adjusting the values of the lookup table to account for the difference.
- the lookup table may be configured to correlate an object of focus distance returned by the distance sensor with a digital number representing a lens position, such that the object of focus falls within a depth of field about a focal plane corresponding to the lens position.
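One way to implement the adjustment described above is to shift every lookup-table entry by the measured difference between this unit's best-focus DAC number and the batch baseline. The numbers below are hypothetical; the patent does not give concrete table values.

```python
# Sketch of per-unit calibration: compare one camera's best-focus DAC
# number at a reference distance against the batch baseline, then shift
# every lookup-table entry by the difference.
def calibrated_lut(baseline_lut, baseline_focus_dac, unit_focus_dac):
    """Return a copy of the LUT shifted by this unit's focus offset."""
    offset = unit_focus_dac - baseline_focus_dac
    return {distance: dac + offset for distance, dac in baseline_lut.items()}
```

This is how a single baseline table can serve a whole batch while still accounting for each lens assembly's manufacturing variance.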
- Embodiments disclosed herein also include a non-transitory computer-readable medium configured to store program instructions that, when executed on one or more processors, cause the one or more processors to perform operations.
- the operations include determining the distance to an object of focus using a distance sensor.
- the operations may also include correlating the object of focus distance determined by the distance sensor to a digital number using a lookup table.
- the digital number may represent a unique position of the lens within the camera relative to the image sensor, or in other words, a unique distance from the image sensor to the lens.
- the digital number is converted to a corresponding analog power value using the digital to analog converter.
- the power value may be used to power the lens actuator to move the lens to the position represented by the digital number.
- the lookup table is configured such that this position places the object of focus in optimal focus.
- the instructions stored in the non-transitory computer-readable medium may include those instructions necessary to carry out the autofocus methods and functionality of the videophone embodiments disclosed herein throughout.
- FIG. 1 is a schematic diagram of a videophone autofocusing camera system in accordance with one or more embodiments of the disclosure;
- FIG. 2 is a graph of representative focus curves for three lens assemblies from a batch of lens assemblies and a baseline focus curve representing the batch in accordance with one or more embodiments of the disclosure;
- FIG. 3 is a table showing exemplary values of a lookup table in accordance with one or more embodiments of the disclosure;
- FIG. 3A is a schematic diagram of a plurality of distance ranges of FIG. 3 in accordance with one or more embodiments of the disclosure;
- FIG. 4 is a schematic diagram of a time-of-flight beam area and a camera field of view with an object of focus positioned in different areas;
- FIG. 4A is a schematic diagram of the field of view of FIG. 4 containing a designated region of interest;
- FIG. 5 is a schematic block diagram of a method of autofocusing a camera in a videophone system in accordance with one or more embodiments of the disclosure; and
- FIG. 6 is a schematic block diagram of a non-transitory computer-readable medium in accordance with one or more embodiments of the disclosure.
- the embodiments may be described in terms of a process that is depicted as method steps, a flowchart, a flow diagram, a schematic diagram, a block diagram, a function, a procedure, a subroutine, a subprogram, and the like. Although a process may describe operational steps in a particular sequence, it is to be understood that some or all of such steps may be performed in a different sequence. In certain circumstances, the steps may be performed concurrently with other steps.
- the terms “A or B,” “at least one of A and B,” “one or more of A and B,” and “A and/or B,” as used herein, include all possible combinations of the items enumerated with them. For example, use of these terms, with A and B representing different items, means: (1) including at least one A; (2) including at least one B; or (3) including both at least one A and at least one B.
- the articles “a” and “an” as used herein should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
- the terms “first,” “second,” and so forth are used herein to distinguish one component from another without limiting the components and do not necessarily reflect importance, quantity, or an order of use.
- a first user device and a second user device may indicate different user devices regardless of the order or importance.
- a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner.
- a set of elements may comprise one or more elements.
- connection or communication may be direct, or there may be an intervening element between the two or more elements.
- two or more elements are described as being “directly” coupled with or to another element or in “direct communication” with or to another element, there is no intervening element between the first two or more elements.
- connections or “communication” between elements may be, without limitation, wired, wireless, electrical, mechanical, optical, chemical, electrochemical, comparative, by sensing, or in any other way two or more elements interact, communicate, or acknowledge each other. It will further be appreciated that elements may be “connected” with or to each other, or in “communication” with or to each other by way of local or remote processes, local or remote devices or systems, distributed devices or systems, or across local or area networks, telecommunication networks, the Internet, other data communication networks conforming to a variety of protocols, or combinations of any of these.
- units, components, modules, elements, devices and the like may be “connected”, or “communicate” with each other locally or remotely by means of a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), shared chipset or wireless technologies such as infrared, radio, and microwave.
- the expression “configured to” as used herein may be used interchangeably with “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of” according to a context.
- the term “configured” does not necessarily mean “specifically designed to” in a hardware level. Instead, the expression “apparatus configured to . . . ” may mean that the apparatus is “capable of . . . ” along with other devices or parts in a certain context.
- Information and signals described herein may be represented using any of a variety of different technologies and techniques.
- data, instructions, commands, information, messages, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, lasers or any combination thereof.
- Some drawings may illustrate signals as a single signal for clarity of presentation and description. It should be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, wherein the bus may have a variety of bit widths and the embodiments disclosed herein may be implemented on any number of data signals including a single data signal.
- Implementing structures include hardware, software, firmware, or a combination of these.
- Implementing structures include, without limitation, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a programmable-logic device.
- An “implementing structure” may be configured to reside in an addressable storage medium or to be executed by one or more processors.
- “Implementing structures” may include, without limitation, elements such as software elements, object-oriented software elements, class elements, and task elements, processes, functions, attributes, procedures, sub-routines, segments of program codes, drivers, firmware, micro-codes, circuits, data, database, data structures, tables, arrays, logic blocks and variables.
- a “module” may be any type of executable or implementing structure capable of carrying out instructions to perform operations.
- a “processor,” as may be referenced herein throughout, may be any processor, controller, microcontroller, state machine or combination of the foregoing suitable for carrying out the processes of the disclosure.
- a processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- Reference to a “processor” should not be limited to residing on any one particular device but includes all processors or a subset of all processors, whether they reside on a single device or multiple devices.
- references to “memory” are intended to comprise, without being limited to, any structure that can store, transmit, and/or receive data or information related to the embodiments described herein, or components, modules or units of the embodiments described herein.
- “Memory,” as referenced herein, can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
- “Memory” includes computer-readable media and medium.
- videophones include any video-capable equipment that can process, convey, reproduce, capture, detect, transmit, identify, or otherwise utilize multi-frame images.
- video-capable equipment includes conventional cellular telephones containing a camera, and desktop computers or mobile computing devices with a camera and image display.
- Embodiments of the disclosure provide improvements in the technical field of telecommunications and relay services for the audibly-impaired, and in particular in developing new communication devices that include new features and functionality for the user devices as well as the relay service devices.
- the videophone system 100 includes a camera 102 having a camera lens 104 , a camera lens actuator 106 configured to move the lens 104 , and an image sensor 108 configured to capture images received through the camera lens 104 .
- the videophone system 100 may also include a distance sensor 110 configured to calculate a distance 112 between the camera lens 104 and an object of focus 114 .
- the videophone system 100 includes a digital to analog converter (DAC) 128 configured to convert a plurality of DAC digital numbers to a plurality of analog power values. Each power value may be used to power the lens actuator 106 to move the lens 104 to a lens position 107 along a lens displacement path 120 .
- Each lens position 107 represents a distance between the lens 104 and the image sensor 108 .
- When the lens 104 causes light 105 from an object of focus 114 to refract and converge at a single point on the image sensor 108 , the object of focus is said to be in focus.
- If the refracted light 105 a does not converge on the image sensor 108 , the lens 104 may need to be moved in or out until the refracted light 105 a does converge on the image sensor 108 , at which lens position the object of focus will be in optimal focus.
- the lens 104 of the camera 102 may consist of a single lens or multiple lens elements.
- the lens 104 may be part of a larger lens assembly 104 .
- usually, one of the lenses is a primary lens 104 .
- a smaller, dedicated lens group may be the only part to be moved when focusing is performed. This allows for faster internal focusing of the lens 104 because there is less weight to be moved.
- an array of individual lenses combines into a group and performs in principle like a single lens 104 . Accordingly, the term “lens,” as used herein throughout, shall include multiple camera lenses and camera lens assemblies.
- the lens 104 may be defined by any number of characteristics known to those of skill in the art, including without limitation, a minimum focus distance, a refractive index, a radius of curvature for both lens surfaces, a thickness, and the like.
- the purpose of lens 104 is to redirect light 105 onto the image sensor 108 in order to focus the camera 102 on the object of focus 114 .
- the focal length of the lens 104 is the distance between the lens 104 and the image sensor 108 when the object of focus 114 is in optimal focus.
- This relationship demonstrates that every lens position 107 will have a unique optimal focus for an object of focus 114 at a given distance 112 .
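This one-to-one relationship follows from the thin lens equation, 1/f = 1/d_o + 1/d_i: for a fixed focal length f, each object distance d_o has exactly one image distance d_i (the lens-to-sensor spacing) that brings it into focus. The sketch below solves for d_i; the focal length and object distance used in the example are illustrative assumptions, not values from the patent.

```python
# Thin lens equation sketch: solve 1/f = 1/d_o + 1/d_i for d_i, the
# lens-to-image-sensor distance that brings an object at d_o into focus.
def image_distance(focal_length_mm, object_distance_mm):
    """Return the in-focus lens-to-sensor distance d_i in millimetres."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)
```

Because each object distance yields a unique d_i, each ToF distance reading can be mapped to exactly one lens position, which is what makes the lookup-table approach possible.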
- the perfect point of focus for a camera lens 104 in a given position relative to the image sensor 108 is referred to herein throughout as the focal plane 130 .
- the focal plane can be thought of as the distance between the camera lens 104 and the perfect point of focus in an image of an object of focus 114 .
- the focal plane 130 is in front of the camera lens 104 , spanning horizontally. For a lens 104 set to the same aperture setting, there is only a single object of focus distance 112 that will be in the sharpest focus for that particular lens. If one were to move the lens 104 , all other things being equal, one effectively moves the focal plane 130 .
- the total depth of field 136 is the distance between the farthest and nearest objects in a scene that appear acceptably sharp in an image where the lens is focused, or in other words, where the lens is positioned relative to the image sensor 108 .
- Objects of focus residing in a depth of field for a given focal plane of a lens position 107 may be referred to herein throughout as being in “optimal focus.” Additionally, an object of focus may be referred to as being in “optimal focus” herein throughout when the object of focus is in the area 132 in front of any focal plane 130 and/or the area 134 beyond such focal plane 130 where the human eye cannot tell that an image of an object of focus in these areas 132 , 134 is not in focus.
- the near limit 138 can be defined as the distance between the camera lens 104 and the first object of focus that is considered to be acceptably sharp.
- the far limit 140 can be defined as the distance between the camera lens 104 and the farthest object of focus 114 that is considered to be acceptably sharp.
- the far limit 140 of the depth of field 136 may reach all the way to infinity as the lens 104 moves away from the image sensor 108 and the field of view angle narrows.
- the shortest object of focus distance 112 at which the far limit 140 reaches infinity is referred to as the hyperfocal distance.
- the hyperfocal distance is the closest distance at which a lens 104 can be focused while keeping objects at infinity acceptably sharp.
- the near limit 138 and the far limit 140 can be calculated using the hyperfocal distance, the focal length, the F-number and the object of focus distance 112 according to formulas known in the art.
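Those calculations can be sketched with the common approximate formulas: hyperfocal distance H = f^2 / (N * c), near limit H*d/(H + d), and far limit H*d/(H - d), reaching infinity once the focus distance d meets H. The focal length, F-number, and circle of confusion c used in the example are illustrative assumptions, not the patent's values.

```python
# Sketch of depth-of-field limits using the standard approximations.
# f: focal length, n: F-number, c: circle of confusion, d: focus
# distance; all lengths in the same unit (mm here).
def hyperfocal(f, n, c):
    """Approximate hyperfocal distance H = f^2 / (N * c)."""
    return f * f / (n * c)

def dof_limits(f, n, c, d):
    """Return (near_limit, far_limit); far limit is infinite at/after H."""
    h = hyperfocal(f, n, c)
    near = h * d / (h + d)
    far = float("inf") if d >= h else h * d / (h - d)
    return near, far
```

With a fixed aperture, as the disclosure notes, N is a constant, so the manufacturer can precompute these limits once per lens position when building the lookup table.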
- the aperture is fixed and thus the F-number value for this embodiment can be ignored.
- the videophone manufacturer or assembler can position an object of focus 114 , such as, for example a focus chart, at varying known distances in front of the camera to approximate a focal plane distance 112 , and the corresponding near limit 138 , far limit 140 , and depth of field 136 for that focal plane 130 .
- the camera 102 also includes a lens actuator 106 for moving the lens 104 toward or away from the image sensor 108 .
- the lens actuator 106 is part of the lens assembly 104 .
- the lens actuator 106 and the lens 104 may also be part of the image processor circuit.
- the lens actuator 106 may be any number of devices or components known to move camera lenses including, without limitation, a micro-electro-mechanical-system (MEMS) or other micro- or nano-sized motor, a linear electromagnetic focusing motor (LEM), a piezoelectric motor, a stepper motor, an ultrasonic motor, a supersonic motor, a DC motor, a gear or screw type motor, and the like.
- the lens actuator 106 may include a single, dual or multiple drives.
- the drives may be direct drives or indirect drives. It will be appreciated by those of skill in the art that depending on size or other constraints, the lens actuator 106 can be any device that can provide power to move a lens or lens assembly in response to a digital control.
- the lens actuator 106 is a voice coil motor (VCM) mounted directly to a printed circuit board over the image sensor 108 .
- the lens actuator 106 is without hysteresis and therefore has a direct current-vs.-position relationship that allows for precise lens positions 107 and precise focusing.
- the camera 102 also includes an image sensor 108 .
- the image sensor 108 may be a Complementary Metal Oxide Semiconductors (CMOS), Charge-Coupled-Device (CCD), or any other suitable imaging device.
- the image sensor 108 may be coupled to an analog front-end processor (AFE), which amplifies and conditions the raw video signal, and converts it to digital. Once the image is in digital form, it can be analyzed using passive autofocus techniques including, without limitation, contrast detection and phase detection, for focus verification and/or focus determination. The image could also be analyzed and digital data could be used for lens adjustment, and other processes.
- the image sensor 108 can include film.
- the videophone system 100 also includes a distance sensor 110 .
- the distance sensor 110 in one embodiment allows the videophone system 100 to use active autofocus techniques.
- the distance sensor 110 may be any sensor of a type known in the art to determine or measure distance. These may include, without limitation, a sonar system, an infrared triangulation system, an infrared light reflection system, and the like.
- the distance sensor 110 is a Time of Flight (ToF) sensor 110 .
- the distance sensor 110 may be separate from the optical system and not in communication with the image capturing lens.
- the distance sensor 110 is in operable communication with the optical system.
- the ToF sensor 110 may use the phase shift of an amplitude-modulated wave to determine or approximate a distance 112 between the lens 104 and an object of focus 114 .
- the distance sensor 110 may use continuous infrared light waves to detect the phase shift of the reflected light to determine depth and distance.
- the distance sensor 110 uses timed pulses to determine the distance 112 of an object of focus by starting a timer during the exit of the infrared beam and measuring the time it takes for a receiver to receive the beam after it has reflected off of the object of focus and returned. Using equation (4), points on the object surface can be determined and the depth or distance of the object calculated.
- the distance analyzer 122 or set of instructions stored in memory 116 and executable by the processor 118 to analyze distance using the distance sensor 110 , computes the time difference between the time the outbound infrared light pulses are sent and the inbound infrared pulses are received. In this way, the processor and/or distance sensor can compute the distance 112 .
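Both ToF approaches reduce to speed-of-light arithmetic: d = c * t / 2 for a timed round trip, and d = c * phi / (4 * pi * f_mod) for the phase shift phi of an amplitude-modulated wave at modulation frequency f_mod. A minimal sketch (the patent's equation (4) is not reproduced in this excerpt, so these are the standard textbook forms):

```python
# Sketch of the two ToF distance calculations described above.
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """Distance from a timed pulse's round-trip time: d = c * t / 2."""
    return C * t_seconds / 2.0

def distance_from_phase_shift(phi_radians, f_mod_hz):
    """Distance from the phase shift of an amplitude-modulated wave."""
    return C * phi_radians / (4.0 * math.pi * f_mod_hz)
```

For videophone distances (well under a few metres), round-trip times are on the order of nanoseconds, which is why the phase-shift variant is often used in practice.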
- the distance sensor 110 can be configured in many ways, including having the light emitter and/or receiver configured as part of the camera 102 or camera lens assembly 104 .
- the distance sensor 110 in some embodiments may include its own processor for making the depth or distance calculations.
- the distance returned by the distance sensor 110 can be used to move the lens 104 into a different position to assist in focusing the lens 104 on an object of focus.
- the processor 118 provides a digital output or signal that must be converted into an analog power amount before it can be used by the VCM 106 to reposition the lens 104 .
- This can be accomplished using a digital to analog converter (DAC) 128 .
- the DAC is configured to convert a plurality of DAC digital numbers into a corresponding plurality of analog power values.
- the DAC 128 may be any DAC known in the art to convert digital numbers to analog power values for the purpose of powering lens actuators 106 .
- the DAC 128 in one embodiment is part of the processor 118 .
- the DAC 128 may be part of the lens actuator 106 , itself.
- the DAC 128 is a 10-bit current-output DAC with output-current-sink capability.
- the DAC 128 is a 12-bit current-output DAC configured with appropriate resistors and/or diodes to generate the current required to drive the VCM lens actuator 106 .
- the choice of DAC 128 can depend on a number of factors, including without limitation, videophone space requirements or limitations and the size and power needs of the lens actuator. It will be appreciated by those of skill in the art that a 12-bit DAC 128 allows for up to 4,096 digital inputs, or in other words, DAC digital numbers.
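The relationship between DAC bit depth and available digital numbers can be sketched as a simple linear code-to-current mapping. The full-scale current below is an assumption for illustration, not a value from the disclosure:

```python
DAC_BITS = 12
DAC_CODES = 2 ** DAC_BITS  # 4,096 possible DAC digital numbers
FULL_SCALE_MA = 120.0      # assumed full-scale VCM drive current (illustrative)

def dac_code_to_current_ma(code: int) -> float:
    """Linearly map a 12-bit DAC digital number to an analog drive current."""
    if not 0 <= code < DAC_CODES:
        raise ValueError("code out of range for a 12-bit DAC")
    return (code / (DAC_CODES - 1)) * FULL_SCALE_MA
```

A real current-output DAC performs this conversion in hardware; the sketch only shows why a 12-bit part yields 4,096 selectable lens-drive levels.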
- the DAC 128 in combination with lens actuator 106 , repositions the lens 104 in response to a signal or input from the distance sensor 110 to provide sharper focus on an object of focus 114 .
- One problem with present autofocus apparatus and methods is that the camera doesn't know whether it's in focus. When the lens is focused manually, a human user looks through a viewfinder or a screen and verifies that the object of focus is optimally sharp. So even if the camera has an approximation of the object of focus distance 112 , it still can't know for certain whether the object of focus is in optimal focus without human intervention.
- Embodiments of the present invention overcome this problem using a novel lookup table.
- the memory 116 in one embodiment, includes a lookup table 126 configured to correlate a DAC digital number, or in other words a lens position 107 , to a distance 112 between the camera lens 104 and an object of focus 114 where the object of focus 114 is in optimal focus at the distance 112 .
- the lookup table 126 may be configured to correlate a DAC digital number to a range of distances between the camera lens 104 and an object of focus 114 wherein the object of focus 114 is in optimal focus when the object of focus 114 is within the range of distances.
- the range of distances correlating to a DAC digital number is within a depth of field 136 about a focal plane 130 for a lens position 107 associated with the DAC digital number.
- the lookup table 126 includes ranges of optimal focus determined by human eyes or verified by human eyes after utilizing the equations discussed above to provide approximations.
- the lookup table 126 acts as a database of human optimal focus verification for a desired object of focus 114 positioned at any desired or predetermined distance between the minimum focus distance of the lens 104 and the hyperfocal distance of the lens 104 . This is accomplished because the lookup table is configured to correlate an object of focus distance 112 returned by the distance sensor 110 with a digital number representing a unique lens position 107 , such that the object of focus 114 falls within a depth of field 136 about a focal plane 130 corresponding to the lens position 107 .
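The point-to-range correlation the lookup table performs can be sketched as a small table of distance ranges, each mapped to one DAC digital number. The ranges and digital numbers below are illustrative placeholders, not values from the disclosure:

```python
# (near_mm, far_mm, dac_digital_number); ranges deliberately overlap so
# every distance in the desired span falls within some depth of field.
LOOKUP_TABLE = [
    (150,  260, 1990),
    (240,  420, 1950),
    (400,  700, 1900),
    (650, 1200, 1820),
    (1100, 2300, 1700),
]

def distance_to_dac_number(distance_mm):
    """Return the DAC digital number whose range contains the distance.

    The first matching range wins; None means the object of focus lies
    outside the desired distance range.
    """
    for near, far, dn in LOOKUP_TABLE:
        if near <= distance_mm <= far:
            return dn
    return None
```

The real table is populated with human-verified optimal focus values as described above; the sketch only shows the lookup mechanics.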
- the lookup table 126 values can be determined for any lens being used in the production or manufacture of a particular videophone system 100 and programmed into the videophone's software, firmware, hardware, and the like. It will be appreciated that in this embodiment, the human eye influence can be introduced or factored into the autofocus process.
- the videophone system 100 includes a processor operatively coupled with the memory 116 and configured to execute the instructions to perform operations.
- the operations include calculating a distance 112 between the camera lens 104 and an object of focus 114 using the distance sensor. This may be accomplished by a distance analyzer module 122 .
- the operations further include correlating the calculated distance to a DAC digital number using the lookup table 126 . In one embodiment, this is accomplished using a DAC digital number (DN) correlator 124 .
- DN DAC digital number
- the DAC DN correlator 124 may include a module to perform operations for determining a difference between a DAC digital number associated with the lens 104 positioned for optimal focus at a desired or predetermined distance from the lens and a baseline DAC digital number representing a lens positioned for optimal focus at the same desired or predetermined distance.
- the processor 118 is further configured to adjust the lookup table 126 to account for the difference.
- the processor 118 is configured to execute instructions in memory to power the lens actuator 106 using the analog power value converted from the correlated DAC digital number to move the lens to a lens position 107 wherein the object of focus is in optimal focus at the calculated distance. In one embodiment, this is accomplished using a lens positioner module 125 .
- the processor 118 may also be configured to perform operations to verify that the object of focus 114 is in optimal focus at the calculated distance by performing one or more of contrast detection analysis and phase detection analysis on the image captured by the image sensor at the lens position 107 . This may be done using a focus detector module 127 .
- verifying that the object of focus 114 is in optimal focus includes selecting a region of interest for the image captured by the image sensor 108 at the lens position 107 and performing contrast detection analysis on the region of interest.
- the ToF infrared sensor 110 may produce inaccurate distances under certain circumstances. This may occur, for example, when light from an open flame or bright ambient light such as sunshine scatters too much light into the sensor's receiver and confuses the infrared sensor.
- the infrared beam may be partially or fully absorbed by a black surface or bounce off an object that is not the object of focus 114 .
- Other environmental factors such as color, gloss, scene complexity, or multiple reflections may also affect the accuracy of a ToF distance sensor 110 . Accordingly, as a verification measure, even though the active autofocus combined with a lookup table may have optimally focused on the object of focus 114 , passive auto focus techniques such as contrast detection and phase matching detection may be used.
- contrast detection may be accomplished using one or more of a “hill climbing” algorithm, a global search algorithm, a Fibonacci search algorithm and the like.
- the focusing lens is driven in a predefined direction while successively taking snapshots of the contrast situation in order to determine the orientation of a gradient. If the contrast decreases, the system changes the direction of the lens movement immediately. As long as the contrast increases, the lens keeps moving until the contrast data has a peak value. To confirm whether the contrast data is actually at its peak, the focusing lens is driven so as to surpass the focused position, detecting lower contrast again. Consequently, as a last step, the focusing lens is driven back to the position that has produced the peak signal, representing optimal focus.
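The hill-climbing pass described above can be sketched as follows. Here `contrast_at` stands in for capturing a frame at a lens position and scoring its contrast; the toy contrast curve and position values are illustrative only:

```python
def hill_climb_focus(contrast_at, start, step, lo, hi):
    """Drive the lens until contrast drops, then return to the peak position."""
    pos = start
    best_pos, best_val = pos, contrast_at(pos)
    direction = 1
    # Probe one step; reverse direction immediately if contrast decreases.
    if contrast_at(pos + direction * step) < best_val:
        direction = -1
    while lo <= pos + direction * step <= hi:
        pos += direction * step
        val = contrast_at(pos)
        if val > best_val:
            best_pos, best_val = pos, val
        else:
            break  # surpassed the peak; lower contrast confirms it
    return best_pos  # last step: drive back to the peak-contrast position

# Toy contrast curve peaking at lens position 1800 (illustrative).
peak = hill_climb_focus(lambda p: -(p - 1800) ** 2, 1600, 10, 1600, 2000)
```

A production implementation would read contrast from the image sensor 108 at each step rather than from a closed-form function.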
- the term “optimal focus” may be used to describe an object of focus that is within ten DAC digital numbers above or below the DAC digital number for the lens position 107 where the contrast data associated with an image at the DAC digital number lens position 107 is at its peak.
- the threshold for a contrast analysis focus verification or determination is within five DAC digital numbers above or below the DAC digital number for the lens position 107 where the contrast data associated with an image at the DAC digital number lens position 107 is at its peak. It will be appreciated that the videophone functionality or operation need not cease while the verification process is being performed. Indeed, if the lookup table autofocus method has performed effectively, the verification step would not be needed.
- only a subset of the entire image is analyzed for contrast.
- the user of the videophone system is given an opportunity to frame their face and/or torso by manipulating a frame graphic over their face. This graphic frame identifies the region of interest that the processor 118 will use contrast detection analysis on in order to determine optimal focus.
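Restricting contrast analysis to the framed region of interest can be sketched with a simple sharpness metric. The sum of squared neighbor differences used below is one common simple measure and an assumption here, not the disclosure's specific metric:

```python
def roi_contrast(image, top, left, height, width):
    """Sum of squared horizontal neighbor differences inside the ROI only."""
    score = 0
    for r in range(top, top + height):
        row = image[r]
        for c in range(left, left + width - 1):
            score += (row[c + 1] - row[c]) ** 2
    return score

# Illustrative grayscale patches: a sharply focused region has large
# local differences; a defocused region has small ones.
sharp = [[0, 255, 0, 255]] * 4
blurry = [[120, 130, 125, 128]] * 4
```

Scoring only the user-framed region keeps the verification step focused on the face rather than on background detail.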
- the processor 118 may be in operable communication with the lens 104 , the lens actuator 106 , the image sensor 108 , the distance sensor 110 , the memory 116 , and the digital to analog converter 128 directly or indirectly.
- Each of these components or structures 104 , 106 , 108 , 110 , 116 , and 128 may also be in operable communication with one or more of the other components 104 , 106 , 108 , 110 , 116 , and 128 , either directly or indirectly.
- an image signal processor associated with the image sensor 108 may be the same as an overall system processor 118 .
- individual components such as the distance sensor 110 , lens actuator 106 , DAC 128 , and image sensor 108 may have their own processor or processors that may communicate directly or indirectly with processor 118 or through processor 118 to another processor or processors within the videophone system 100 .
- lookup table or other code written for distance analysis, contrast or phase detection, table conversions and correlations, digital to analog conversions, or any other code or instructions used to perform any of the functions or processes disclosed herein throughout need not necessarily reside in the memory 116 , and may take the form of firmware or hardware residing in one or more of the processors or components of the videophone system 100 .
- the memory 116 although depicted as being part of the processor 118 may reside anywhere or in multiple places, including without limitation, outside of the physical videophone system 100 .
- system 100 may include I/O devices in addition to cameras, including without limitation, microphones, remote controls, touch screens, keyboards, video displays, speakers, and any other I/O devices used in videophones and that may be necessary to accomplish the teachings of embodiments disclosed herein.
- FIG. 2 a graph 200 plots DAC digital numbers against object of focus distances in millimeters.
- the graph illustrates three representative focus curves 202 , 204 , and 206 for three different camera lens assemblies respectively.
- “lens assembly” as used in describing graph 200 includes, without limitation, the lens, the lens actuator, the image sensor, and the printed circuit board on which these may reside, either alone or in combination.
- lens assemblies regardless of care taken in manufacturing and design, are not perfect and introduce some degree of distortion or aberration that makes the image an imperfect replica of the object of focus.
- similar lens assemblies received from a supplier may have a slightly different focus at various object of focus distances.
- a representative or “baseline” focus curve 208 is plotted. This baseline curve 208 can then be used as a point of comparison to a focus curve for any lens assembly provided by a supplier when assembling or manufacturing a camera for a videophone or videophone system.
- the baseline curve 208 is determined by analyzing a representative sample of lens assemblies to be used in the desired autofocus application.
- the representative lens assemblies may all be from the same batch or from different batches.
- the representative lens assemblies may each be focused at various distances in front of the camera lens where an object of focus might be for the desired application.
- each of the representative lens assemblies is focused at a distance and image sharpness is verified with the human eye to establish that each representative lens is in optimal focus at the distance.
- the DAC digital number for the lens position is noted and plotted against each distance for each lens assembly.
- Graph 200 is a representation of such a distance interval focus exercise for three representative camera lenses yielding focus curves 202 , 204 , and 206 respectively.
- the DAC digital numbers in the graph correspond to optimally focused lens assemblies at a given distance for each such lens assembly.
- An average or baseline focus curve 208 can be determined by plotting the average DAC digital number of the three lens assemblies at multiple distances. It will be appreciated that the baseline focus curve 208 is an optical focus curve representation for the lens assemblies used in the videophone system.
- representative lens assemblies might be focused every 75 millimeters along a desired distance range of object of focus distances that are useful for a particular application.
- a desired distance range for videophone autofocus applications used by hearing-impaired persons might be between about 22 mm and 2.3 meters or 2300 mm, as is represented on graph 200 .
- the DAC digital number representing the lens position's optimal focus (as verified or determined by the human eye) at that distance could be plotted against the distance.
- a focus curve representing optimal focus positions for each representative lens assembly could be established.
- a focus determination interval distance of 150 mm may be used to create the representative lens assembly focus curve.
- 300 mm or 450 mm focus interval distances may be used to establish a representative focus curve for a particular lens assembly. Combinations of various interval focus distances may also be used. It will be appreciated by those of skill in the art that different lenses for different lens applications will produce different representative curves. However, where all variables other than distance are fixed, optimal focus curves can be established for most lenses by moving an object of focus at certain distance intervals away from the lens, having a human verify the sharpness of the focus, or in other words optimal focus, and noting the DAC digital number associated with the lens position.
- a Gaussian distribution average of each representative lens assembly can be determined for that distance.
- an average focus curve 208 can be established representing optimally focused lens positions at object of focus distances over the length of the desired distance. It will be appreciated by those of skill in the art that if representative lenses are analyzed using the human-observed interval focus method over a range from a minimal desired distance in front of the lens to the hyperfocal distance of the lens, then a baseline Gaussian-distributed focus curve for optimal focus can be determined from that minimal distance to infinity. In one embodiment, a baseline representative optimal focus curve is used to determine the lookup table values for the lens assembly used in the videophone system.
- curves 202 , 204 and 206 could be said to represent the results of the human interval distance focusing described above for three representative lens assemblies.
- a representative baseline focus curve 208 could be the average of the three curves 202 , 204 , and 206 .
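The distance-by-distance averaging that produces the baseline curve can be sketched directly. The DAC digital numbers and sample distances below are illustrative placeholders, not values read from graph 200:

```python
# Representative focus curves: distance (mm) -> DAC digital number at
# human-verified optimal focus (illustrative values).
curve_202 = {150: 1995, 1200: 1840, 2300: 1680}
curve_204 = {150: 1990, 1200: 1815, 2300: 1640}
curve_206 = {150: 1992, 1200: 1800, 2300: 1620}

def baseline_curve(curves):
    """Average the curves' DAC digital numbers at each shared distance."""
    distances = curves[0].keys()
    return {d: sum(c[d] for c in curves) / len(curves) for d in distances}

baseline_208 = baseline_curve([curve_202, curve_204, curve_206])
```

With more representative assemblies, the same averaging yields the Gaussian-distributed baseline discussed above.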
- Each of the curves 204 , 206 , and 208 could also represent different lens assemblies acquired or tested after a baseline 208 has been established.
- the optimal focus of a newly acquired lens assembly could be compared to the baseline 208 at a particular point and the difference could be used to determine a correction coefficient for the focus curve associated with that lens assembly.
- the comparison of a lens assembly to the baseline may be made at the hyperfocal distance.
- a representative comparative difference for the three focus curves 202 , 204 , and 206 respectively is illustrated on the graph 200 by parenthesis 214 , 216 , and 218 .
- the difference 214 represents the difference between points on curve 202 and the baseline curve 208 at the hyperfocal distance of the representative lens used in one embodiment of the disclosure.
- the difference 216 represents the difference between curve 204 and the baseline curve 208 taken at the hyperfocal distance.
- the difference 218 represents the difference between curve 206 and the baseline curve 208 taken at the hyperfocal distance.
- the hyperfocal distance was chosen because all lenses are in optimal focus at the hyperfocal distance; thus this distance represents a true variance offset for a particular lens.
- the differences 214 , 216 , and 218 at the hyperfocal curve end 212 can essentially be viewed as the amount that each focus curve 202 , 204 , and 206 is “pivoted” about the minimum focus distance.
- the minimum focus distance for a particular lens is relevant in this situation because that distance often represents a physical limit fixed by the lens assembly manufacturer after the lens, lens actuator and/or image sensor has been assembled. Accordingly, at this minimum distance, there is typically very little focus variance between different representative lens assemblies.
- the differences between subsequently analyzed lens assembly optimal focus curves and the baseline curve taken at the hyperfocal distance are good indicators of the correction coefficient that needs to be applied across all of a subsequently analyzed lens assembly's values.
- the application of the correction coefficient will be discussed in greater detail in conjunction with FIG. 3 below. It will be appreciated that because optimal focus curves 204 and 206 lie beneath baseline curve 208 , the correction coefficient established by the differences 216 and 218 respectively may be a negative value. Similarly, because focus curve 202 lies above the baseline curve 208 , the correction coefficient established by the difference 214 may have a positive value.
- Differences similar to 214 , 216 , and 218 in graph 200 can be determined for lens assemblies prior to assembling them into finished products. Any such differences can then be used as the correction coefficient for the respective lens assemblies.
- the correction coefficient is determined prior to the processor using the lookup table. The correction coefficient can then be used to modify the lookup table for that particular lens.
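Determining the correction coefficient can be sketched as the signed offset from the baseline at the hyperfocal distance, where every lens is in optimal focus. All numbers below are illustrative stand-ins, not figures from graph 200:

```python
HYPERFOCAL_MM = 2300  # illustrative hyperfocal distance from the discussion

def correction_coefficient(lens_curve, baseline):
    """Signed difference from the baseline curve at the hyperfocal distance.

    Positive when the lens curve lies above the baseline (like curve 202),
    negative when it lies below (like curves 204 and 206).
    """
    return lens_curve[HYPERFOCAL_MM] - baseline[HYPERFOCAL_MM]

baseline_hf = {HYPERFOCAL_MM: 1647}      # illustrative baseline point
lens_like_202 = {HYPERFOCAL_MM: 1680}    # above the baseline
lens_like_204 = {HYPERFOCAL_MM: 1640}    # below the baseline
```

The coefficient computed once per assembled lens can then be stored and used to adjust that unit's lookup table.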
- instructions executable by the processor to determine a difference between a camera focus value and a baseline focus value are stored in memory. These instructions are also executable by the processor to adjust the values of the lookup table to account for the focus difference.
- the correction coefficient for every lens assembly can be determined and stored in memory to be used at any point in the autofocus process by the processor.
- This variance normalization or focus adjustment feature allows a company using an autofocus camera to determine a manufacturing variance or aberration for any particular lens assembly, such that all products using the lens assemblies perform substantially the same. It also offers the advantage that the videophone system's hardware, firmware and software can be established while giving the system the flexibility to easily change values depending upon the focus variance of a lens assembly.
- the range of object of focus distance values along the horizontal axis in graph 200 were chosen to span a desirable distance for using the lens assembly in a finished product.
- one range or area of distance in front of the camera that is known to be useful to the hearing-impaired community is between about 6 inches or about 150 mm to about 15 feet, or about 4.6 meters. Distances outside this range have been determined to be less useful for the purpose of having the camera pick up sign language motions or for a user to see sign language motions from further than this distance.
- a range of digital numbers can be used in the graph along the vertical axis that represent corresponding lens positions that span an area of importance or desired distance range for the particular use of the lens assembly.
- the DAC is chosen and/or configured to have the ability to convert a range of digital numbers into analog power amounts that can power the lens actuator to position the lens such that optimal focus can occur anywhere between 150 millimeters in front of the camera to about 4.6 meters in front of the camera.
- a DAC needs to be chosen with a digital range sufficient to provide converted analog power for a range of lens positions, such that when the lens is moved, it can be focused on any distance between a desired minimum object of focus distance for a particular camera application to the near limit depth of field about the lens' hyperfocal distance. This assures an optimal focus for all such distances.
- the DAC and focus components need to also be chosen with lens manufacturer's electromechanical range of motion limits for the lens in mind.
- a 12-bit DAC was chosen, which provided 4,096 digital number options that could be converted and used to power the lens actuator of one embodiment of the present invention to all positions needed to optimally focus on a distance between 150 millimeters and the lens hyperfocal distance of around 2300 millimeters.
- the DAC digital number 2000 correlates to a distance of 22 mm, or less than an inch, which is well below the minimum focus distance for the lens.
- the DAC digital number for all curves 202 , 204 , and 206 is above DAC digital number 1600 as the distance is at or near the hyperfocal distance. Accordingly, for the illustrated embodiment, a DAC was chosen that could support digital numbers between about 1600 and about 2000, which corresponded to analog power sufficient to move the lens such that every distance in the desired range of 150 millimeters to about 2300 millimeters could be in optimal focus.
- the DAC digital numbers between about 2000 and about 1600 cover a focus area of between 22 mm (or less than the minimum focus distance for the lens) and infinity.
- the 12-bit DAC used for the illustrated embodiment easily fits the desired range of 1600-2000 digital numbers within the DAC's 1-4096 digital number capacity.
- FIGS. 3 and 3 A an exemplary lookup table 300 and corresponding physical world representation 300 a of the exemplary lookup table 300 are shown respectively.
- the lookup table 300 utilizes depth of field to establish a point-to-range relationship between a DAC digital number 309 and a range of distances 312 in front of the lens where an object of focus would be in optimal focus.
- the lookup table 300 correlates multiple DAC digital numbers 309 to multiple ranges of distances 312 in a one-to-one correspondence. These ranges 312 may extend sequentially away from the camera lens 304 . In one embodiment of the lookup table 300 , DAC digital numbers 309 are chosen such that their corresponding distance ranges 312 overlap 313 . For example, the near end boundary 321 of range of distances 312 d is closer to the lens 304 than the far end boundary 323 of the immediately preceding range of distances 312 c . In one embodiment, each range of distances 312 in the plurality of ranges of distances overlaps an immediately adjacent range of distances 312 . See for example distance ranges 312 c , 312 d , and 312 e .
- the lookup table 300 includes a set of DAC digital numbers 309 that correlate to a range of distances 312 that span or include the entire desired or predetermined distance range 315 .
- a set of DAC digital numbers 309 can be chosen such that any object of focus 314 would be in optimal focus anywhere within the desired range 315 .
- the lookup table correlates between 9 and 17 ranges of distances 312 to between 9 and 17 DAC digital numbers 309 in a one-to-one correspondence.
- the desired range 315 is between the lens's 304 minimum focus distance 317 and the lens' 304 hyperfocal distance 319 .
- a first or closest range of distances 312 a would have a near boundary distance 311 that is less than or equal to the minimum focus distance and last or farthest range of distances 312 e would have a far boundary distance 325 that is greater than or equal to the hyperfocal distance 319 .
- the ratio of the area 332 of the depth of field 336 before the focal plane 330 to the area 334 of the depth of field 336 after the focal plane 330 is 1:∞.
- the far limit 340 of the depth of field 336 about the focal plane 330 when the lens 304 is focused at the hyperfocal distance is infinity, so the far end distance boundary 325 of the farthest range of distances 312 e could just as well be infinity as some value at or near the hyperfocal distance of 2300 mm shown in the lookup table.
- the plurality of ranges of distances spans a distance between the lens' minimum focus distance 317 and a distance at least halfway between the lens and the lens' hyperfocal distance. In yet another embodiment, the plurality of ranges of distances 312 spans a distance between the lens' minimum focus distance 317 and at least a near limit 338 of the depth of field 336 about a focal plane 330 at the hyperfocal distance 319 . In another embodiment, the plurality of distance ranges 312 spans at least 90% of the distance between the lens' minimum focus distance and the lens' hyperfocal distance. In another embodiment, the plurality of distance ranges 312 spans at least 90% of the distance between about 150 mm in front of the lens 304 to about 2300 mm in front of the lens 304 .
- the lookup table 300 is configured to correlate a distance 315 a from the lens to an object of focus with a DAC digital number 309 representing a lens position 307 a , such that the object of focus 314 a falls within a depth of field 336 a about a focal plane 330 a corresponding to the lens position 307 a . It will be appreciated that embodiments of the present invention allow a relatively small lookup table to be used to allow the lens 304 to focus optimally on an object of focus 314 anywhere in the desired distance range 315 , thus allowing autofocus in a shorter amount of time.
- the amount of the correction coefficient is determined for each of the DAC digital numbers in the lookup table 300 .
- the amount of correction coefficient to apply to each DAC digital number in the lookup table is dependent upon the number of ranges of distances.
- a fraction of the correction coefficient is added or subtracted to each DAC digital number in the lookup table.
- the fraction is referred to in one embodiment as the HDCC modification factor 350 .
- the modification factor is equal to the position of the range of distances (with position 1 being the closest to the lens) divided by the total number of ranges of distances. In the illustrated embodiment, there are 16 ranges of distances 312 .
- a modification factor of 1/16th of the correction coefficient is added or subtracted (depending on whether the value of the lens focus curve at the hyperfocal distance is less than or greater than the value of the baseline focus curve at the same distance) to the DAC digital number 309 a associated with the closest range of distances 312 a to the lens 304 a .
- a modification factor of 2/16ths of the correction coefficient is added or subtracted to the DAC digital number 309 b associated with the second closest range of distances 312 b and so on until a modification factor of 16/16ths (or in other words, the entire correction coefficient) is added or subtracted to the DAC digital number 309 e associated with the farthest range of distances 312 e.
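Applying the HDCC modification factor can be sketched as a graded scaling across the table: the k-th range of distances (1 = closest to the lens) receives k/N of the correction coefficient. The table values and the coefficient below are illustrative, and a four-entry table stands in for the sixteen ranges of the illustrated embodiment:

```python
def apply_hdcc(dac_numbers, correction_coefficient):
    """Add a graded fraction of the coefficient to each table entry:
    1/N of it at the near end, the full coefficient at the far end.
    A negative coefficient (lens curve below baseline) subtracts.
    """
    n = len(dac_numbers)
    return [
        dn + correction_coefficient * (k / n)
        for k, dn in enumerate(dac_numbers, start=1)
    ]

table = [2000, 1950, 1900, 1850]   # illustrative: nearest range first
adjusted = apply_hdcc(table, -16)  # illustrative negative coefficient
```

The grading mirrors the "pivot about the minimum focus distance" observation: near-lens entries need almost no correction, far entries need the full offset.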
- FIGS. 4 and 4 A a schematic of a Time of Flight sensor area 402 is shown over a field of view 404 and a region of interest 406 a is shown in a field of view 404 a respectively.
- the ToF sensor area 402 will be less than the field of view 404 a . Accordingly, it is possible for a user 408 to position themselves outside the ToF sensor area, where active autofocus techniques aren't available because the distance sensor can't determine the distance between the camera lens and an object of focus outside the ToF sensor area. In this scenario, or the scenario where the infrared beam of the distance sensor is disturbed and gives a false distance, the verification process may need to determine optimal focus.
- the system has the user 408 , 408 a determine a region of interest on which to perform contrast analysis by allowing the user to “frame” themselves. This may be accomplished by the user using a remote control to move a frame 406 a over the captured image in the field of view until it rests over the face of the user 408 , 408 a . This action allows the user to predetermine a region of interest 410 a.
- a source of infrared light from an open flame can confuse the infrared sensor.
- the videophone system is one or more of the videophone system embodiments described herein.
- the videophone system includes a camera lens, a camera lens actuator configured to move the camera lens, and an image sensor configured to capture an image received through the camera lens.
- the videophone system may also include a distance sensor configured to determine a distance from the camera lens to an object of focus.
- the videophone system may further include a digital to analog converter (DAC) configured to convert a plurality of DAC digital numbers to a plurality of analog power values, where each power value may be used to power the lens actuator to move the lens to a lens position.
- DAC digital to analog converter
- the method 500 may include the step 502 of calculating a distance between the camera lens and an object of focus using the distance sensor.
- the method 500 may also include the step 504 of correlating the calculated distance to a DAC digital number using a lookup table.
- the lookup table may be any of the lookup table embodiments described herein.
- the method 500 may also include the step 506 of converting the correlated DAC digital number to an analog power value.
- the method 500 may also include the step 508 of powering the lens actuator using the analog power value converted from the correlated DAC digital number to move the lens to a lens position wherein the object of focus is in optimal focus at the calculated distance.
- the method 500 further includes a step 510 of verifying an optimal focus for a lens position by performing contrast analysis on a predetermined region of interest.
- the method 500 wherein the step 504 further includes determining a difference between a DAC digital number associated with the lens positioned for optimal focus at a desired or predetermined distance from the lens and a baseline DAC digital number representing a lens positioned for optimal focus at the same desired or predetermined distance. This determination may be accomplished using the processes described in connection with FIG. 2 above.
- the desired or predetermined distance for determining the difference between a lens and a baseline is the hyperfocal distance for the lens.
- the baseline is one of the baseline embodiments described herein.
- the correlating step 504 may further include adjusting the lookup table to account for the difference. This adjustment may be accomplished as discussed in connection with FIG. 3 .
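The sequence of steps 502 through 508 can be sketched end to end with stand-in components. Every function, value, and scale factor below is an illustrative assumption, not the patented implementation:

```python
def autofocus(distance_sensor, lookup, dac, actuator):
    """Method 500 sketch: measure, correlate, convert, and move."""
    distance_mm = distance_sensor()   # step 502: distance to object of focus
    dn = lookup(distance_mm)          # step 504: lookup table correlation
    power = dac(dn)                   # step 506: digital-to-analog conversion
    return actuator(power)            # step 508: power the lens actuator

result = autofocus(
    distance_sensor=lambda: 500.0,                  # stand-in ToF reading (mm)
    lookup=lambda d: 1900 if d < 700 else 1700,     # stand-in two-entry table
    dac=lambda dn: dn / 4095 * 120.0,               # 12-bit DAC, assumed 120 mA full scale
    actuator=lambda ma: round(ma, 2),               # stand-in: echo the drive current
)
```

A real implementation would then perform the verification step 510 (contrast analysis on the region of interest) after the lens comes to rest.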
- a non-transitory computer-readable medium 600 storing program instructions that, when executed on one or more processors, cause the one or more processors to perform operations to accomplish the method steps of FIG. 5 and any of the videophone system functionality, processes, functions, features, aspects and algorithms described herein throughout.
- the program instructions may be in the form of computer code, software, or other implementing structures stored in memory.
- the instructions may be in the form of modules configured to perform certain functions or aspects of the videophone system.
- the non-transitory computer-readable medium 600 includes a distance calculation module 602 to calculate a distance between the camera lens and an object of focus using the distance sensor.
- the distance calculation module 602 may include interface instructions to allow the one or more processors to interface with the distance sensor.
- the module 602 may also prompt the distance sensor for data and facilitate the transfer of data between the distance sensor and the one or more processors (not shown).
- the non-transitory computer-readable medium 600 may also include a correlation module 604 for correlating the distance calculated using the distance sensor to a DAC digital number using a lookup table.
- the correlation module 604 may include instructions for determining a difference between a DAC digital number associated with the lens positioned for optimal focus at a desired or predetermined distance from the lens and a baseline DAC digital number representing a lens positioned for optimal focus at the same desired or predetermined distance.
- the desired distance for determining the difference between a lens and a baseline is the hyperfocal distance for the lens.
- the correlating module 604 may include instructions for adjusting the lookup table to account for the difference.
- the non-transitory computer-readable medium 600 may also include a movement module 606 with coded instructions to power the lens actuator using the analog power value converted from the correlated DAC digital number to move the lens to a lens position wherein the object of focus is in optimal focus at the calculated distance.
- the movement module 606 may include an interface with the DAC and with the lens actuator in order to make the necessary conversion and provide the power value to move the lens.
- the non-transitory computer-readable medium 600 includes a verification module 608 for performing passive autofocus techniques such as contrast detection and phase detection to verify that the object of focus is in optimal focus.
- the verification module 608 includes coded instructions to prompt for and/or determine a region of interest for the image captured by the image sensor at the lens position.
- the module 608 may further include instructions to perform contrast detection analysis on the region of interest to verify that the object of focus is in optimal focus.
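Contrast detection on a region of interest can be sketched with a variance-of-Laplacian sharpness score: a sharply focused ROI has strong local intensity changes and thus a high score. The function names, the particular focus measure, and the threshold convention below are illustrative assumptions, not details from the patent.

```python
def contrast_score(image, roi):
    """Variance-of-Laplacian sharpness score over a region of interest.

    `image` is a 2-D grayscale array (list of lists); `roi` is a
    (top, left, height, width) tuple. Higher scores mean sharper detail.
    """
    top, left, h, w = roi
    laps = []
    # 4-neighbour Laplacian over the interior of the ROI.
    for y in range(top + 1, top + h - 1):
        for x in range(left + 1, left + w - 1):
            laps.append(image[y - 1][x] + image[y + 1][x] +
                        image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
    mean = sum(laps) / len(laps)
    return sum((v - mean) ** 2 for v in laps) / len(laps)

def in_focus(image, roi, threshold):
    """Step 510 sketch: accept the lens position if the ROI contrast
    score clears an empirically chosen threshold."""
    return contrast_score(image, roi) >= threshold
```

A flat (defocused) patch scores zero, while a high-contrast patch such as a checkerboard scores well above it.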
- the non-transitory computer-readable medium 600 includes an I/O module 610 to facilitate the use of the system by a user.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
Description
1/f = 1/d_o + 1/d_i (1)
where f is the focal length of the camera lens, d_o is the object of focus distance, and d_i is the image distance.
Depth of Field = 2u²Nc/f² (2)
where c is a given circle of confusion size, f is the focal length, N is the F-number (f/D for aperture diameter D), and u is the object of focus distance.
Hyperfocal Distance = f²/(N×c) (3)
where f is the focal length, N is the F-number (f/D for aperture diameter D), and c is the circle of confusion size. Because the hyperfocal distance does not depend on the object of focus distance, it is well suited as the predetermined distance for baselining a lens.
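As a worked instance of Equation (3), the hyperfocal distance can be computed directly from the lens parameters; the numeric values below (a 4 mm lens at f/2 with a 0.005 mm circle of confusion) are illustrative, not taken from the patent.

```python
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance H = f²/(N×c) per Equation (3), in millimetres."""
    return focal_mm ** 2 / (f_number * coc_mm)

# Example: f = 4 mm, N = 2.0, c = 0.005 mm
# H = 16 / (2.0 × 0.005) = 1600 mm, i.e. about 1.6 m.
```

Everything from the hyperfocal distance to infinity is then within acceptable focus, which is why it serves well as a fixed calibration reference.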
c = λ×f (4)
where c is the speed of light (c = 3×10⁸ m/s), λ is one wavelength (λ = 15 m), and f is the frequency.
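Rearranging Equation (4) gives the modulation wavelength of a time-of-flight distance sensor as λ = c/f. As a hedged numeric check (the 20 MHz frequency is inferred from the λ = 15 m figure above, not stated in the text):

```python
C_LIGHT_M_S = 3.0e8  # speed of light used in Equation (4)

def tof_wavelength_m(freq_hz: float) -> float:
    """Modulation wavelength in metres: λ = c / f, rearranged from c = λ×f."""
    return C_LIGHT_M_S / freq_hz

# A 20 MHz modulation frequency yields λ = 3e8 / 2e7 = 15 m.
```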
Claims (28)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/009,567 US11726392B2 (en) | 2020-09-01 | 2020-09-01 | System, method, and computer-readable medium for autofocusing a videophone camera |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/009,567 US11726392B2 (en) | 2020-09-01 | 2020-09-01 | System, method, and computer-readable medium for autofocusing a videophone camera |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220066286A1 US20220066286A1 (en) | 2022-03-03 |
| US11726392B2 true US11726392B2 (en) | 2023-08-15 |
Family
ID=80358493
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/009,567 Active 2041-03-02 US11726392B2 (en) | 2020-09-01 | 2020-09-01 | System, method, and computer-readable medium for autofocusing a videophone camera |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US11726392B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115802161B (en) * | 2023-02-09 | 2023-05-09 | 杭州星犀科技有限公司 | Focusing method, system, terminal and medium based on self-learning |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170017136A1 (en) * | 2015-07-13 | 2017-01-19 | Htc Corporation | Image capturing device and auto-focus method thereof |
| US20170212236A1 (en) | 2011-07-15 | 2017-07-27 | Softkinetic Sensors Nv | Method and time-of-flight camera for providing distance information |
| US20180343444A1 (en) * | 2017-05-25 | 2018-11-29 | Fotonation Limited | Method for dynamically calibrating an image capture device |
| US20190342491A1 (en) * | 2018-05-02 | 2019-11-07 | Qualcomm Incorporated | Subject priority based image capture |
| CN112866542A (en) * | 2019-11-12 | 2021-05-28 | Oppo广东移动通信有限公司 | Focus tracking method and apparatus, electronic device, and computer-readable storage medium |
- 2020-09-01 US US17/009,567 patent/US11726392B2/en active Active
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170212236A1 (en) | 2011-07-15 | 2017-07-27 | Softkinetic Sensors Nv | Method and time-of-flight camera for providing distance information |
| US20180164438A1 (en) | 2011-07-15 | 2018-06-14 | Softkinetic Sensors Nv | Method and time-of-flight camera for providing distance information |
| US20170017136A1 (en) * | 2015-07-13 | 2017-01-19 | Htc Corporation | Image capturing device and auto-focus method thereof |
| US20180343444A1 (en) * | 2017-05-25 | 2018-11-29 | Fotonation Limited | Method for dynamically calibrating an image capture device |
| US20190342491A1 (en) * | 2018-05-02 | 2019-11-07 | Qualcomm Incorporated | Subject priority based image capture |
| CN112866542A (en) * | 2019-11-12 | 2021-05-28 | Oppo广东移动通信有限公司 | Focus tracking method and apparatus, electronic device, and computer-readable storage medium |
Non-Patent Citations (1)
| Title |
|---|
| English translation of CN-112866542-A (Year: 2019). * |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220066286A1 (en) | 2022-03-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9473868B2 (en) | Microphone adjustment based on distance between user and microphone | |
| EP2031442B1 (en) | Auto-focusing apparatus and method for camera | |
| EP3907544B1 (en) | Lens barrel, camera system, and imaging device | |
| US12085777B2 (en) | Zoom assembly, lens module, and electronic device | |
| US20110261252A1 (en) | Imaging system and method of operating the same | |
| KR101824936B1 (en) | Focus error estimation in images | |
| US20210352215A1 (en) | Camera device | |
| JP2013536610A (en) | Scene background blur with distance measurement | |
| CN108900763B (en) | Shooting device, electronic equipment and image acquisition method | |
| WO2017051605A1 (en) | Image capturing system and image capture control method | |
| JP2011055246A (en) | Telescopic imaging apparatus | |
| US9759994B1 (en) | Automatic projection focusing | |
| CN105163010B (en) | Camera module and electronic device | |
| CN105163061A (en) | Remote video interactive system | |
| CN111263106A (en) | Picture tracking method and device for video conference | |
| US11726392B2 (en) | System, method, and computer-readable medium for autofocusing a videophone camera | |
| CN112584001A (en) | Camera module and terminal equipment | |
| CN116055869B (en) | Video processing method and terminal | |
| US11792518B2 (en) | Method and apparatus for processing image | |
| WO2022267574A1 (en) | Zoom lens, zoom camera, and electronic device | |
| CN107147848B (en) | Automatic focusing method and real-time video acquisition system adopting same | |
| CN111133745B (en) | Camera and image display apparatus including the same | |
| CN203705786U (en) | Camera device achieving optical zooming, 3D camera device achieving optical zooming, and mobile terminal | |
| US10379338B2 (en) | Mobile terminal with a periscope optical zoom lens | |
| US20190253590A1 (en) | Camera Module |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NEW YORK Free format text: JOINDER NO. 1 TO THE FIRST LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:SORENSON IP HOLDINGS, LLC;REEL/FRAME:056019/0204 Effective date: 20210331 |
|
| AS | Assignment |
Owner name: SORENSON IP HOLDINGS, LLC, UTAH Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAXWELL, CONRAD;NELSON, MARK;SIGNING DATES FROM 20211008 TO 20211013;REEL/FRAME:057930/0128 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: OAKTREE FUND ADMINISTRATION, LLC, AS COLLATERAL AGENT, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNORS:SORENSON COMMUNICATIONS, LLC;INTERACTIVECARE, LLC;CAPTIONCALL, LLC;REEL/FRAME:067573/0201 Effective date: 20240419 Owner name: CAPTIONALCALL, LLC, UTAH Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 Owner name: SORENSON COMMUNICATIONS, LLC, UTAH Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 Owner name: SORENSON IP HOLDINGS, LLC, UTAH Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 Owner name: SORENSON IP HOLDINGS, LLC, UTAH Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 Owner name: SORENSON COMMUNICATIONS, LLC, UTAH Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 Owner name: CAPTIONALCALL, LLC, UTAH Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067190/0517 Effective date: 20240419 |
|
| AS | Assignment |
Owner name: CAPTIONCALL, LLC, UTAH Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY DATA THE NAME OF THE LAST RECEIVING PARTY SHOULD BE CAPTIONCALL, LLC PREVIOUSLY RECORDED ON REEL 67190 FRAME 517. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067591/0675 Effective date: 20240419 Owner name: SORENSON COMMUNICATIONS, LLC, UTAH Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY DATA THE NAME OF THE LAST RECEIVING PARTY SHOULD BE CAPTIONCALL, LLC PREVIOUSLY RECORDED ON REEL 67190 FRAME 517. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067591/0675 Effective date: 20240419 Owner name: SORENSON IP HOLDINGS, LLC, UTAH Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY DATA THE NAME OF THE LAST RECEIVING PARTY SHOULD BE CAPTIONCALL, LLC PREVIOUSLY RECORDED ON REEL 67190 FRAME 517. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITY INTEREST;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:067591/0675 Effective date: 20240419 |