CN108702457B - Method, apparatus and computer-readable storage medium for automatic image correction - Google Patents
- Publication number
- CN108702457B CN108702457B CN201780012963.5A CN201780012963A CN108702457B CN 108702457 B CN108702457 B CN 108702457B CN 201780012963 A CN201780012963 A CN 201780012963A CN 108702457 B CN108702457 B CN 108702457B
- Authority
- CN
- China
- Prior art keywords
- image
- imaging device
- user
- lens
- blocks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/62—Control of parameters via user interfaces
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
- G06T2207/10148—Varying focus (special mode during image acquisition)
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
Methods, apparatus, and computer-readable storage media for automatic image correction are disclosed. In one aspect, the method is operable by an imaging device including a touch sensor for performing image correction. The method may include obtaining a first image of a scene, and receiving, via the touch sensor, a touch input indicative of a selected region of the first image and having a shape corresponding to a shape of the selected region. The method may also include determining statistics indicative of visual properties of the selected region, adjusting at least one image correction parameter of the imaging device based on the determined statistics and the shape of the touch input, and obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
Description
Technical Field
The present application relates generally to digital image processing and, more particularly, to methods and systems for improving digital image correction.
Background
An imaging device, such as a digital camera, may perform automatic image correction on a captured image in order to increase the quality of the captured image without significant user intervention. The automatic image correction may involve, for example, 3A image correction functions (i.e., auto-exposure, auto-white balance, and auto-focus). For example, the 3A image correction may be based on the entire captured image, on automatic detection of objects within the image, or on a selection of points within the image by the user. Such image correction methods may be influenced by the way the region of interest is selected for 3A image correction. Accordingly, a need remains for further control of image correction through improved selection of the region of interest.
Disclosure of Invention
The systems, methods, and devices of the present disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
In one aspect, a method operable by an imaging device including a touch sensor is provided for performing image correction. The method may include obtaining a first image of a scene; receiving, via the touch sensor, a touch input indicative of a selected region of the first image and having a shape corresponding to a shape of the selected region; determining statistics indicative of visual properties of the selected region; adjusting at least one image correction parameter of the imaging device based on the determined statistics and the shape of the touch input; and obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
In another aspect, an imaging device is provided, including an image sensor; a display; a touch sensor; at least one processor; and a memory storing computer-executable instructions for controlling the at least one processor to: obtain a first image of a scene from the image sensor; control the display to display the first image; receive a touch input from the touch sensor indicating a selected region of the first image and having a shape corresponding to a shape of the selected region; determine statistics indicative of visual properties of the selected region; adjust at least one image correction parameter of the imaging device based on the determined statistics and the shape of the touch input; and obtain a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
In yet another aspect, an apparatus is provided that includes means for obtaining a first image of a scene; means for receiving a touch input indicative of a selected region of the first image and having a shape corresponding to a shape of the selected region; means for determining statistics indicative of visual properties of the selected region; means for adjusting at least one image correction parameter of an imaging device based on the determined statistics and the shape of the touch input; and means for obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
In yet another aspect, a non-transitory computer-readable storage medium is provided having instructions stored thereon that, when executed, cause a processor of a device to: obtain a first image of a scene; receive, via a touch sensor, a touch input indicative of a selected region of the first image and having a shape corresponding to a shape of the selected region; determine statistics indicative of visual properties of the selected region; adjust at least one image correction parameter of an imaging device based on the determined statistics and the shape of the touch input; and obtain a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
Drawings
Fig. 1A illustrates an example of an apparatus (e.g., a mobile communication device) of an imaging system including an image of a recordable scene in accordance with aspects of the present disclosure.
Fig. 1B is a block diagram illustrating an example of an imaging device according to aspects of the present disclosure.
Fig. 2 is an example of a first image captured by an imaging device according to aspects of the present disclosure.
Fig. 3 is an example of a region selected by a user for image correction, according to aspects of the present disclosure.
Fig. 4 is another example of a region selected by a user for image correction, according to aspects of the present disclosure.
Fig. 5 is yet another example of a region selected by a user for image correction, according to aspects of the present disclosure.
FIG. 6 shows an example of a region including multiple objects selected by a user, in accordance with aspects of the present disclosure.
Fig. 7 illustrates an example method of determining statistics for a selected region in accordance with aspects of the present disclosure.
Fig. 8 is a flow diagram illustrating an example method operable by an imaging device in accordance with aspects of the present disclosure.
Fig. 9 is a flow diagram illustrating another example method operable by an imaging device in accordance with aspects of the present disclosure.
Detailed Description
Digital camera systems or other imaging devices may perform various automated processes to correct or adjust the visual properties of captured images. Image correction may include, for example, 3A image correction functions (i.e., auto-exposure, auto-white balance, and auto-focus). The digital camera may determine visual statistics related to the current image correction function and use the determined statistics as feedback in determining correction values to adjust image correction parameters of the digital camera system. For example, in the case of auto-focus, the statistics may relate to a focus value of the captured image. The digital camera may then adjust the position of the lens of the camera, re-determine the focus value, and re-adjust the position of the lens until the optimal focus value has been obtained.
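The measure/adjust feedback loop described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: `focus_value` uses the variance of a simple 1-D Laplacian as the sharpness statistic, and `capture_at` is a hypothetical callback standing in for hardware capture at a given lens position.

```python
def focus_value(image):
    """Focus statistic: variance of a horizontal second difference
    (1-D Laplacian). Sharper edges yield higher variance; this is an
    illustrative metric, not the one the patent prescribes."""
    vals = [row[x - 1] - 2 * row[x] + row[x + 1]
            for row in image for x in range(1, len(row) - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def autofocus(capture_at, lens_positions):
    """Feedback loop: capture at each candidate lens position,
    measure the focus value, and keep the best position."""
    best_pos, best_fv = None, float("-inf")
    for pos in lens_positions:
        fv = focus_value(capture_at(pos))
        if fv > best_fv:
            best_pos, best_fv = pos, fv
    return best_pos
```

In practice the sweep would be driven by the actuator moving the physical lens; here `capture_at` simply simulates how image contrast varies with lens position.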
The autofocus algorithm may involve optimizing the focus value over the entire captured image. Since the focus of an object within an image depends on the object's distance from the lens of the camera (also referred to as the depth of the object), objects at different distances may not all be in focus at a given lens position. An auto-focus method that optimizes the focus value over the entire image may produce an acceptable focus for scenes where most objects are at a similar depth (e.g., the main focus depth). However, a user of the camera may not be able to focus on objects that are not at the main focus depth using this auto-focus method.
There are many variations of auto-focus algorithms that attempt to address the above limitations. In one such approach, the autofocus algorithm may weight the center of the captured image more heavily. Thus, the user may be able to select the depth of focus by positioning a desired object at the center of the image captured by the camera. However, this approach does not enable the user to automatically focus on an object that is not at the center of the image.
In another implementation, the camera may accept input from the user indicating the location within the image at which autofocus is performed. The user may select a location within the image, and the camera may perform autofocus based only on, or heavily weighted toward, the region corresponding to the user's selected location. In one example, the user may input the selected location via a touch input. This may enable the user to select an object that is not at the center of the image, or not at the main focal depth of the image, for autofocusing. However, this implementation may have the limitation that the user may only be able to select a single location of fixed size and shape. Larger objects or objects with irregular shapes cannot be selected for autofocusing using this method.
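One way such a touch-to-focus scheme can weight statistics toward the selected location is sketched below. The Gaussian falloff is one plausible weighting choice, not the patent's; `block_stats`, a hypothetical mapping from per-block centers to focus values, is an assumed input format.

```python
import math

def touch_weighted_focus(block_stats, touch_xy, sigma=1.0):
    """Combine per-block focus statistics into a single feedback value,
    heavily weighting blocks near the user's touch point.
    block_stats: dict mapping (bx, by) block centers -> focus value.
    sigma controls how quickly influence falls off with distance."""
    tx, ty = touch_xy
    num = den = 0.0
    for (bx, by), fv in block_stats.items():
        w = math.exp(-((bx - tx) ** 2 + (by - ty) ** 2) / (2 * sigma ** 2))
        num += w * fv
        den += w
    return num / den
```

With a small `sigma`, blocks far from the touch contribute almost nothing, approximating the "based only on the selected location" behavior; a larger `sigma` approximates the "heavily weighted" behavior.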
In yet another implementation, the camera may accept multiple positions from the user to find one or more optimal focus values based on the selected multiple positions. Accordingly, the user may select a plurality of regions in which the camera performs auto-focusing. In order for the camera to focus on each of the selected regions, the camera may be required to capture multiple images at each of the focal depths corresponding to the selected regions, or the camera may include redundant hardware components for simultaneously capturing images at different focal depths.
Each of the autofocus implementations described above may be limited in the manner in which a user may select a location of an image for autofocusing. For example, the user may only be able to select a region of fixed size and fixed shape to be used by the camera in performing the autofocus process. Thus, this limited information prevents the processor of the camera system from performing more advanced auto-focusing techniques that can more accurately focus on objects of interest to the user. It may also be difficult for a user to select an object having a size larger than the defined size and shape of the selectable location. For example, when using touch input sensors, it may be awkward for a user to select multiple locations close together in order to select a larger object. In addition, the combination of selected locations may be larger than the object the user desires to select, which may result in inaccurate autofocus processing.
Although the above has been discussed with respect to autofocus implementations of image correction, the present disclosure is also applicable to other automatic image correction techniques, such as automatic exposure and automatic white balancing. The statistics and feedback values determined by the processor may correspond to the particular automatic image correction being applied. For example, in automatic exposure, the statistics determined by the processor may relate to the brightness, contrast, etc. of the captured image. The statistics may be used by the processor as feedback to control at least one of an aperture size or a shutter speed of the camera in order to perform the automatic exposure based on the statistics. Similarly, in automatic white balancing, the processor may determine a color temperature of the image based on the selected location. The processor may alter the captured image based on the determined color temperature to compensate for the illumination of the scene. Other image correction algorithms may also be implemented within the scope of the present disclosure.
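For auto-exposure and auto-white-balance, the region statistics reduce to simple color moments over the selected pixels. The sketch below assumes 8-bit RGB pixels, a gray-world model for white balance, Rec. 601 luma weights, and a mid-gray target of 118; all of these are illustrative assumptions, not the patent's specific algorithm.

```python
def region_stats(pixels):
    """Mean R, G, B over the selected region's pixels (list of (r, g, b))."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def awb_gains(pixels):
    """Gray-world white balance restricted to the selected region:
    scale R and B so their means match the green mean (illustrative)."""
    r, g, b = region_stats(pixels)
    return g / r, 1.0, g / b

def exposure_step(pixels, target_luma=118):
    """Exposure feedback: ratio of target brightness to measured
    brightness; > 1 means increase exposure (Rec. 601 luma weights)."""
    r, g, b = region_stats(pixels)
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return target_luma / luma
```

The processor would feed `exposure_step` back into aperture/shutter control and apply `awb_gains` per channel, iterating until the region statistics converge on the targets.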
The following detailed description is directed to certain specific embodiments. However, the described techniques may be embodied in a number of different ways. It should be apparent that the aspects herein may be embodied in a wide variety of forms and that any specific structure, function, or both being disclosed herein is merely representative. Based on the teachings herein one skilled in the art should appreciate that an aspect disclosed herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. Further, such an apparatus may be implemented or such a method may be practiced using other structure, functionality, or structure and functionality in addition to or other than one or more of the aspects set forth herein.
Additionally, the systems and methods described herein may be implemented on a variety of different computing devices hosting cameras. These include mobile phones, tablets, dedicated cameras, portable computers, photo kiosks or photo printers, personal digital assistants, ultra-mobile personal computers, and mobile internet devices. They may use general purpose or special purpose computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, Personal Computers (PCs), server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Fig. 1A illustrates an example of an apparatus (e.g., a mobile communication device) of an imaging system including an image of a recordable scene in accordance with aspects of the present disclosure. The device 100 includes a display 120. The apparatus 100 may also include a camera, not shown, on the opposite side of the apparatus. The display 120 may display images captured within the field of view 130 of the camera. FIG. 1A shows an object 150 (e.g., a person) within the field of view 130 that may be captured by a camera. A processor within device 100 may perform automatic image correction of a captured image of a scene based on measured values associated with the captured image.
The device 100 may perform various automated processes to correct the visual properties of the image. In one aspect, apparatus 100 may perform automatic image correction, including one or more of auto focus, auto exposure, and auto white balance, based on a region of an image selected by a user. Aspects of the present disclosure may relate to techniques that allow a user of device 100 to select irregular regions of an image (e.g., regions having a shape and size determined by the user at the time of selection) for use as feedback during automatic image correction.
Fig. 1B depicts a block diagram illustrating an example of an imaging device according to aspects of the present disclosure. The imaging device 200 (also referred to interchangeably herein as a camera) may include a processor 205 operatively connected to an image sensor 214, an optional depth sensor 216, a lens 210, an actuator 212, an aperture 218, a shutter 220, a memory 230, an optional storage device 275, a display 280, an input device 290, and an optional flash 295. In this example, the illustrated memory 230 may store instructions to configure the processor 205 to perform functions with respect to the imaging device 200. In this example, memory 230 may include instructions for directing processor 205 to perform image correction according to aspects of the present disclosure.
In the illustrative embodiment, light enters the lens 210 and is focused on the image sensor 214. In some embodiments, the lens 210 is part of an autofocus lens system that may include multiple lenses and adjustable optical elements. In one aspect, the image sensor 214 utilizes a Charge Coupled Device (CCD). In another aspect, the image sensor 214 utilizes a Complementary Metal Oxide Semiconductor (CMOS) sensor. The lens 210 is coupled to an actuator 212 and is movable by the actuator 212 relative to the image sensor 214. The actuator 212 is configured to move the lens 210 in a series of one or more lens movements during an autofocus operation, e.g., to adjust the lens position to change the focus of an image. When the lens 210 reaches the boundary of its range of movement, the lens 210 or actuator 212 may be referred to as saturated. In the illustrative embodiment, the actuator 212 is an open-loop Voice Coil Motor (VCM) actuator. However, the lens 210 may be actuated by any method known in the art, including closed-loop VCM, micro-electro-mechanical systems (MEMS), or Shape Memory Alloy (SMA).
In certain embodiments, the imaging device may include a plurality of image sensors 214. Each image sensor 214 may have a corresponding lens 210 and/or aperture 218. In one embodiment, the plurality of image sensors 214 may be of the same type (e.g., Bayer sensors). In this embodiment, the imaging device 200 may capture multiple images simultaneously via the multiple image sensors 214, which may be focused at different focal depths. In other embodiments, the image sensors 214 may include different image sensor types that generate different information about the captured scene. For example, different image sensors 214 may be configured to capture wavelengths of light outside the visible spectrum (e.g., infrared or ultraviolet).
The depth sensor 216 is configured to estimate the depth of an object in the image to be captured by the imaging device 200. The object may be selected by the user via a user input, received through the input device 290, corresponding to the area of the object. The depth sensor 216 may be configured to perform depth estimation using any technique suitable for determining or estimating the depth of an object or scene with respect to the imaging device 200, including auto-focusing techniques for estimating depth such as phase detection auto-focus, time-of-flight auto-focus, laser auto-focus, or dual camera auto-focus. The techniques may also be applied using depth or position information received by the imaging device 200 from, or about, objects within the scene.
The display 280 is configured to display images captured via the lens 210 and the image sensor 214, and may also be used to implement configuration functions of the imaging device 200. In one implementation, the display 280 may be configured to display one or more regions of the captured image selected by a user of the imaging device 200 via the input device 290. In some embodiments, the imaging device 200 may not include the display 280.
The input device 290 may take many forms depending on the implementation. In some implementations, the input device 290 may be integrated with the display 280 so as to form a touch screen display. In other implementations, the input device 290 may include a separate key or button on the imaging device 200. These keys or buttons may provide input for navigation of menus displayed on the display 280. In other embodiments, the input device 290 may be an input port. For example, the input device 290 may provide an operative coupling of another device to the imaging device 200. The imaging device 200 may then receive input from an attached keyboard or mouse via the input device 290. In still other embodiments, the input device 290 may be remote from the imaging device 200 and communicate with the imaging device 200 over a communication network, such as a wireless network or a hardwired network. In still other embodiments, the input device 290 may be a motion sensor that may receive input in three dimensions via tracking changes in the position of the input device (e.g., the motion sensor is used as input for a virtual reality display). The input device 290 may allow a user to select a region of an image via input of continuous or substantially continuous lines/curves that may form a curve (e.g., a line), a closed loop, or an open loop.
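The stroke described above arrives as an ordered list of touch samples. One way to turn an open or closed loop into a region over which statistics can be gathered is to implicitly close the stroke and rasterize it with an even-odd point-in-polygon test. This is an illustrative sketch; the patent does not prescribe a particular rasterization method, and `stroke_to_mask` is a hypothetical helper.

```python
def stroke_to_mask(stroke, width, height):
    """Rasterize a touch stroke into a boolean region mask.
    stroke: ordered list of (x, y) touch samples; an open loop is
    implicitly closed by connecting the last sample back to the first.
    Uses the even-odd ray-casting rule at each pixel center."""
    def inside(x, y):
        hit = False
        n = len(stroke)
        for i in range(n):
            x1, y1 = stroke[i]
            x2, y2 = stroke[(i + 1) % n]  # wrap to close the loop
            if (y1 > y) != (y2 > y):
                # x-coordinate where the edge crosses the horizontal ray
                if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    hit = not hit
        return hit
    return [[inside(x + 0.5, y + 0.5) for x in range(width)]
            for y in range(height)]
```

A real implementation would also smooth the stroke and handle self-intersections, but the mask produced here is already sufficient to restrict statistics gathering to the user-drawn region.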
The memory 230 may be used by the processor 205 to store data that is dynamically created during operation of the imaging device 200. In some examples, the memory 230 may include a separate working memory in which dynamically created data is stored. For example, instructions stored in the memory 230 may be stored in the working memory when executed by the processor 205. The working memory may also store dynamic run-time data, e.g., heap or stack data utilized by programs executing on the processor 205. The storage device 275 may be used to store data created by the imaging device 200. For example, images captured via the image sensor 214 may be stored on the storage device 275. Like the input device 290, the storage device 275 may also be remotely located, i.e., not integral to the imaging device 200, and may receive captured images over a communication network.
The memory 230 may be considered a computer-readable medium and stores instructions for directing the processor 205 to perform various functions in accordance with the present disclosure. For example, in some aspects, the memory 230 may be configured to store instructions that cause the processor 205 to perform the method 400, the method 500, or portions thereof, as described below and as illustrated in fig. 8 and 9.
In one implementation, the instructions stored in memory 230 may include instructions for performing autofocus that configure processor 205 to determine a lens position of lens 210 in a series of lens positions that may include a desired lens position for capturing an image. The determined lens position may not include every possible lens position within the range of lens positions, but may include only a subset of the possible lens positions within the range of lens positions. The determined lens positions may be separated by a step size of one or more possible lens positions between the determined lens positions. For example, the determined lens position may include: a first lens position at one end of a range of lens positions, the first lens position representing a first focus distance; and a second lens position at the other end of the range of lens positions, the second lens position representing a second focus distance. The determined lens position may further include one or more intermediate lens positions, each intermediate lens position representing a focus distance between the first focus distance and the second focus distance, wherein the determined lens positions are separated by a step size of one or more possible lens positions between the determined lens positions in the first range of lens positions. In an illustrative embodiment, the processor 205 may determine a lens position in a range of lens positions based at least in part on the estimate of the depth of the object. The instructions may also configure the processor 205 to determine or generate a focus value for an image captured at one or more lens positions within a range of lens positions. The desired lens position for capturing the image may be the lens position having the largest focus value. The instructions may also configure the processor 205 to determine or generate a focus value profile or data representing a focus value profile based on the determined or generated focus value. 
The instructions may also configure the processor 205 to determine a lens position in a search range of lens positions based at least in part on the generated focus value, or to determine a focus value profile or data representing a focus value profile based on a previous search range of lens positions.
Examples of various regions that may be selected by a user for performing automatic image correction according to aspects of the present disclosure will now be described with respect to fig. 2-7. Fig. 2 is an example of a first image captured by an imaging device according to aspects of the present disclosure. The image of fig. 2 contains a center object 305 held by a man 310 shown on the right side of the image. The central object 305 partially obstructs the woman 315 in the background of the image. Additionally, another woman's face 320 may be seen in the foreground on the left side of the image. In the following description with respect to fig. 2-7, the selection of regions by the user may be described with respect to an embodiment in which the input device 290 is a touch sensor. However, those skilled in the art will appreciate that the user may also select regions of the image via other input devices 290, such as via a motion sensor, separate keys or buttons, or via predetermined input received from a network connection (hardwired or wireless).
The image of fig. 2 may be an image captured using an automatic image correction technique. For example, an object located at the center of the image may be automatically selected as the subject about which to perform automatic image correction. In the case of autofocus, the processor 205 may determine the depth of focus of the center object 305 and adjust the position of the lens based on the determined depth of focus in order to focus the captured image on the center object 305.
Fig. 3 is an example of a region selected by a user for image correction in accordance with aspects of the present disclosure. In the embodiment of fig. 3, the user may select the man 310 shown on the right side of the image for automatic image correction. This may be accomplished by the user drawing a closed loop 325 around the man 310. Because the closed loop 325 does not contain the center object 305, the processor 205 may perform automatic image correction while excluding or reducing the effect of statistics determined from the center object 305 and/or other regions of the captured image. Thus, the processor 205 may perform automatic image correction based on the man 310, whose region is contained in the right portion of the image. In one example of autofocus, the processor 205 may determine a main depth of focus within the region selected by the closed loop 325, or may determine a range of depths for the capture of multiple images spanning the full depth of focus of the closed loop 325. This will be described in more detail below with respect to fig. 8.
Fig. 4 is another example of a region selected by a user for image correction, according to aspects of the present disclosure. As shown in fig. 4, the user may select a woman 315 in the background as the area for automatic image correction. Because the center object 305 blocks a large portion of the woman 315 from view, it may be difficult for the user to draw a closed loop around the woman 315 while excluding the center object 305 from selection. Thus, in the selected area illustrated in FIG. 4, the user may draw a curve or line 330 that overlaps the woman 315 in the background. Since curve 330 does not overlap center object 305, the user may be able to easily select woman 315 without including center object 305 in the selected area.
Referring to fig. 5, a region selected by a user for image correction is shown, according to aspects of the present disclosure. The image of fig. 5 contains a number of stuffed animals at different focal depths. In the example of fig. 5, the user may attempt to select a number of stuffed animals via multi-touch inputs 335, 340, 345, 350, and 355 (e.g., by placing five fingers on the desired stuffed animals, respectively). It may be difficult for the user to accurately and simultaneously place five fingers on the stuffed animals because of their proximity to one another within the image.
In accordance with one or more aspects of the present disclosure, fig. 6 shows an example of a region including a plurality of objects selected by a user. As shown in fig. 6, the user may draw a closed loop 360 around the stuffed animals. This may indicate to the processor 205 the area with respect to which automatic image correction is to be performed. The processor 205 may be able to automatically detect each of the five stuffed animals within the selected area for more accurate image correction. For example, the processor 205 may perform facial recognition only within the selected area to identify the stuffed animals. Depending on the content of the scene, the processor 205 may perform other methods of automatically detecting objects within the selected area, which may then be used in determining statistics indicative of visual properties of the objects as discussed below.
In a related aspect, fig. 7 illustrates an example method of determining statistics for a selected area of interest, wherein the selected area of interest has an open curved shape 370. As shown in fig. 7, the processor 205 may divide the captured image into a plurality of blocks 365 or a grid of blocks 365. The processor 205 may determine statistics for each of the blocks 365. The statistics may depend on the type of automatic image correction being performed. For example, during autofocus, the processor 205 may determine the focus value as a statistic for each block 365. The focus value may be a digital representation of the total or average focus for block 365 (e.g., the distance of the block from best focus). In the auto-exposure example, the statistics may be a digital representation of the total or average brightness or saturation of block 365. In a white balance example, the statistics may be a digital representation of the total or average color temperature of block 365.
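As one illustration of the per-block statistics, the sketch below divides a frame into a grid of blocks and computes the average brightness of each block, as in the auto-exposure example. It makes the simplifying assumption that the image is a 2-D list of luma values; the function name is illustrative:

```python
def block_statistics(image, block_h, block_w):
    """Divide `image` (a 2-D list of luma values) into a grid of blocks and
    return a dict mapping (block_row, block_col) to average brightness."""
    rows, cols = len(image), len(image[0])
    stats = {}
    for by in range(0, rows, block_h):
        for bx in range(0, cols, block_w):
            pixels = [image[y][x]
                      for y in range(by, min(by + block_h, rows))
                      for x in range(bx, min(bx + block_w, cols))]
            stats[(by // block_h, bx // block_w)] = sum(pixels) / len(pixels)
    return stats
```

The same loop structure applies to the other statistics the text mentions (focus value, color temperature); only the per-pixel measure changes.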
The processor 205 may use the statistics as feedback in an automatic image correction method. In the example of autofocus, the processor 205 may determine a corrected lens position based on the focus value determined for each block 365. The corrected lens position may be used to determine an amount by which to move the lens to position the lens 210 in the corrected or optimal lens position. In the example of automatic exposure, the processor 205 may determine a corrected aperture size or a corrected shutter speed based on the determined digital representation of the total or average brightness or saturation of the block 365. Similarly, in an automatic white balance example, the processor 205 may determine a white balance compensation parameter or a color compensation parameter based on the determined digital representation of the total or average color temperature of the block 365.
In one implementation, processor 205 may weight the statistics for block 365 in the selected region higher than the statistics for block 365 not in the selected region. This may allow automatic image correction to focus on the selected area when performing automatic image correction. In one implementation, processor 205 may give a statistical weight of zero for blocks 365 that are not in the selected region. In this implementation, processor 205 may not be required to calculate statistics for blocks 365 that are not in the selected region.
When the user touch input is a curve 370 as shown in fig. 7, the processor 205 may weight the statistics corresponding to the blocks 365 that overlap the curve 370 higher than the statistics corresponding to the remaining blocks 365 of the image. When the user touch input is a closed loop 325 (see fig. 3), the processor 205 may weight the statistics corresponding to the blocks 365 within the closed loop 325, or corresponding to the region defined by the closed loop 325, higher than the statistics corresponding to the remaining blocks 365 of the image. For example, the processor 205 may give a weight of 0.8 (e.g., 80%) to the statistics corresponding to the blocks within the closed loop 325 or corresponding to the region defined by the closed loop 325, and a weight of 0.2 (e.g., 20%) to the statistics corresponding to the remaining blocks 365. In some implementations, as discussed above with respect to fig. 6, the processor 205 may automatically find the locations of objects within the selected area. In these implementations, the processor 205 may weight the statistics for the blocks 365 corresponding to the detected objects higher than the statistics for the rest of the selected region.
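One way the curve-overlap weighting might be realized is sketched below: the touch points of a drawn curve are mapped to the grid blocks they fall in, and the 0.8/0.2 split from the example above is distributed across the overlapped and remaining blocks. Distributing each group's share evenly over its blocks is one possible interpretation of the example, and the names are assumptions:

```python
def weights_from_curve(touch_points, block_h, block_w, n_block_rows, n_block_cols,
                       selected_weight=0.8, other_weight=0.2):
    """Map curve touch points (x, y) in pixels to block indices and return a
    dict of per-block weights: 0.8 shared by overlapped blocks, 0.2 by the rest."""
    selected = {(y // block_h, x // block_w) for (x, y) in touch_points}
    total = n_block_rows * n_block_cols
    n_sel = len(selected)
    weights = {}
    for r in range(n_block_rows):
        for c in range(n_block_cols):
            if (r, c) in selected:
                weights[(r, c)] = selected_weight / n_sel
            else:
                weights[(r, c)] = other_weight / (total - n_sel)
    return weights
```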
In one example, when the imaging device 200 has not received a user touch input, the processor 205 may determine the final statistical value according to equation 1:

FinalStat = Σ_{i=1}^{N} weight_i × stats_i (Equation 1)

where N is the total number of statistical regions (e.g., blocks) in the image, stats_i is the statistic of the ith region of the image, and weight_i is the weight assigned to the statistic of the ith region.
When the imaging device 200 has received a user touch input, the processor 205 may determine a final statistic by weighting the statistics according to equation 2:

FinalStat = Σ_{i=1}^{N} weight_i × stats_i, with higher weights assigned to the M user-selected regions (Equation 2)

where M is the total number of user-selected statistical regions, N is the total number of statistical regions (e.g., blocks) in the image, stats_i is the statistic of the ith region of the image, and weight_i is the weight assigned to the statistic of the ith region. In equation 2, the value of M is less than the value of N. Additionally, the user-selected statistical regions may be selected as discussed above (e.g., as the blocks that overlap or are enclosed by the user touch input).
In another example, the final statistics from all user-selected regions may be weighted equally. For example, a weight_i equal to 1/M may be used to weight each user-selected region. In this example, regions not selected by the user may be given a weight of zero. This is shown by equation 3:

FinalStat = (1/M) Σ_{i ∈ S} stats_i (Equation 3)

where S is the set of M user-selected statistical regions.
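Under the weighted-sum reading of equations 1-3 above, a minimal sketch might look like the following; the function names are illustrative:

```python
def final_statistic(stats, weights):
    """Equations 1 and 2: FinalStat = sum_i weight_i * stats_i."""
    return sum(w * s for w, s in zip(weights, stats))

def equal_selected_weights(n_blocks, selected_indices):
    """Equation 3: weight_i = 1/M for the M user-selected blocks, 0 otherwise."""
    m = len(selected_indices)
    return [1.0 / m if i in selected_indices else 0.0 for i in range(n_blocks)]
```

With the equation-3 weights, blocks outside the user selection contribute nothing to the final statistic, so they need not be computed at all, matching the implementation noted earlier in which unselected blocks receive zero weight.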
although the above has been described with respect to a single image sensor 214, aspects of the present disclosure may also be adapted for use by an imaging device 200 that includes multiple image sensors 214. For example, the imaging device 200 may include a plurality of image sensors 214 of the same type. In this example, when the imaging device 200 receives a touch input (e.g., an open curve, line, or closed loop) from a user, the touch input may indicate an area of the image that includes multiple focal depths. The processor 205 may determine a plurality of primary depths based on the regions, and the image sensor 214 may respectively capture images of the scene at the determined primary depths. In one embodiment, this may be accomplished by the processor 205 determining a weighted final statistic for each of the blocks in the selected region and determining a number of main depths for the selected region based on the weighted final statistic. The image sensor 214 may capture images at each of the primary depths simultaneously.
In another example, the imaging device 200 may include multiple image sensors 214 that capture different types of information from a scene. The different image sensors 214 may be configured to capture additional information of the selected area based on the spectrum of light that may be captured by the image sensors 214. One implementation of an imaging device 200 that includes different image sensors 214 may be an unmanned aircraft that may perform feature extraction and focus on an area selected by a user. For example, the processor 205 may determine a weighted final statistic for each of the blocks in the selected region. The image sensor 214 may zoom in on the selected area and capture a new set of images. The processor 205 may determine more detailed weighted final statistics for each of the blocks in the selected region based on the magnified captured image. The drone may reposition itself and the image sensor 214 for a better view of the selected area based on the detailed statistics and capture subsequent images of the selected area.
Example flow diagrams for irregular area autofocus
Exemplary embodiments of the present disclosure will now be described in the context of an autofocus procedure. It should be noted, however, that autofocus is described only as an exemplary automatic image correction procedure, and the method 400 described with respect to fig. 8 may be modified to be applicable to other automatic image correction procedures, such as automatic exposure and automatic white balancing.
Fig. 8 is a flow diagram illustrating an example method operable by the imaging device 200 or components thereof for autofocus, according to aspects of the present disclosure. For example, the steps of the method 400 illustrated in fig. 8 may be performed by the processor 205 of the imaging device 200. For convenience, the method 400 is described as being performed by the processor 205 of the imaging device 200.
The method 400 begins at block 401. At block 405, the processor 205 captures a first image of a scene. At block 410, the processor 205 displays the first image on the display 280 and prompts the user of the imaging device 200 to select an area of the first image for which to perform autofocus. At block 415, the processor 205 receives an input from the user indicating a selected region of the first image. The processor 205 may receive the input from the user via an input device 290, such as a touch sensor. At block 420, the processor 205 may determine a corrected lens position based on the selected region. In some implementations, this may involve the processor 205 dividing the first image into a plurality of blocks, determining a focus value for each of the blocks, and/or determining the corrected lens position based on weighting the focus values for blocks in the selected area more heavily than those for blocks not in the selected area.
At block 425, the processor 205 adjusts the position of the lens to a corrected position. At block 430, the processor 205 captures a second image of the scene at the corrected lens position. In some implementations, this may include a feedback loop in which the processor 205 captures an intermediate image, re-determines the focus value, and re-determines the corrected lens position if the focus value is not greater than the threshold focus value. Once the processor 205 determines that the focus value for the intermediate image is greater than the threshold focus value, the processor 205 may determine that the selected area is at an optimal or acceptable focus level. The method ends at block 435.
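The feedback loop just described can be sketched as follows; `move_lens_to`, `measure_focus`, and `propose_position` are assumed callbacks standing in for the lens actuator, the focus-value computation over the selected area, and the lens-position search, not device APIs:

```python
def focus_feedback_loop(move_lens_to, measure_focus, propose_position,
                        threshold, max_iters=10):
    """Adjust the lens, capture and measure an intermediate image, and repeat
    until the focus value for the selected area exceeds `threshold`."""
    position = propose_position(None)  # initial corrected lens position
    for _ in range(max_iters):
        move_lens_to(position)
        value = measure_focus()
        if value > threshold:
            return position  # selected area at an acceptable focus level
        position = propose_position(value)  # re-determine corrected position
    return position
```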
In certain implementations, the selected region may include multiple focal depths (e.g., objects within the selected region may be located at different depths from the imaging device 200). In this case, the processor 205 may capture a plurality of second images of the scene at a plurality of intervals within the depth range of the selected area. This may allow the user to select one of the images from the plurality of captured second images to be saved to memory 230. Alternatively, the processor 205 may perform post-processing on the second image to create a composite image in which all or most of the selected area is in focus.
Fig. 9 is a flow diagram illustrating another example method operable by an imaging device in accordance with aspects of the present disclosure. The steps illustrated in fig. 9 may be performed by imaging device 200 or components thereof. For example, the method 500 may be performed by the processor 205 of the imaging device 200. For convenience, the method 500 is described as being performed by the processor 205 of the imaging device.
The method 500 begins at block 501. At block 505, the processor 205 obtains a first image of a scene. The processor 205 obtaining the first image may include receiving the first image from an image sensor 214 of the imaging device 200. The image sensor 214 may generate an image of the scene based on light received via the lens 210. At block 510, the processor 205 receives a touch input via a touch sensor (e.g., input device 290) that is indicative of a selected area of the first image and has a shape that corresponds to a shape of the selected area. The touch input may be an input to the touch sensor by a user of the imaging device 200. The user touch input may be an open curve (e.g., a line) or a closed loop drawn by the user on the touch sensor.
At block 515, the processor 205 determines statistics indicative of the visual properties of the selected region. The processor 205 may also determine statistics indicative of visual properties of the remaining region of the first image. At block 520, the processor 205 adjusts image correction parameters of the imaging device 200 based on the determined statistics and the shape of the touch input. At block 530, the processor obtains a second image of the scene based on the adjusted image correction parameters of the imaging device 200. Obtaining the second image by the processor 205 may include receiving the second image from the image sensor 214 of the imaging device 200 or generating the second image via image processing. The method ends at block 535.
Other considerations
In some embodiments, the circuits, processes, and systems discussed above may be used in a wireless communication device, such as apparatus 100. The wireless communication device may be an electronic device for wirelessly communicating with other electronic devices. Examples of wireless communication devices include cellular telephones, smart phones, Personal Digital Assistants (PDAs), e-readers, gaming systems, music players, netbooks, wireless modems, laptop computers, tablet devices, and the like.
The wireless communication device may include: one or more image sensors; two or more image signal processors; and memory including instructions or modules for carrying out the processes discussed above. The device may also have a processor to load data, instructions and/or data from memory, one or more communication interfaces, one or more input devices, one or more output devices (e.g., display devices), and a power supply/interface. The wireless communication device may additionally include a transmitter and a receiver. The transmitter and receiver may be collectively referred to as a transceiver. The transceiver may be coupled to one or more antennas for transmitting and/or receiving wireless signals.
The wireless communication device may be wirelessly connected to another electronic device (e.g., a base station). A wireless communication device may alternatively be referred to as a mobile device, a mobile station, a subscriber station, a User Equipment (UE), an access terminal, a mobile terminal, a user terminal, a subscriber unit, or the like. Examples of wireless communication devices include laptop or desktop computers, cellular telephones, smart phones, wireless modems, e-readers, tablet devices, gaming systems, and the like. The wireless communication device may operate in accordance with one or more industry standards, such as the third generation partnership project (3 GPP). Thus, the generic term "wireless communication device" can include wireless communication devices (e.g., access terminals, User Equipment (UE), remote terminals, etc.) described with different nomenclature of industry standards.
The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term "computer-readable medium" refers to any available medium that can be accessed by a computer or a processor. By way of example, and not limitation, such media can comprise Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. It should be noted that computer-readable media may be tangible and non-transitory. The term "computer program product" refers to a computing device or processor in combination with code or instructions (e.g., a "program") that may be executed, processed, or computed by the computing device or processor. As used herein, the term "code" may refer to software, instructions, code or data that is executable by a computing device or processor.
The methods disclosed herein include one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
It should be noted that as used herein, the terms "couple", "coupling", "coupled", or other variations of the word "couple" may indicate either an indirect connection or a direct connection. For example, if a first component is "coupled" to a second component, the first component may be indirectly connected to the second component or directly connected to the second component. As used herein, the term "plurality" denotes two or more. For example, a plurality of components indicates two or more components.
The term "determining" encompasses a wide variety of actions, and thus "determining" can include calculating, computing, processing, deriving, studying, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.
The phrase "based on" does not mean "based only on," unless explicitly specified otherwise. In other words, the phrase "based on" describes both "based only on" and "based at least on."
In the preceding description, specific details are given to provide a thorough understanding of the examples. However, it will be understood by those of ordinary skill in the art that the examples may be practiced without these specific details. For example, electrical components/devices may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other examples, such components, other structures and techniques may be shown in detail to further explain the examples.
Headings are included herein for reference and to aid in locating various sections. These headings are not intended to limit the scope of the concepts described with respect thereto. Such concepts may have applicability throughout the present specification.
It is also noted that the examples may be described as a process which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently and the process can be repeated. In addition, the order of the operations may be rearranged. A process terminates when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a software function, its termination corresponds to a return of the function to the calling function or the main function.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (22)
1. A method operable by an imaging device including a touch sensor for performing image correction, the method comprising:
obtaining a first image of a scene, the first image divided into a plurality of blocks;
receiving, via the touch sensor, a user-drawn input indicative of a selected area of the first image and having a shape corresponding to a shape of the selected area;
determining statistics indicative of visual properties of each of the blocks;
determining that the shape of the user-drawn input comprises an open curve;
weighting the statistics for blocks in the first image that overlap the open curve higher than the statistics for blocks in the first image that do not overlap the open curve;
adjusting at least one image correction parameter of the imaging device based on the weighted statistics; and
obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
2. The method of claim 1, wherein the at least one image correction parameter of the imaging device comprises one or more of: lens position, aperture size, shutter speed, and white balance compensation parameters.
3. The method of claim 1, wherein the method further comprises defining the selected region as a region of the first image corresponding to the user-drawn input based on the shape of the user-drawn input.
4. The method of claim 1, further comprising:
receiving, via the touch sensor, a second user-drawn input indicative of a second selected area of the first image and having a shape corresponding to a shape of the second selected area;
determining that the shape of the second user-drawn input comprises a closed loop; and
weighting the statistics for blocks in the first image enclosed by the closed loop higher than the statistics for blocks in the first image not enclosed by the closed loop.
5. The method of claim 1, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the method further comprising:
determining a plurality of positions of the lens that respectively correspond to different focal depths of the selected region; and
receiving a third image at each of the determined positions of the lens.
6. The method of claim 1, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the method further comprising:
detecting at least one object within the selected area;
determining the position of the lens corresponding to a depth of focus of the object; and
adjusting the position of the lens based on the determined position of the lens.
7. An imaging device comprising:
an image sensor;
a display;
a touch sensor;
at least one processor; and
a memory storing computer-executable instructions for controlling the at least one processor to:
obtaining a first image of a scene from the image sensor, the first image divided into a plurality of blocks;
controlling the display to display the first image;
receiving, from the touch sensor, a user-drawn input indicative of a selected area of the first image and having a shape corresponding to a shape of the selected area;
determining statistics indicative of visual properties of each of the blocks;
determining that the shape of the user-drawn input comprises an open curve;
weighting the statistics for blocks in the first image that overlap the open curve higher than the statistics for blocks in the first image that do not overlap the open curve;
adjusting at least one image correction parameter of the imaging device based on the weighted statistics; and
obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
8. The imaging device of claim 7, wherein the at least one image correction parameter of the imaging device comprises one or more of: lens position, aperture size, shutter speed, and white balance compensation parameters.
9. The imaging device of claim 7, wherein the computer-executable instructions are further for controlling the at least one processor to define the selected region as a region of the first image corresponding to the user-drawn input based on the shape of the user-drawn input.
10. The imaging device of claim 7, wherein the computer-executable instructions are further for controlling the at least one processor to:
receiving, via the touch sensor, a second user-drawn input indicative of a second selected area of the first image and having a shape corresponding to a shape of the second selected area;
determining that the shape of the second user-drawn input comprises a closed loop; and
weighting the statistics for blocks in the first image enclosed by the closed loop higher than the statistics for blocks in the first image not enclosed by the closed loop.
11. The imaging device of claim 7, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the computer-executable instructions further to control the at least one processor to:
determining a plurality of positions of the lens that respectively correspond to different focal depths of the selected region; and
receiving a third image at each of the determined positions of the lens.
12. The imaging device of claim 7, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the computer-executable instructions further to control the at least one processor to:
detecting at least one object within the selected area;
determining the position of the lens corresponding to a depth of focus of the object; and
adjusting the position of the lens based on the determined position of the lens.
13. An apparatus, comprising:
means for obtaining a first image of a scene, the first image divided into a plurality of blocks;
means for receiving a user-drawn input indicating a selected region of the first image and having a shape corresponding to a shape of the selected region;
means for determining statistics indicative of visual properties of each of the blocks;
means for determining that the shape of the user-drawn input comprises an open curve;
means for weighting the statistics for blocks in the first image that overlap the open curve higher than the statistics for blocks in the first image that do not overlap the open curve;
means for adjusting at least one image correction parameter of an imaging device based on the weighted statistics; and
means for obtaining a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
14. The apparatus of claim 13, wherein the at least one image correction parameter of the imaging device comprises one or more of: lens position, aperture size, shutter speed, and white balance compensation parameters.
15. The apparatus of claim 13, wherein the apparatus further comprises means for defining the selected region as a region of the first image corresponding to the user-drawn input based on the shape of the user-drawn input.
16. The apparatus of claim 13, further comprising:
means for receiving, via a touch sensor, a second user-drawn input indicative of a second selected area of the first image and having a shape corresponding to a shape of the second selected area;
means for determining that the shape of the second user-drawn input comprises a closed loop; and
means for weighting the statistics for blocks in the first image that are enclosed by the closed loop higher than the statistics for blocks in the first image that are not enclosed by the closed loop.
17. The apparatus of claim 13, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the apparatus further comprising:
means for determining a plurality of positions of the lens that respectively correspond to different depths of focus of the selected region; and
means for receiving a third image at each of the determined positions of the lens.
18. The apparatus of claim 13, wherein the at least one image correction parameter of the imaging device comprises a position of a lens, the apparatus further comprising:
means for detecting at least one object within the selected area;
means for determining the position of the lens corresponding to a depth of focus of the object; and
means for adjusting the position of the lens based on the determined position of the lens.
19. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a processor of a device to:
obtain a first image of a scene, the first image divided into a plurality of blocks;
receive, via a touch sensor, a user-drawn input indicative of a selected area of the first image and having a shape corresponding to a shape of the selected area;
determine statistics indicative of visual properties of each of the blocks;
determine that the shape of the user-drawn input comprises an opening curve;
weight the statistics for blocks in the first image that overlap the opening curve higher than blocks in the first image that do not overlap the opening curve;
adjust at least one image correction parameter of an imaging device based on the weighted statistics; and
obtain a second image of the scene based on the adjusted at least one image correction parameter of the imaging device.
20. The non-transitory computer-readable storage medium of claim 19, wherein the at least one image correction parameter of the imaging device comprises one or more of: lens position, aperture size, shutter speed, and white balance compensation parameters.
21. The non-transitory computer-readable storage medium of claim 19, wherein the non-transitory computer-readable storage medium further has stored thereon instructions that, when executed, cause the processor to define the selected area as an area of the first image corresponding to the user-drawn input based on the shape of the user-drawn input.
22. The non-transitory computer-readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to:
receive, via the touch sensor, a second user-drawn input indicative of a second selected area of the first image and having a shape corresponding to a shape of the second selected area;
determine that the shape of the second user-drawn input comprises a closed loop; and
weight the statistics for blocks in the first image enclosed by the closed loop higher than blocks in the first image not enclosed by the closed loop.
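For illustration only: the open-curve weighting recited in claims 1, 13, and 19 can be sketched as follows. This is not the patented implementation; the block size, the mean-luminance statistic, and the weight values are illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

def block_stats(image, block_size):
    """Divide a grayscale image into square blocks and compute a
    per-block mean-luminance statistic (one simple 'visual property')."""
    h, w = image.shape
    bh, bw = h // block_size, w // block_size
    # Reshape into (rows of blocks, block height, cols of blocks, block width)
    # and average over each block's pixels.
    return image[:bh * block_size, :bw * block_size].reshape(
        bh, block_size, bw, block_size).mean(axis=(1, 3))

def weight_open_curve(stats, curve_points, block_size, hi=4.0, lo=1.0):
    """Weight blocks touched by a user-drawn open curve higher than
    blocks the curve does not overlap, then return the weighted average
    that would drive the image-correction decision.

    curve_points: iterable of (x, y) pixel coordinates along the stroke.
    """
    weights = np.full(stats.shape, lo)
    for x, y in curve_points:
        r, c = int(y) // block_size, int(x) // block_size
        if 0 <= r < stats.shape[0] and 0 <= c < stats.shape[1]:
            weights[r, c] = hi  # block overlaps the stroke
    return float((stats * weights).sum() / weights.sum())
```

In this sketch the weighted average would then be compared against a target (e.g., mid-gray for exposure) to adjust the correction parameter, biased toward the stroked region.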
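The closed-loop case of claims 16 and 22 weights blocks *enclosed* by the loop rather than blocks the stroke overlaps. A minimal sketch, assuming the loop is approximated as a polygon of touch points and membership is tested per block center with ray casting (all names and weight values are illustrative, not from the patent):

```python
import numpy as np

def weight_closed_loop(stats, loop_points, block_size, hi=4.0, lo=1.0):
    """Return per-block weights: hi for blocks whose centers fall inside
    the user-drawn closed loop, lo otherwise.

    loop_points: list of (x, y) vertices of the loop, treated as a polygon.
    """
    def inside(px, py):
        # Standard ray-casting point-in-polygon test.
        hit = False
        n = len(loop_points)
        for i in range(n):
            x1, y1 = loop_points[i]
            x2, y2 = loop_points[(i + 1) % n]
            if (y1 > py) != (y2 > py):  # edge straddles the horizontal ray
                xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < xcross:
                    hit = not hit
        return hit

    weights = np.full(stats.shape, lo)
    for r in range(stats.shape[0]):
        for c in range(stats.shape[1]):
            cx, cy = (c + 0.5) * block_size, (r + 0.5) * block_size
            if inside(cx, cy):
                weights[r, c] = hi
    return weights
```

Testing block centers rather than rasterizing the full loop keeps the sketch short; a production path would likely rasterize the region mask once.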
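Claims 17 and 18 tie the lens position to the depth of focus of the selected region. One common way to realize this is a contrast-based sweep: capture an image at each candidate lens position and keep the position that maximizes sharpness inside the selected region. The following is a sketch under that assumption (the gradient-energy score and all names are illustrative; the claims only require capturing an image at each determined position):

```python
import numpy as np

def best_lens_position(capture, positions, region):
    """Sweep candidate lens positions; return the one maximizing a
    contrast (sharpness) score inside the selected region.

    capture(pos) -> 2D grayscale image captured at lens position `pos`.
    region: (r0, r1, c0, c1) row/column bounds of the selected area.
    """
    r0, r1, c0, c1 = region

    def sharpness(img):
        roi = np.asarray(img, dtype=float)[r0:r1, c0:c1]
        gy, gx = np.gradient(roi)          # per-pixel intensity gradients
        return float((gx ** 2 + gy ** 2).mean())  # gradient energy

    return max(positions, key=lambda p: sharpness(capture(p)))
```

With a real camera, `capture` would move the lens actuator and read a frame; here it is any callable, which also makes the sweep easy to simulate.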
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/059,097 | 2016-03-02 | ||
US15/059,097 US9838594B2 (en) | 2016-03-02 | 2016-03-02 | Irregular-region based automatic image correction |
PCT/US2017/012701 WO2017151222A1 (en) | 2016-03-02 | 2017-01-09 | Irregular-region based automatic image correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108702457A (en) | 2018-10-23
CN108702457B (en) | 2020-09-15
Family
ID=58018203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780012963.5A Expired - Fee Related CN108702457B (en) | 2016-03-02 | 2017-01-09 | Method, apparatus and computer-readable storage medium for automatic image correction |
Country Status (4)
Country | Link |
---|---|
US (1) | US9838594B2 (en) |
EP (1) | EP3424207A1 (en) |
CN (1) | CN108702457B (en) |
WO (1) | WO2017151222A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10785400B2 (en) | 2017-10-09 | 2020-09-22 | Stmicroelectronics (Research & Development) Limited | Multiple fields of view time of flight sensor |
US11138699B2 (en) * | 2019-06-13 | 2021-10-05 | Adobe Inc. | Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images |
CN114466129A (en) * | 2020-11-09 | 2022-05-10 | 哲库科技(上海)有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN112556653B (en) * | 2020-12-16 | 2022-07-26 | 全芯智造技术有限公司 | Pattern measuring method in semiconductor manufacturing process, electronic device, and storage medium |
CN112819742B (en) * | 2021-02-05 | 2022-05-13 | 武汉大学 | Event field synthetic aperture imaging method based on convolutional neural network |
CN114964169B (en) * | 2022-05-13 | 2023-05-30 | 中国科学院空天信息创新研究院 | Remote sensing image adjustment method for image space object space cooperative correction |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408709A (en) * | 2007-10-10 | 2009-04-15 | 鸿富锦精密工业(深圳)有限公司 | Image viewfinding device and automatic focusing method thereof |
CN105141843A (en) * | 2015-09-01 | 2015-12-09 | 湖南欧斐网络科技有限公司 | Scale positioning method and device based on shooting target image by camera device |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6956612B2 (en) * | 2001-07-31 | 2005-10-18 | Hewlett-Packard Development Company, L.P. | User selectable focus regions in an image capturing device |
US20050177315A1 (en) | 2004-02-06 | 2005-08-11 | Srinka Ghosh | Feature extraction of partial microarray images |
JP4755490B2 (en) * | 2005-01-13 | 2011-08-24 | オリンパスイメージング株式会社 | Blur correction method and imaging apparatus |
US20070222859A1 (en) | 2006-03-23 | 2007-09-27 | Coban Research And Technologies, Inc. | Method for digital video/audio recording with backlight compensation using a touch screen control panel |
US7664384B2 (en) | 2006-11-07 | 2010-02-16 | Sony Ericsson Mobile Communications Ab | User defined autofocus area |
US8249391B2 (en) | 2007-08-24 | 2012-08-21 | Ancestry.com Operations, Inc. | User interface method for skew correction |
JP5398156B2 (en) * | 2008-03-04 | 2014-01-29 | キヤノン株式会社 | WHITE BALANCE CONTROL DEVICE, ITS CONTROL METHOD, AND IMAGING DEVICE |
US8259208B2 (en) | 2008-04-15 | 2012-09-04 | Sony Corporation | Method and apparatus for performing touch-based adjustments within imaging devices |
US8452105B2 (en) | 2008-05-28 | 2013-05-28 | Apple Inc. | Selecting a section of interest within an image |
US8237807B2 (en) | 2008-07-24 | 2012-08-07 | Apple Inc. | Image capturing device with touch screen for adjusting camera settings |
US8885977B2 (en) | 2009-04-30 | 2014-11-11 | Apple Inc. | Automatically extending a boundary for an image to fully divide the image |
EP2667231B1 (en) * | 2011-01-18 | 2017-09-06 | FUJIFILM Corporation | Auto focus system |
CN103765276B (en) * | 2011-09-02 | 2017-01-18 | 株式会社尼康 | Focus evaluation device, imaging device, and program |
JP6083987B2 (en) | 2011-10-12 | 2017-02-22 | キヤノン株式会社 | Imaging apparatus, control method thereof, and program |
US20140247368A1 (en) | 2013-03-04 | 2014-09-04 | Colby Labs, Llc | Ready click camera control |
KR102068748B1 (en) | 2013-07-31 | 2020-02-11 | 삼성전자주식회사 | Digital photography apparatus, method for controlling the same, and apparatus and method for reproducing image |
KR102155093B1 (en) * | 2014-08-05 | 2020-09-11 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
- 2016-03-02: US application US15/059,097, published as US9838594B2 (not active: Expired - Fee Related)
- 2017-01-09: EP application EP17704846.9A, published as EP3424207A1 (not active: Withdrawn)
- 2017-01-09: CN application CN201780012963.5A, published as CN108702457B (not active: Expired - Fee Related)
- 2017-01-09: WO application PCT/US2017/012701, published as WO2017151222A1 (active: Application Filing)
Also Published As
Publication number | Publication date |
---|---|
EP3424207A1 (en) | 2019-01-09 |
US20170257557A1 (en) | 2017-09-07 |
WO2017151222A1 (en) | 2017-09-08 |
CN108702457A (en) | 2018-10-23 |
US9838594B2 (en) | 2017-12-05 |
WO2017151222A9 (en) | 2018-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108702457B (en) | Method, apparatus and computer-readable storage medium for automatic image correction | |
CN108764091B (en) | Living body detection method and apparatus, electronic device, and storage medium | |
CN107950018B (en) | Image generation method and system, and computer readable medium | |
CN109889724B (en) | Image blurring method and device, electronic equipment and readable storage medium | |
CN107258077B (en) | System and method for Continuous Auto Focus (CAF) | |
CN108076278B (en) | Automatic focusing method and device and electronic equipment | |
CN104333748A (en) | Method, device and terminal for obtaining image main object | |
EP3038345B1 (en) | Auto-focusing method and auto-focusing device | |
CN108463993B (en) | Method and apparatus for preventing focus wobble in phase detection Autofocus (AF) | |
CN108924428A (en) | A kind of Atomatic focusing method, device and electronic equipment | |
EP3057304A1 (en) | Method and apparatus for generating image filter | |
JP2018509657A (en) | Extended search range for depth-assisted autofocus | |
CN104363378A (en) | Camera focusing method, camera focusing device and terminal | |
US20140307054A1 (en) | Auto focus method and auto focus apparatus | |
KR102661185B1 (en) | Electronic device and method for obtaining images | |
CN108648280B (en) | Virtual character driving method and device, electronic device and storage medium | |
CN107787463A (en) | The capture of optimization focusing storehouse | |
JP5968379B2 (en) | Image processing apparatus and control method thereof | |
CN106154688B (en) | Automatic focusing method and device | |
CN106922181B (en) | Direction-aware autofocus | |
JP2013195577A (en) | Imaging device, imaging method, and program | |
JP6645711B2 (en) | Image processing apparatus, image processing method, and program | |
CN115623313A (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN110910304B (en) | Image processing method, device, electronic equipment and medium | |
CN111724300A (en) | Single picture background blurring method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200915 Termination date: 20220109 |