US20120257072A1 - Systems, methods, and computer-readable media for manipulating images using metadata
- Publication number
- US20120257072A1 (application Ser. No. 13/081,277)
- Authority
- US
- United States
- Prior art keywords
- image
- zoom
- focus point
- smart
- metadata
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T3/40 — Scaling the whole image or part thereof (under G06T3/00 — Geometric image transformation in the plane of the image; G06T — Image data processing or generation, in general)
- G06T11/00 — 2D [Two Dimensional] image generation
- G09G5/00 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G06T2210/22 — Cropping (under G06T2210/00 — Indexing scheme for image generation or computer graphics)
Abstract
Many cameras have the ability to capture an image and generate metadata associated with the image. Such image metadata may include focus point metadata information that may be indicative of the potential focus points available to the camera as well as which one or more of those potential focus points were utilized to capture the image. As the location of a focus point used during image capture is generally intended to coincide with the location of the photographer's main area of interest within the image, such focus point metadata can be accessed during image editing and used to zoom in to the captured image at that focus point location. Performing a “smart-zoom” based on an image's focus point metadata may save time and reduce frustration during the image editing process.
Description
- This can relate to systems, methods, and computer-readable media for manipulating images and, more particularly, to systems, methods, and computer-readable media for manipulating images using metadata.
- The advent of high quality digital cameras has enabled professional and novice photographers alike to capture and edit images in ways that were unthinkable just a short time ago. Modern day cameras abound with features, including autofocus, image stabilization, and face detection, that are designed to make every shot picture-perfect. As photographers working with digital cameras no longer need to have each picture developed, there can be a tendency to take far more pictures than one would have previously taken with a conventional film camera, where every picture taken costs a specific amount of money to develop.
- One of the results of all these trends is that a user (e.g., an image editor) may often have to interact with an image processing system to sift through a large number of digital images to find the few images that are worth retaining. However, currently available image processing systems may not adequately provide image manipulation techniques, such as image zooming, that can allow image editors to easily review large numbers of digital images.
- Systems, methods, and computer-readable media for manipulating images using metadata are disclosed. In some embodiments, systems, methods, and computer-readable media are disclosed for performing smart-zoom manipulation on images using metadata. An image can include among its associated metadata, focus point metadata information, which may be indicative of the position of one or more focus points utilized to capture the image. For example, a camera or any other suitable imager used to capture an image may be provided with any suitable pattern of potential focus points, one or more of which may be activated or otherwise utilized when capturing an image. As just one particular example, a camera may include a set of forty-five focus points that may be arranged in a pattern of 7, 10, 11, 10, 7, where each number may represent the number of focus points in each of five rows forming a diamond pattern of potential focus points. Any one or more of those forty-five focus points may be utilized as an active point of focus when capturing an image. In many instances, one might assume that the active focus point would be positioned at the center of the image (e.g., the sixth focus point in the row with eleven focus points in the above example). However, that is often not the case. In many instances, a camera may utilize a focus point that is not positioned at the center of the pattern of potential focus points, such that the in-focus object of a captured image may not be positioned in the center of the image.
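The 7-10-11-10-7 diamond arrangement of forty-five potential focus points described above can be sketched in code. This is purely illustrative and not part of the disclosed embodiments; the patent does not specify coordinates, so the normalized coordinate scheme and function name below are assumptions.

```python
# Illustrative sketch (not from the patent): generate normalized (x, y)
# positions for a 45-point focus array laid out in rows of 7, 10, 11, 10,
# and 7 points, forming a rough diamond.
ROW_COUNTS = [7, 10, 11, 10, 7]

def focus_point_grid(row_counts=ROW_COUNTS):
    """Return a list of (x, y) positions in [0, 1] x [0, 1], row by row.

    Each row is centered horizontally and shares the pitch of the widest
    row, so the shorter rows sit inside the longer ones.
    """
    points = []
    n_rows = len(row_counts)
    pitch = 1.0 / max(row_counts)
    for row, count in enumerate(row_counts):
        y = (row + 0.5) / n_rows
        start_x = 0.5 - pitch * (count - 1) / 2
        for i in range(count):
            points.append((start_x + i * pitch, y))
    return points

grid = focus_point_grid()
```

Under this scheme, the sixth point of the middle eleven-point row (the example "center" focus point mentioned above) lands at the center of the frame.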
- An image processing system may access captured images and display the images to a user such that the user may view and edit the images. In some embodiments, the image processing system may download the images from a distinct imager, while, in other embodiments, a camera or any other suitable imager may be provided with an image processing system such that a user may view and edit images on the imager itself. A captured image may be displayed in full within a display window (i.e., “scaled-to-fit”), such that the entire content of the image may be viewed by the user. An image may also be displayed zoomed-in about a particular smart-zoom point that may be determined by the focus point metadata associated with the image. The term “smart-zoom” may refer to any manipulation that may center the zoom area of an image about a point (e.g., a “smart-zoom point”) that may be determined by the focus point metadata associated with the image. Generally, when an image is zoomed-in, only a portion of the image may be displayed to the user within a display window. Accordingly, in some instances, an image displayed at its native resolution (i.e., 1× or 100%) may be considered zoomed-in if the entire image does not fit within the display window.
- An image may be displayed as zoomed-in about a particular smart-zoom point automatically or at the request of a user. A smart-zoom point may be the position of a particular focus point utilized by the camera when capturing the image, as may be indicated by the focus point metadata information associated with the image. The smart-zoom point may also, in some embodiments, be chosen from among several utilized focus points or otherwise calculated based upon the focus point metadata. By displaying a zoomed-in portion of a captured image based on the focus point metadata associated with the image, a user may more easily determine whether or not the captured image is worth retaining or editing in some way.
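One way to realize the smart-zoom behavior described above is to compute a crop rectangle centered on the metadata-supplied smart-zoom point and clamped to the image borders. The following is a minimal sketch under assumed pixel coordinates; the helper name and signature are hypothetical, not taken from the patent.

```python
# Sketch (hypothetical helper, not from the patent): compute the crop
# rectangle for a smart-zoom centered on a focus point, clamped so the
# crop never extends past the image borders.
def smart_zoom_crop(image_w, image_h, focus_x, focus_y, zoom=2.0):
    """Return (left, top, width, height) of the zoomed-in region.

    focus_x/focus_y are pixel coordinates of the smart-zoom point taken
    from the image's focus point metadata; zoom is the zoom factor.
    """
    crop_w = image_w / zoom
    crop_h = image_h / zoom
    # Center on the focus point, then clamp to the image bounds.
    left = min(max(focus_x - crop_w / 2, 0), image_w - crop_w)
    top = min(max(focus_y - crop_h / 2, 0), image_h - crop_h)
    return (left, top, crop_w, crop_h)
```

Clamping matters for off-center focus points: at 2× on a 4000×3000 image, a focus point near the top-left corner yields a crop pinned to that corner rather than a window hanging off the image.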
- In some embodiments, a user can start with a scaled-to-fit view of an image captured by a camera. In these and other embodiments, a number of features may be provided by the image processing system for purposes of enhancing the viewing and editing process of the user. For example, a representation of the array of potential focus points available to the imager that captured the image may be overlaid on the displayed image, and the one or more particular focus points of the array actually used by the imager to capture the image can be highlighted in the overlaid representation in order to quickly and clearly display the position of the one or more focus points of the captured image to the user. In this way, a user may easily determine both whether the content of the entire image is desirable and whether the in-focus portion of the image is positioned correctly with respect to a particular subject of the image. The user may then zoom-in to the focus point for a more detailed view of the subject. Alternatively, the image may initially be presented to the user as smart-zoomed in to a point based on the focus point metadata associated with the image, such that the user may immediately determine whether or not the in-focus portion of the image is positioned correctly with respect to the content of the image. In some embodiments, a user may also view a thumbnail of the full image while in a zoomed-in mode. The thumbnail view option may be beneficial, for instance, to give the user the full context of the image while in a zoomed-in mode. The focus point overlay, zoom, and thumbnail view options can, in some embodiments, be implemented as buttons or keyboard shortcuts to allow the user to efficiently proceed through a set of captured images.
- The user may also view a set of images in an array according to some embodiments. An array view may be useful, for example, for sorting through a large number of images at once. If the images are displayed with their focus point arrays, a user may be able to quickly determine which images are likely to be in focus on the intended subjects. The user can then tag those images for further viewing and editing. Tagging an image may, for example, require the user to click a check box.
- In some embodiments, an image processing system may be configured to recognize a particular subject within an image and its particular position within the image (e.g., by utilizing a subject recognition algorithm). For example, an image processing system may be configured to detect the position of a subject's face within a captured image (e.g., as provided by the face recognition feature available in iPhoto™ by Apple Inc. of Cupertino, Calif.). Such image subject recognition may be employed in conjunction with focus point metadata by an image processing system to make educated guesses about which images are likely to be worth retaining. As an example, if a focus point of an image overlaps or is within a specific distance from a detected face within the image, an image processing system may determine that the image is probably focused correctly and worth retaining. In such embodiments, the system may automatically prioritize that image for further viewing/editing.
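The overlap-or-within-a-distance heuristic above can be sketched as a point-to-rectangle distance test. This is an illustrative reading of the paragraph, not the patent's algorithm; the function name, rectangle representation, and threshold are assumptions.

```python
# Sketch (hypothetical heuristic, not from the patent): flag an image as
# probably worth retaining when a utilized focus point overlaps or lies
# within a threshold distance of a detected face rectangle.
import math

def focus_near_face(focus_xy, face_rect, max_distance=50.0):
    """face_rect is (left, top, width, height) in pixels."""
    fx, fy = focus_xy
    left, top, w, h = face_rect
    # Distance from the focus point to the nearest point of the rectangle
    # (zero if the focus point is inside the rectangle, i.e., overlapping).
    dx = max(left - fx, 0, fx - (left + w))
    dy = max(top - fy, 0, fy - (top + h))
    return math.hypot(dx, dy) <= max_distance
```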
- The above and other aspects and advantages of the invention will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
- FIG. 1 is a schematic view of an image processing system according to at least one embodiment;
- FIG. 2 is a user-interface of an image processing system for viewing images according to at least one embodiment;
- FIG. 3 is a user-interface of an image processing system for setting smart-zoom user preferences according to at least one embodiment;
- FIG. 4 is a view of a display window of an image processing system showing a full image with a focus point overlay according to at least one embodiment;
- FIG. 5A is a view of a display window of an image processing system showing a zoomed-in image with smart-zoom disabled according to at least one embodiment;
- FIG. 5B is a view of a display window of an image processing system showing a zoomed-in image with smart-zoom enabled according to at least one embodiment;
- FIG. 6 is a view of a display window of an image processing system showing a zoomed-in image with a full-image thumbnail according to at least one embodiment;
- FIG. 7 is a view of a display window of an image processing system showing multiple images with focus point overlays according to at least one embodiment;
- FIG. 8A is a view of a display window of an image processing system showing a full image with multiple focus points according to at least one embodiment;
- FIG. 8B is a view of a display window of an image processing system showing a zoomed-in image with multiple focus points according to at least one embodiment;
- FIG. 9 is a flow chart of a process for performing image zooming according to at least one embodiment; and
- FIG. 10 is a flow chart of a process for performing image zooming according to at least one embodiment.
- Systems, methods, and computer-readable media for manipulating images using metadata are provided and described with reference to FIGS. 1-10.
- An image processing system may be configured to allow a user to view images in either a “scaled-to-fit” mode or at a particular, fixed zoom level (e.g., 1× or 2×). The user may also be provided with the ability to switch back and forth between these two modes using a keyboard “shortcut” (e.g., ‘z’ or ‘Ctrl-+’), a menu option, a mouse gesture, or any other suitable option. In the case of the keyboard shortcut, the image may simply be zoomed by a predetermined zoom level about the center of the image, even though the center may often not be the photographer's primary area of interest and/or the in-focus portion of the image. In the case of the mouse gesture, the image can be zoomed to the location of a mouse click on the image. While this approach may be somewhat of an improvement, it relies on the user trying to guess what the photographer's area of interest was and/or to determine the in-focus portion of the image, while also requiring precise aim by the user. Incorrect or imprecise zooming can be very frustrating and time consuming for a user working with a large number of high-resolution images. According to some embodiments, systems, methods, and computer-readable media are provided for zooming or otherwise manipulating an image using metadata associated with the image, such as focus point metadata.
- Many high-quality cameras on the market today have autofocus capability, and many may provide the photographer with the ability to choose a particular focus point from an ever-growing number of potential focus points available to the camera. Depending on the camera, there may be only one potential focus point or an array of up to forty-five or more potential focus points. A photographer may manually choose one or more of the potential focus points to be utilized for capturing a particular image, or the camera may do so automatically.
- Cameras may also be configured to generate and save “metadata,” or data about a captured image, along with the captured image. Image metadata may include information regarding, for example, who owns the photo, exposure settings, keywords about the image, date, time, and location data, and the like. Some cameras may also have the ability to record, as focus point metadata, which of their potential focus points were used to capture the image. As the location of the one or more focus points actually used during image capture may coincide with each portion of the image that is in-focus, the focus point metadata can be accessed during image editing and can then be used to identify that location on the captured image to a user and/or to zoom in to that location of the captured image. Performing a “smart-zoom” based on an image's focus point metadata may beneficially save time and reduce frustration during the image editing process.
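In practice, focus point data is typically stored in vendor-specific EXIF MakerNote fields with no standardized layout, so an image processing system would normalize it first. The sketch below models one possible normalized form; every field name here is hypothetical and not defined by the patent or by any EXIF standard.

```python
# Sketch only: focus point metadata layouts vary by camera vendor, so this
# models a normalized representation an image processing system might
# extract after decoding the vendor data. All field names are hypothetical.
def parse_focus_metadata(metadata):
    """Return (pattern, active_indices) from a normalized metadata dict."""
    pattern = metadata.get("focus_pattern", [])       # e.g., [7, 10, 11, 10, 7]
    active = metadata.get("active_focus_points", [])  # indices into the array
    total = sum(pattern)
    # Discard out-of-range indices rather than failing on odd metadata.
    active = [i for i in active if 0 <= i < total]
    return pattern, active

pattern, active = parse_focus_metadata(
    {"focus_pattern": [7, 10, 11, 10, 7], "active_focus_points": [22, 99]}
)
```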
- FIG. 1 is a schematic view of an image processing system 100 according to at least one embodiment. System 100 may include a processor 101, a display 105, a user-input device 107, and a memory 109. System 100 may also optionally include an imager 103. Processor 101 may be included within, for example, a computer or any other suitable electronic device that can store or otherwise access images captured by imager 103. Imager 103 can be any device (e.g., a digital SLR camera) capable of capturing images along with focus point metadata. A user can import images to memory 109, coupled to processor 101, from imager 103. Alternatively, images may be downloaded, obtained, or otherwise accessed by processor 101 from any other suitable source. Processor 101 may include any processing circuitry operative to control the operations and performance of one or more components of image processing system 100. In some embodiments, processor 101 may be used to run operating system applications, firmware applications, or any other suitable application or program, such as an image editing application (e.g., iPhoto™ and/or Aperture™ by Apple Inc. of Cupertino, Calif.), that may be configured to manipulate an image based on one or more predetermined parameters (e.g., focus point metadata). For example, processor 101 may load a user interface program or other application program (e.g., a program stored in memory 109 or on another device or server) to determine how certain data may be manipulated on certain components (e.g., how image data may be manipulated on display 105). Display 105 may be coupled to processor 101 and may provide a user with a visual interface for viewing and potentially editing captured images. User-input device 107 may be any device suitable to allow a user to interact with processor 101. For example, user-input device 107 may be a keyboard and/or a mouse. In some embodiments, display 105 and user-input device 107 may be a single component, such as a touch screen, that may receive input information through a user's touch of a display screen and that may also provide visual information to a user via that same display screen. In some embodiments, all elements of system 100 may be encapsulated in a single device (e.g., a camera). -
FIG. 2 shows an example of a user-interface 200 that may be provided by an image processing system (e.g., system 100) to a user for viewing and editing images according to one or more embodiments. In some embodiments, user-interface 200 may be provided to a user by display 105 of system 100 and may include an image viewing window or an image display window 232, a zoom-in button 202, a zoom-out button 204, a smart-zoom toggle button 206, an auto-zoom toggle button 208, a focus point overlay toggle button 210, a thumbnail toggle button 212, and/or an array-view toggle button 214. Persons skilled in the art will appreciate that some of these buttons may be optional, and that other buttons/functions can be added. An image 230 may be displayed in window 232 to a user for the purposes of viewing and/or editing. Although user-interface 200 shows these functions as on-screen buttons, they may instead be provided through other suitable input mechanisms (e.g., keyboard shortcuts).
- Zoom-in button 202 may allow a user to zoom in on image 230 by a predetermined zoom factor. For example, if the current view mode is 1× (or 100%), pressing zoom-in button 202 may increase the zoom to, for example, 2× (or 200%). Further activations of zoom-in button 202 may, for example, increase the zoom to 4×, 8×, 16×, etc. Zoom-out button 204 may perform the inverse function of zoom-in button 202 (i.e., zoom-out). Pressing zoom-out button 204 can decrease the zoom from, for example, 4× to 2× or 2× to 1×. Although the examples above assume a 2× zoom factor, any zoom factor can be chosen by the user (see, e.g., preference 323 of FIG. 3 below), and/or there can be system default settings which can be used to determine how much zooming should occur prior to any customization by the user.
- In some embodiments, user-interface 200 can include smart-zoom toggle button 206. Smart-zoom toggle button 206 can allow a user to enable or disable a smart-zoom feature. A smart-zoom feature may, according to some embodiments, zoom in on an area of the image associated with one or more focus points that may be determined from metadata associated with the image. This can occur, for example, when a user presses zoom-in button 202, and/or it may occur by default when the user selects an image to view (e.g., an image processing system may routinely display an image as a zoomed-in image, rather than as the entire image that was captured by the photographer). The focus point or points can be saved along with the image as metadata when the image is captured (e.g., by imager 103 of FIG. 1). When smart-zoom toggle button 206 is disabled, on the other hand, pressing zoom-in button 202 may simply zoom to the center of image 230. Other zoom options may be available when smart-zoom toggle button 206 is disabled. For example, a user may use a mouse gesture (e.g., clicking a button or using a scroll wheel) to zoom in on a particular area of image 230. In some embodiments, the smart-zoom feature can be automatically enabled if focus point metadata is available for a particular image being displayed and/or automatically disabled if focus point metadata is not available for a particular image being displayed.
- User-interface 200 may also include auto-zoom toggle button 208. Auto-zoom toggle button 208 can allow a user to choose to automatically zoom to a predetermined level. For example, when auto-zoom toggle button 208 is enabled, all images may be displayed at 2× by default. When auto-zoom toggle button 208 is not enabled, the default display mode for images may be “scaled-to-fit,” which may display image 230 in full (e.g., at the zoom level that causes the entire image as captured by the photographer to fill window 232 in at least one dimension). As an example, when auto-zoom toggle button 208 is enabled, a user scrolling through a set of images one-by-one may be presented with each image at the default zoom factor. However, if auto-zoom toggle button 208 is disabled, the user may be presented with, for instance, a view of each image scaled-to-fit the provided window. Moreover, even if a default setting is selected, each image may be viewed at a different zoom level based on various factors or image metadata (e.g., orientation, etc.).
- In some embodiments, focus point overlay toggle button 210 may also be included in user-interface 200. Enabling focus point toggle button 210 can turn on a focus point overlay 220. Focus point overlay 220 may represent at least a portion of a focus point array associated with the camera used to capture image 230 (e.g., imager 103 of FIG. 1). As shown in FIG. 2, for example, focus point overlay 220 may include forty-five focus points, each of which may be associated with a particular potential focus point available to the imager used to capture image 230. However, an imager can have any suitable number of potential focus points. One or more highlighted focus points 222 of overlay 220 may represent the one or more actual focus points that may have been used by the imager to capture image 230. Each focus point of overlay 220 may be displayed with respect to an image in any suitable way (e.g., as translucent, transparent, or opaque boxes or any other suitable shape that may be positioned appropriately with respect to the displayed image), and each highlighted focus point 222 can be highlighted or otherwise distinguished from other focus points in any suitable fashion to show the user which particular focus point or focus points of overlay 220 were used to capture the image (e.g., each highlighted focus point 222 may be of a different color, translucency, size, or shape than non-highlighted focus points). The number and position of focus points in the array depicted by overlay 220, as well as the number and position of highlighted focus points 222, may be determined by focus point metadata associated with the image being displayed in window 232. In some embodiments, focus point toggle button 210 may be configured to display only the one or more highlighted focus points 222 and not any other focus points of overlay 220.
- According to further embodiments, user-interface 200 may also include thumbnail toggle button 212. Enabling thumbnail toggle button 212 may generate a fully zoomed-out, but reduced-size, version of image 230. That thumbnail image may be displayed along with a zoomed-in portion of the image (see, e.g., thumbnail image 670 of FIG. 6 below). Alternatively, thumbnail toggle button 212 may be configured to generate a zoomed-in portion of image 230 as a thumbnail image that may be displayed along with a fully zoomed-out version of image 230 (e.g., the opposite of what is shown in FIG. 6).
- User-interface 200 can also include image array-view toggle button 214. When image array-view toggle button 214 is enabled, editing/viewing window 232 may display a view where a user can quickly scan a number of images at once to decide whether to retain or delete one or more of them (see, e.g., FIG. 7). -
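The stepped behavior of zoom-in button 202 and zoom-out button 204 (1× ↔ 2× ↔ 4× ↔ …) can be sketched as a simple clamped multiplication. The function name, the clamping bounds, and the default factor below are assumptions for illustration only.

```python
# Sketch (assumed behavior): step the zoom level up or down by a fixed
# zoom factor, as the zoom-in/zoom-out buttons described above might.
def step_zoom(current, direction, factor=2.0, minimum=1.0, maximum=16.0):
    """direction is +1 to zoom in, -1 to zoom out."""
    new = current * factor if direction > 0 else current / factor
    # Keep the level within the system's supported zoom range.
    return min(max(new, minimum), maximum)
```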
FIG. 3 is an example of a user-interface 300 that may be provided by an image processing system (e.g., system 100) to a user for setting smart-zoom user preferences according to some embodiments. Preferences chosen in user-interface 300 may define default behavior for many aspects of an image processing system. Under the Image Smart-Zoom Preferences heading 311, a user may have the option, for example: to choose to always enable smart-zoom when focus point metadata is available at preference 313; to have the system always ask before enabling smart-zoom at preference 315; or to disable smart-zoom at preference 317.
- Under the Single Focus Point Smart-Zoom Preferences heading 319, a user may have the option, for example: to smart-zoom to an exact zoom level at preference 321 (e.g., to a specific smart-zoom factor, such as 2× or 200%, that may be chosen at preference 323); or to smart-zoom to the image edge at preference 325. Choosing the smart-zoom to image edge preference 325 may be useful if a focus point of the image is off-center, at least by a particular distance. In that case, the image's focus point may be chosen as the smart-zoom point and the image may be zoomed to a level that maximizes the viewable area of the image while maintaining the focus point in the center of the viewing window. Alternatively, in some embodiments, the smart-zoom point may be chosen such that, at a particular zoom factor (e.g., 1× or 2×), the focus point is as close to the center of the zoomed-in image as possible while also filling the viewing window in at least one direction with image content.
- Options under the Multi Focal Point Smart-Zoom Preferences heading 327 may allow a user to decide how to perform a smart-zoom function on images with more than one utilized or activated focus point (e.g., as may be indicated by focus point metadata associated with the image). One option may be to smart-zoom to boundaries defined by focus points at preference 329. In that case, the image may be zoomed to a level that does not place any of those focus points outside the image viewing window. A user may define the maximum or minimum distance that a focus point may be from an edge of the viewing window at preference 331 for any given focus point (e.g., 90% of the distance from the center of the image to the edge). A de-facto smart-zoom point of the image may be established and may then be positioned at the center of the image viewing window of the zoomed-in image. Further zooming may, for example, zoom in on the de-facto smart-zoom point. Otherwise, a user may choose the option to smart-zoom to the center of the area defined by the multiple focus points at preference 333 with a particular zoom factor at preference 335 (e.g., 2× or 200%). In that instance, the system may base the de-facto smart-zoom point for a captured image on an average location that exists between each of the activated or utilized focus points designated by the camera in the metadata of that image. Any suitable method of determining the center of the focus points, such as a least squares approach, may also be used to determine the de-facto smart-zoom point.
- A user may also choose from one or more automatic smart-zoom options 337. For example, a user may choose automatic smart-zoom preference 339 to automatically view a smart-zoomed-in image by default, or manual smart-zoom preference 341 to view an image in a scaled-to-fit screen view or any other suitable view by default.
- Buttons OK 343 and Cancel 345 may be utilized by a user to save or cancel the chosen options of interface 300, respectively.
- The various preferences shown in FIG. 3 are for purposes of example only. Various preferences may be added to or removed from the settings of user-interface 300. -
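The "smart-zoom to image edge" preference described above (the largest view that keeps an off-center focus point centered without running past the image borders) admits a simple geometric reading: the crop's half-extents are limited by the distance from the focus point to the nearest edges. This sketch is one possible interpretation, not the patent's algorithm, and the function name is hypothetical.

```python
# Sketch (one possible reading of the smart-zoom-to-image-edge option):
# find the largest crop, in the image's aspect ratio, that is centered on
# the focus point without extending past any image edge.
def zoom_to_image_edge(image_w, image_h, focus_x, focus_y):
    """Return (left, top, width, height) of the largest centered crop."""
    # Largest half-extents allowed in each dimension.
    half_w = min(focus_x, image_w - focus_x)
    half_h = min(focus_y, image_h - focus_y)
    # Shrink to the image's aspect ratio; the tighter constraint wins.
    scale = min(half_w / (image_w / 2), half_h / (image_h / 2))
    crop_w = image_w * scale
    crop_h = image_h * scale
    return (focus_x - crop_w / 2, focus_y - crop_h / 2, crop_w, crop_h)
```

For a focus point one quarter of the way across a 4000×3000 image, the horizontal constraint dominates and the resulting crop touches the left edge, corresponding to an effective 2× zoom.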
FIG. 4 is a view of an exemplary display window 400 that may be provided by an image processing system (e.g., system 100) to a user in a viewing mode for an image 430 in which the user preferences may be set to scaled-to-fit zoom with a focus point overlay 420 according to at least some embodiments. As shown in FIG. 4, for example, image 430 may be provided at scaled-to-fit zoom, such that image 430 is shown in full as large as possible within window 400 without sacrificing any portion of the image. Focus point overlay 420 may be superimposed over image 430, and a highlighted focus point 422 may indicate the focus point of image 430 utilized by a camera when capturing image 430 (e.g., imager 103 of FIG. 1). Focus point overlay 420 may be toggled on and off using, for example, focus point toggle button 210 of FIG. 2. As shown in FIG. 4, highlighted focus point 422 is clearly not positioned over the probable intended subject of image 430 (i.e., the face of the depicted dog).
- Viewing an image in this way may allow a user to quickly ascertain whether the portion of the image that is in-focus is located where an intended subject of the image is positioned. For instance, because highlighted focus point 422 is not positioned over the intended subject of image 430 (e.g., as assumed to be the face of the depicted dog), a user can reasonably delete image 430 because the subject of the image is very likely out of focus, and the user may then move on to the next image without spending the time to zoom in and observe the finer details of the deleted image. This can significantly reduce the time and effort required to review a set of images. -
FIG. 5A is a view of an exemplary display window 500 that may be provided by an image processing system (e.g., system 100) to a user in a viewing mode for a zoomed-in image 540 with smart-zoom disabled according to some of the embodiments described above. FIG. 5A may show a typical result from the use of conventional zooming techniques, in which the system may simply zoom in on the center of the image. As shown in FIG. 5A, because highlighted focus point 522 of focus point overlay 520 is off-center, and because zoomed-in image 540 may be missing part of the intended subject of the image, it can be difficult to tell whether the subject is in focus. -
FIG. 5B, on the other hand, is a view of an exemplary display window 501 that may be provided by an image processing system (e.g., system 100) to a user in a viewing mode for a zoomed-in image 541 with smart-zoom enabled according to at least some of the embodiments described above. Here, highlighted focus point 522 of focus point overlay 520 may be shown in the center of zoomed-in image 541. It is easy to tell that the utilized focus point of the captured image was correctly positioned on the intended subject of interest. As shown in FIG. 5B, for example, because image 541 is zoomed-in to the highlighted focus point 522, no time may need to be wasted panning to that position to observe the details of image 541. The high-resolution images captured by modern cameras can make panning on a zoomed-in image very difficult and time consuming. The viewing mode of display window 501 may allow a user to quickly ascertain: (1) whether the intended subject was positioned at the focus point of the image; and (2) whether the subject is actually in focus. This can significantly simplify the entire process for the user that is reviewing and editing one or more images. It is to be understood that, in some embodiments, highlighted focus point 522 and/or the remainder of overlay 520 need not be displayed in window 500 and/or window 501. -
FIG. 6 is a view of an exemplary display window 600 that may be provided by an image processing system (e.g., system 100) to a user in a viewing mode for a zoomed-in image 641 with a full-image thumbnail according to at least some of the embodiments described above. Zoomed-in image 641 may be an example of a zoomed-in image with smart-zoom enabled. Highlighted focus point 622 of focus point overlay 620 may be positioned in essentially the center of window 600 along with zoomed-in image 641. Thumbnail image 670 may also be provided to display a fully zoomed-out, but reduced-size, copy of the original version of image 641. Thumbnail image 670 may be positioned in a corner of zoomed-in image 641, or in any other suitable way with respect to zoomed-in image 641, in order to display the full context of zoomed-in image 641 without disrupting the user's view of zoomed-in image 641. In some embodiments, thumbnail image 670 may be positioned in a separate window from window 600. Thumbnail toggle button 212 of FIG. 2, for example, may be used to toggle on and off the display of thumbnail image 670. This thumbnail viewing feature can add a further element of convenience to the viewing mode shown in FIG. 6, as well as those previously described. Not only can a user quickly and easily ascertain whether the utilized focus point was located on the intended subject of the image and whether the intended subject is actually in focus, but the user can also determine whether the subject matter of the image as a whole is desirable. Alternatively, in some embodiments, thumbnail image 670 may be configured to display the smart-zoomed-in portion of image 641 and window 600 may be configured to display the original version of image 641 (e.g., image 641 fit-to-scale within window 600 and image 641 smart-zoomed within thumbnail image 670). It is to be understood that, in some embodiments, highlighted focus point 622 and/or the remainder of overlay 620 need not be displayed in window 600 and/or thumbnail 670. -
FIG. 7 is a view of an exemplary display window 700 that may be provided by an image processing system (e.g., system 100) to a user for reviewing multiple images at once in an array view mode with focus point overlays according to at least some of the embodiments. As shown in FIG. 7, for example, a user can quickly scan a number of images 730 (e.g., images 730a-730i) with respective focus overlays 720 (e.g., overlays 720a-720i) having one or more respective highlighted focus points 722 (e.g., highlighted focus points 722a-722i) and decide whether to retain or delete each one of images 730. For example, a user can determine which of images 730 of display window 700 are to be kept for further viewing and/or editing. The use of retain boxes 770 may be provided as an additional feature to provide a user with further efficiencies in reviewing a set of images. In other embodiments, rather than displaying each image 730 in the array of images of window 700 in full, the image processing system may be configured to display each image 730 in the array as smart-zoomed. It is to be understood that, in some embodiments, one or more highlighted focus points 722 and/or the remainder of one or more overlays 720 need not be displayed in window 700.
FIG. 8A is a view of an exemplary display window 800 that may be provided by an image processing system (e.g., system 100) to a user for reviewing an image 830 as scaled-to-fit with multiple highlighted focus points 822 and with smart-zoom disabled according to at least some of the embodiments disclosed herein. Scaled-to-fit image 830 may be displayed with focus point overlay 820, which may have multiple highlighted focus points 822. The number and position of each focus point of overlay 820 may be determined by metadata associated with image 830.
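As a hedged illustration of how overlay positions might be derived from such metadata (the metadata layout shown here is an assumption for the sketch; real focus-point records live in camera-specific maker notes and differ per vendor):

```python
def build_overlay(metadata, img_w, img_h, box_frac=0.04):
    """Turn focus-point metadata into pixel-space overlay boxes.

    `metadata` is assumed (hypothetically) to look like:
        {"focus_points": [{"x": 0.5, "y": 0.5, "used": True}, ...]}
    with x/y normalized to [0, 1]. Each box is centered on its focus
    point; points flagged as utilized are marked highlighted.
    """
    half_w = img_w * box_frac / 2
    half_h = img_h * box_frac / 2
    boxes = []
    for fp in metadata.get("focus_points", []):
        cx, cy = fp["x"] * img_w, fp["y"] * img_h
        boxes.append({
            "rect": (cx - half_w, cy - half_h, cx + half_w, cy + half_h),
            "highlighted": fp.get("used", False),
        })
    return boxes

meta = {"focus_points": [{"x": 0.5, "y": 0.5, "used": True},
                         {"x": 0.25, "y": 0.25}]}
print(build_overlay(meta, 4000, 3000)[0]["highlighted"])  # True
```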
FIG. 8B is a view of an exemplary display window 801 that may be provided by an image processing system (e.g., system 100) to a user for reviewing image 831 with multiple highlighted focus points 822 and smart-zoom enabled according to at least some embodiments. Smart-zoomed image 831 may be displayed with focus point overlay 820 and multiple highlighted focus points 822. A de-facto focus point 824 can be calculated as the approximate center point of highlighted focus points 822 for the purposes of choosing a smart-zoom point. Any suitable method for calculating de-facto focus point 824 may be employed (e.g., a least squares method). A de-facto smart-zoom point 824 may also be calculated by defining a maximum and/or minimum allowed distance from a focus point to an edge of window 801 (e.g., as described in reference to smart-zoom to image edge option 331 of FIG. 3 above). In some embodiments, a user may also be able to jump between highlighted focus points 822 using, for example, a button, a keyboard shortcut, or any other suitable input mechanism. A user may also be presented with the option to choose a particular highlighted focus point 822 as the de-facto smart-zoom point. It is to be understood that, in some embodiments, de-facto smart-zoom point 824 and/or highlighted focus points 822 and/or the remainder of overlay 820 need not be displayed in window 800 and/or window 801.
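One simple way to realize the de-facto point of FIG. 8B: average the utilized focus points, then clamp the result so the zoom center keeps a minimum distance from every image edge. This is only an illustrative sketch (the function name, coordinate convention, and clamping behavior are assumptions, not the patent's mandated method):

```python
def de_facto_focus_point(points, img_w, img_h, min_edge_dist=0):
    """Approximate center of several highlighted focus points.

    `points` is a list of (x, y) pixel coordinates of utilized focus
    points. The centroid is clamped so it stays at least `min_edge_dist`
    pixels from every image edge (cf. the smart-zoom-to-image-edge option).
    """
    if not points:
        raise ValueError("at least one focus point is required")
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    cx = min(max(cx, min_edge_dist), img_w - min_edge_dist)
    cy = min(max(cy, min_edge_dist), img_h - min_edge_dist)
    return cx, cy

# Three utilized focus points near the left edge of a 4000x3000 image;
# the centroid (x = 150) is pushed in to the 400-pixel margin:
print(de_facto_focus_point([(100, 1500), (200, 1400), (150, 1600)],
                           4000, 3000, min_edge_dist=400))  # (400, 1500.0)
```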
FIG. 9 is a flow chart of a process 900 for performing image zooming according to at least some embodiments. In particular, process 900 may be directed to obtaining an image with associated focus point metadata and performing a smart-zoom function based on the one or more utilized focus points indicated by the focus point metadata. An image with associated focus point metadata may be obtained at step 901. For example, an image processing system may access or otherwise obtain at least one image and focus point metadata associated with that image from any suitable image source, such as imager 103. In some embodiments, an image may be accessed from one source and the associated metadata may be accessed from another source. For example, an image and/or associated metadata may be imported from a digital camera (e.g., imager 103 of FIG. 1), downloaded from the internet, or imported from an external memory device to a system (e.g., system 100 of FIG. 1).
Next, at step 903, at least one smart-zoom point may be determined based on the metadata. For example, an image processing system may determine at least one smart-zoom point by any suitable method discussed above. For example, a smart-zoom point may be determined to coincide with a single focus point indicated by the metadata, or a smart-zoom point may be determined to have a particular relationship to several focus points indicated by the metadata. In some embodiments, a de-facto smart-zoom point may be calculated based at least on multiple focus points indicated by the focus point metadata.
At step 905, the image may be smart-zoomed based on the at least one smart-zoom point. For example, an image processing system may display within a window a zoomed portion of the image, and the center of the zoomed portion of the image may be based on the at least one smart-zoom point. In some embodiments, the center of the zoomed portion of the image may be the position of the smart-zoom point. The zoomed portion of the image may be at any suitable zoom factor. A thumbnail of the full image may be displayed along with the smart-zoomed image to a user. The system may be configured to toggle between displaying the smart-zoomed image and the full image to the user (e.g., within a particular window). In some embodiments, process 900 may be performed simultaneously or serially for multiple images, and multiple images may be displayed in an array to the user at the same time. Each of the images in the array may be displayed as smart-zoomed. In some embodiments, process 900 may also include generating an overlay for an image based on the focus point metadata associated with that image. The overlay may be displayed along with the full image and/or with the smart-zoomed image. The overlay may include a representation of at least one focus point utilized by an imager to capture the image and/or a representation of at least one potential focus point that may have been used by the imager to capture the image.
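The zoom of step 905 reduces to cutting a source rectangle around the smart-zoom point and shifting it as needed so it stays inside the image bounds. A minimal sketch of that geometry, assuming pixel coordinates and a (left, top, right, bottom) rectangle convention (both illustrative choices, not from the patent):

```python
def smart_zoom_rect(img_w, img_h, zoom_point, zoom_factor):
    """Compute the source rectangle to display for a smart-zoom.

    The crop is 1/zoom_factor of the image in each dimension, centered on
    `zoom_point` (e.g., the focus point from the metadata) but shifted so
    it never spills past the image edges.
    Returns (left, top, right, bottom) in pixel coordinates.
    """
    crop_w, crop_h = img_w / zoom_factor, img_h / zoom_factor
    left = min(max(zoom_point[0] - crop_w / 2, 0), img_w - crop_w)
    top = min(max(zoom_point[1] - crop_h / 2, 0), img_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)

# 2x zoom centered on a focus point near the top-left corner; the crop
# is pinned to the corner instead of extending past the edges:
print(smart_zoom_rect(4000, 3000, (300, 200), 2))  # (0, 0, 2000.0, 1500.0)
```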
FIG. 10 is a flow chart of a process 1000 for performing image zooming according to at least some embodiments. In particular, process 1000 may be directed to performing smart-zooming on an image based at least on comparing focus point metadata associated with the image and subject recognition data. An image with associated focus point metadata may be obtained at step 1001. For example, an image processing system may access or otherwise obtain at least one image and focus point metadata associated with that image from any suitable image source, such as imager 103. In some embodiments, an image may be accessed from one source and the associated metadata may be accessed from another source. For example, an image and/or associated metadata may be imported from a digital camera (e.g., imager 103 of FIG. 1), downloaded from the internet, or imported from an external memory device to a system (e.g., system 100 of FIG. 1).
Next, a subject recognition procedure may be performed on the image at step 1003. For example, an image processing system may be configured to recognize a particular subject within an image and its particular position within the image (e.g., by utilizing a subject recognition algorithm). In some embodiments, an image processing system may be configured to detect the position of a subject's face within a captured image (e.g., as provided by the face recognition feature available in iPhoto™ by Apple Inc. of Cupertino, Calif.). In some embodiments, an image processing system may be configured to detect a certain type of object, which may be determined by a user. For example, a user may instruct an image processing system to recognize only the faces of human subjects, or only automobiles, or only red balls, or only green balls, or any particular subject or set of particular subjects that a user may choose. Therefore, a system may be configured to recognize a particular subject that is likely to be the intended subject of an image.
At step 1005, the focus point metadata obtained at step 1001 may be compared with the subject recognition data obtained at step 1003. For example, an image processing system may compare the position of one or more focus points of the image, which may be indicated by the focus point metadata, with the position of one or more recognized subjects. Next, at step 1007, it may be determined from the comparison whether the position of at least one recognized subject matches or is close to the position of at least one focus point. Generally, if at least one of the image's focus points overlaps with or is within a particular distance from at least one recognized subject, there may be a high probability that the subject or subjects are in focus within the image. On the other hand, if a recognized subject does not overlap with and is not close to any focus point, then the recognized subject most likely is not in focus.
Based on the determination at step 1007, process 1000 may advance to either step 1009 or step 1011. For example, if it is determined at step 1007 that at least one recognized subject matches at least one focus point, then process 1000 may proceed to step 1009 and the image may be prioritized (e.g., for further viewing and/or editing). If, however, it is determined at step 1007 that no recognized subject matches any focus point, then process 1000 may advance to step 1011 and the image may be de-prioritized (e.g., from further viewing and/or editing). For example, prioritized images may be ordered for presentation to a user before or after de-prioritized images (e.g., in an array of images, as shown in FIG. 7, for example).
It is to be understood that the steps shown in each one of
processes 900 and 1000 of FIGS. 9 and 10, respectively, are merely illustrative and that existing steps may be modified or omitted, additional steps may be added, and the order of certain steps may be altered.
Moreover, the processes described with respect to FIGS. 9 and 10, as well as any other aspects of the invention, may each be implemented by software, but may also be implemented in hardware, firmware, or any combination of software, hardware, and firmware. They may each also be embodied as computer-readable code recorded on a computer-readable medium. The computer-readable medium may be any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer-readable medium include, but are not limited to, read-only memory, random-access memory, flash memory, CD-ROMs, DVDs, magnetic tape, and optical data storage devices (e.g., memory 109 that may be accessible by processor 101 of FIG. 1). The computer-readable medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.
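The comparison and prioritization of process 1000 (steps 1005 through 1011) might be sketched as follows; the point and bounding-box representation and every name used here are assumptions for illustration only:

```python
def subject_in_focus(focus_points, subject_boxes, tolerance=0):
    """True if any focus point falls inside (or within `tolerance` pixels
    of) any recognized subject's bounding box (left, top, right, bottom)."""
    for fx, fy in focus_points:
        for (l, t, r, b) in subject_boxes:
            if (l - tolerance) <= fx <= (r + tolerance) and \
               (t - tolerance) <= fy <= (b + tolerance):
                return True
    return False

def prioritize(images):
    """Order images with an in-focus recognized subject first (step 1009),
    the rest after (step 1011). Each image is a dict with 'focus_points'
    and 'subjects' keys -- a hypothetical layout for this sketch."""
    return sorted(images,
                  key=lambda im: not subject_in_focus(im["focus_points"],
                                                      im["subjects"]))

imgs = [
    {"name": "miss", "focus_points": [(10, 10)], "subjects": [(100, 100, 200, 200)]},
    {"name": "hit", "focus_points": [(150, 150)], "subjects": [(100, 100, 200, 200)]},
]
print([im["name"] for im in prioritize(imgs)])  # ['hit', 'miss']
```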
- The above-described embodiments of the invention are presented for purposes of illustration and not of limitation.
Claims (28)
1. An image processing system comprising:
a display; and
a processor coupled to the display and configured to:
process metadata associated with an image; and
manipulate the image on the display at least based on the processed metadata.
2. The system of claim 1, wherein the metadata comprises focus point metadata.
3. The system of claim 2, wherein the manipulation comprises a smart-zoom based on the focus point metadata.
4. The system of claim 2, wherein the focus point metadata comprises information about one focus point.
5. The system of claim 2, wherein the focus point metadata comprises information about a plurality of focus points.
6. The system of claim 2, wherein the manipulation comprises superimposing a focus point overlay on the image on the display based on the focus point metadata.
7. The system of claim 1, further comprising an imager configured to capture the image and generate the metadata.
8. The system of claim 7, wherein each one of the display, processor, and imager is part of a camera.
9. A method for processing images comprising:
obtaining focus point metadata for an image with a processor; and
performing a smart-zoom function on the image on a display coupled to the processor based on at least the obtained focus point metadata.
10. The method of claim 9, wherein the focus point metadata comprises a single focus point.
11. The method of claim 10, wherein the smart-zoom function comprises:
determining a smart-zoom point based at least on the single focus point; and
providing a zoomed-in portion of the image about the smart-zoom point on the display.
12. The method of claim 11, wherein the smart-zoom point coincides with the single focus point.
13. The method of claim 9, wherein the focus point metadata comprises a plurality of focus points.
14. The method of claim 13, wherein the smart-zoom function comprises:
determining a smart-zoom point based at least on the plurality of focus points; and
providing a zoomed-in portion of the image about the smart-zoom point on the display.
15. The method of claim 13, wherein the smart-zoom function comprises determining at least one smart-zoom point that coincides with at least one of the plurality of focus points.
16. A method for manipulating images comprising:
obtaining focus point metadata for an image with a processor;
recognizing a subject in the image with the processor; and
comparing the recognized subject with the focus point metadata using the processor.
17. The method of claim 16, further comprising determining, based at least on the comparing, whether to prioritize the image.
18. The method of claim 16, wherein:
the focus point metadata is indicative of the location of a focus point of the image;
the recognizing comprises determining a location of the recognized subject in the image; and
the comparing comprises comparing the location of the recognized subject with the location of the focus point of the image.
19. A user-interface for a system that processes images, comprising user-input options configured to set various parameters relating to a smart-zoom function that performs a zoom operation on an image based on focus point metadata associated with the image, the parameters comprising at least one of:
a parameter for enabling or disabling the smart-zoom function;
a parameter for choosing a particular smart-zoom process; and
a parameter for toggling between automatic and manual smart-zoom modes.
20. The user-interface of claim 19, wherein the parameters further comprise a parameter for setting zoom-in and zoom-out options.
21. The user-interface of claim 19, wherein the parameters further comprise a parameter for toggling a smart-zoom option.
22. The user-interface of claim 19, wherein the parameters further comprise a parameter for toggling an automatic-zoom option.
23. The user-interface of claim 19, wherein the parameters further comprise a parameter for toggling a focus point overlay option.
24. The user-interface of claim 19, wherein the parameters further comprise a parameter for toggling a thumbnail option.
25. The user-interface of claim 19, wherein the parameters further comprise a parameter for toggling an image array-view option.
26. Computer-readable media for controlling an electronic device, comprising computer-readable code recorded thereon for:
obtaining an image and focus point metadata associated with the image; and
providing a zoomed-in portion of the image on a display, wherein the zoomed-in portion of the image is centered on a focus point position provided by the focus point metadata.
27. A method for processing image data comprising:
obtaining an image and focus point metadata associated with the image; and
providing a zoomed-in portion of the image on a display, wherein the zoomed-in portion of the image is centered on a location of the image determined by the focus point metadata.
28. The method of claim 27, wherein the focus point metadata comprises information related to at least two focus points utilized to capture the image, and wherein the method further comprises determining the location based on the at least two focus points.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/081,277 US20120257072A1 (en) | 2011-04-06 | 2011-04-06 | Systems, methods, and computer-readable media for manipulating images using metadata |
US14/065,149 US9001230B2 (en) | 2011-04-06 | 2013-10-28 | Systems, methods, and computer-readable media for manipulating images using metadata |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/081,277 US20120257072A1 (en) | 2011-04-06 | 2011-04-06 | Systems, methods, and computer-readable media for manipulating images using metadata |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/065,149 Continuation US9001230B2 (en) | 2011-04-06 | 2013-10-28 | Systems, methods, and computer-readable media for manipulating images using metadata |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120257072A1 true US20120257072A1 (en) | 2012-10-11 |
Family
ID=46965816
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/081,277 Abandoned US20120257072A1 (en) | 2011-04-06 | 2011-04-06 | Systems, methods, and computer-readable media for manipulating images using metadata |
US14/065,149 Active US9001230B2 (en) | 2011-04-06 | 2013-10-28 | Systems, methods, and computer-readable media for manipulating images using metadata |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/065,149 Active US9001230B2 (en) | 2011-04-06 | 2013-10-28 | Systems, methods, and computer-readable media for manipulating images using metadata |
Country Status (1)
Country | Link |
---|---|
US (2) | US20120257072A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120147031A1 (en) * | 2010-12-13 | 2012-06-14 | Microsoft Corporation | Response to user input based on declarative mappings |
US20130093791A1 (en) * | 2011-10-13 | 2013-04-18 | Microsoft Corporation | Touchscreen selection visual feedback |
US20130311941A1 (en) * | 2012-05-18 | 2013-11-21 | Research In Motion Limited | Systems and Methods to Manage Zooming |
US9001230B2 (en) | 2011-04-06 | 2015-04-07 | Apple Inc. | Systems, methods, and computer-readable media for manipulating images using metadata |
US20150288887A1 (en) * | 2012-03-16 | 2015-10-08 | Katsuya Yamamoto | Imaging device and image processing method |
EP2961155A1 (en) * | 2014-06-24 | 2015-12-30 | Nokia Technologies Oy | A method and an apparatus for image capturing and viewing |
US20160081554A1 (en) * | 2013-02-27 | 2016-03-24 | DermSpectra LLC | Viewing grid and image display for viewing and recording skin images |
US9760954B2 (en) | 2014-01-16 | 2017-09-12 | International Business Machines Corporation | Visual focal point composition for media capture based on a target recipient audience |
US10019140B1 (en) * | 2014-06-26 | 2018-07-10 | Amazon Technologies, Inc. | One-handed zoom |
US20190216327A1 (en) * | 2014-09-19 | 2019-07-18 | DermSpectra LLC | Viewing grid and image display for viewing and recording skin images |
GB2572435A (en) * | 2018-03-29 | 2019-10-02 | Samsung Electronics Co Ltd | Manipulating a face in an image |
US11809929B2 (en) * | 2021-08-27 | 2023-11-07 | Memjet Technology Limited | Method of printing onto variable-sized box substrates |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2009251137B2 (en) * | 2009-12-23 | 2013-04-11 | Canon Kabushiki Kaisha | Method for Arranging Images in electronic documents on small devices |
US9360942B2 (en) * | 2012-05-21 | 2016-06-07 | Door Number 3 | Cursor driven interface for layer control |
US20140028726A1 (en) * | 2012-07-30 | 2014-01-30 | Nvidia Corporation | Wireless data transfer based spanning, extending and/or cloning of display data across a plurality of computing devices |
US10235587B2 (en) | 2014-03-04 | 2019-03-19 | Samsung Electronics Co., Ltd. | Method and system for optimizing an image capturing boundary in a proposed image |
US10592762B2 (en) | 2017-02-10 | 2020-03-17 | Smugmug, Inc. | Metadata based interest point detection |
US10607122B2 (en) * | 2017-12-04 | 2020-03-31 | International Business Machines Corporation | Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review |
US10671896B2 (en) | 2017-12-04 | 2020-06-02 | International Business Machines Corporation | Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review |
CN110602401A (en) * | 2019-09-17 | 2019-12-20 | 维沃移动通信有限公司 | Photographing method and terminal |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050134719A1 (en) * | 2003-12-23 | 2005-06-23 | Eastman Kodak Company | Display device with automatic area of importance display |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3977810B2 (en) | 2001-11-27 | 2007-09-19 | フレニ・ブレンボ エス・ピー・エー | Dual servo drum brake adjuster with internal shoe |
EP1900196B1 (en) | 2005-05-11 | 2012-03-21 | FUJIFILM Corporation | Image capturing apparatus, image capturing method and program |
JP4683339B2 (en) | 2006-07-25 | 2011-05-18 | 富士フイルム株式会社 | Image trimming device |
US8031914B2 (en) | 2006-10-11 | 2011-10-04 | Hewlett-Packard Development Company, L.P. | Face-based image clustering |
JP5164692B2 (en) | 2008-06-27 | 2013-03-21 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
US20120257072A1 (en) | 2011-04-06 | 2012-10-11 | Apple Inc. | Systems, methods, and computer-readable media for manipulating images using metadata |
US8854491B2 (en) * | 2011-06-05 | 2014-10-07 | Apple Inc. | Metadata-assisted image filters |
US9165017B2 (en) | 2011-09-29 | 2015-10-20 | Google Inc. | Retrieving images |
US9025836B2 (en) | 2011-10-28 | 2015-05-05 | Intellectual Ventures Fund 83 Llc | Image recomposition from face detection and facial features |
CN103988503B (en) | 2011-12-12 | 2018-11-09 | 英特尔公司 | Use the scene cut of pre-capture image motion |
US9077888B2 (en) | 2011-12-29 | 2015-07-07 | Verizon Patent And Licensing Inc. | Method and system for establishing autofocus based on priority |
US8824750B2 (en) | 2012-03-19 | 2014-09-02 | Next Level Security Systems, Inc. | Distributive facial matching and notification system |
TWI519167B (en) | 2012-04-23 | 2016-01-21 | 廣達電腦股份有限公司 | System for applying metadata for object recognition and event representation |
KR101990089B1 (en) * | 2012-10-16 | 2019-06-17 | 삼성전자주식회사 | Method for creating for thumbnail and image an electronic device thereof |
-
2011
- 2011-04-06 US US13/081,277 patent/US20120257072A1/en not_active Abandoned
-
2013
- 2013-10-28 US US14/065,149 patent/US9001230B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050134719A1 (en) * | 2003-12-23 | 2005-06-23 | Eastman Kodak Company | Display device with automatic area of importance display |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9152395B2 (en) * | 2010-12-13 | 2015-10-06 | Microsoft Technology Licensing, Llc | Response to user input based on declarative mappings |
US20120147031A1 (en) * | 2010-12-13 | 2012-06-14 | Microsoft Corporation | Response to user input based on declarative mappings |
US9001230B2 (en) | 2011-04-06 | 2015-04-07 | Apple Inc. | Systems, methods, and computer-readable media for manipulating images using metadata |
US20130093791A1 (en) * | 2011-10-13 | 2013-04-18 | Microsoft Corporation | Touchscreen selection visual feedback |
US8988467B2 (en) * | 2011-10-13 | 2015-03-24 | Microsoft Technology Licensing, Llc | Touchscreen selection visual feedback |
US20150288887A1 (en) * | 2012-03-16 | 2015-10-08 | Katsuya Yamamoto | Imaging device and image processing method |
US9667874B2 (en) * | 2012-03-16 | 2017-05-30 | Ricoh Company, Ltd. | Imaging device and image processing method with both an optical zoom and a digital zoom |
US9435801B2 (en) * | 2012-05-18 | 2016-09-06 | Blackberry Limited | Systems and methods to manage zooming |
US20130311941A1 (en) * | 2012-05-18 | 2013-11-21 | Research In Motion Limited | Systems and Methods to Manage Zooming |
US20160081554A1 (en) * | 2013-02-27 | 2016-03-24 | DermSpectra LLC | Viewing grid and image display for viewing and recording skin images |
US10238293B2 (en) * | 2013-02-27 | 2019-03-26 | DermSpectra LLC | Viewing grid and image display for viewing and recording skin images |
US9760954B2 (en) | 2014-01-16 | 2017-09-12 | International Business Machines Corporation | Visual focal point composition for media capture based on a target recipient audience |
US11455693B2 (en) | 2014-01-16 | 2022-09-27 | International Business Machines Corporation | Visual focal point composition for media capture based on a target recipient audience |
EP2961155A1 (en) * | 2014-06-24 | 2015-12-30 | Nokia Technologies Oy | A method and an apparatus for image capturing and viewing |
US10019140B1 (en) * | 2014-06-26 | 2018-07-10 | Amazon Technologies, Inc. | One-handed zoom |
US20190216327A1 (en) * | 2014-09-19 | 2019-07-18 | DermSpectra LLC | Viewing grid and image display for viewing and recording skin images |
GB2572435A (en) * | 2018-03-29 | 2019-10-02 | Samsung Electronics Co Ltd | Manipulating a face in an image |
GB2572435B (en) * | 2018-03-29 | 2022-10-05 | Samsung Electronics Co Ltd | Manipulating a face in an image |
US11809929B2 (en) * | 2021-08-27 | 2023-11-07 | Memjet Technology Limited | Method of printing onto variable-sized box substrates |
Also Published As
Publication number | Publication date |
---|---|
US20140118395A1 (en) | 2014-05-01 |
US9001230B2 (en) | 2015-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9001230B2 (en) | Systems, methods, and computer-readable media for manipulating images using metadata | |
CN107566732B (en) | Method for controlling camera of device and device thereof | |
KR102140882B1 (en) | Dual-aperture zoom digital camera with automatic adjustable tele field of view | |
US8259208B2 (en) | Method and apparatus for performing touch-based adjustments within imaging devices | |
US20050134719A1 (en) | Display device with automatic area of importance display | |
US7248294B2 (en) | Intelligent feature selection and pan zoom control | |
US8520116B2 (en) | Photographing apparatus and method | |
US20110243397A1 (en) | Searching digital image collections using face recognition | |
US20100289924A1 (en) | Imager that adds visual effects to an image | |
CN101998048A (en) | Digital image signal processing method, medium for recording the method, digital image signal pocessing apparatus | |
US20100189365A1 (en) | Imaging apparatus, retrieval method, and program | |
JP2006140990A (en) | Image display apparatus, camera, display methods of image display apparatus and camera | |
KR101930460B1 (en) | Photographing apparatusand method for controlling thereof | |
WO2006116744A1 (en) | Method and apparatus for incorporating iris color in red-eye correction | |
US20140125831A1 (en) | Electronic device and related method and machine readable storage medium | |
US7545428B2 (en) | System and method for managing digital images | |
CN105607825B (en) | Method and apparatus for image processing | |
US9792012B2 (en) | Method relating to digital images | |
JP2009505261A (en) | Method and apparatus for accessing data using symbolic representation space | |
EP3151243B1 (en) | Accessing a video segment | |
JP2011142574A (en) | Digital camera, image processor, and image processing program | |
JP2010273166A (en) | Photographic condition control device, camera and program | |
KR20110099498A (en) | Digital image processing apparatus and digital image processing method | |
US20050102609A1 (en) | Image processing apparatus, image processing method, and image processing program | |
US9736380B2 (en) | Display control apparatus, control method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: APPLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JIRMAN, STAN;REEL/FRAME:026087/0341 Effective date: 20110405 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |